Skip to main content

✍️ Prompt Engineering

Asking questions the right way

The Art of Asking Questions

When talking to a friend, you naturally adjust how you ask questions based on what you need. You give context, specify format, and clarify expectations.

Prompt engineering is the same skill applied to AI.

It's the practice of crafting inputs (prompts) to get useful, accurate outputs from language models. The same model can give vastly different responses depending on how you ask.

A vague question gets a vague answer. A well-structured prompt often gets you closer to what you need.


Why Prompts Matter

The same LLM with different prompts:

Vague Prompt:

Tell me about Python.
→ "Python is a programming language..." (generic intro)

Specific Prompt:

Write three code examples showing Python list comprehensions,
progressing from simple to advanced. Add comments explaining each.
→ [Exactly what you asked for]

The model doesn't change. Your prompt is the lever.


Core Prompting Techniques

1. Be Specific and Explicit

Bad:  "Summarize this article."

Good: "Summarize this article in 3 bullet points, each under 20 words,
       focusing on the key business implications."

2. Provide Examples (Few-Shot Prompting)

Convert these sentences to formal English:

Input: "gonna grab some food"
Output: "I am going to get something to eat."

Input: "wanna hang out later"
Output: "Would you like to spend time together later?"

Input: "gotta run"
Output:

The model learns the pattern from your examples.

3. Chain of Thought (CoT)

Ask the model to explain its reasoning step by step:

Bad: "What is 17 * 24?"

Good: "What is 17 * 24? Think through this step by step."
→ "First, 17 * 20 = 340. Then 17 * 4 = 68. So 340 + 68 = 408."

CoT dramatically improves accuracy on complex reasoning tasks.

4. Role Assignment

"You are a senior software engineer reviewing code for security issues.
Analyze this code and list potential vulnerabilities:"

Setting a role primes the model to respond from that perspective.

5. Output Format Specification

"Generate a list of 5 startup ideas.
Return as JSON with fields: name, description, target_market."
[
  {
    "name": "MealMate",
    "description": "AI-powered meal planning based on dietary restrictions",
    "target_market": "Health-conscious professionals"
  }
]

Advanced Techniques

Structured Prompts

Break complex instructions into clear sections:

## Task
Analyze the following customer review.

## Context
This is for an e-commerce electronics store.

## Output Format
1. Sentiment (positive/negative/neutral)
2. Key issues mentioned (bullet list)
3. Suggested response (2-3 sentences)

## Review
"{review_text}"

Self-Consistency

Run the same prompt multiple times and take the majority answer. Improves accuracy on reasoning tasks.

Prompt Chaining

Break complex tasks into multiple steps:

Step 1: "Extract the main claims from this article"
→ [claims]

Step 2: "For each claim, rate how well-supported it is"
→ [ratings]

Step 3: "Write a summary highlighting unsupported claims"
→ [final output]

Temperature Control

  • Lower temperature: More deterministic, consistent outputs
  • Higher temperature: More creative, varied outputs

Common Patterns by Task

TaskPrompt Pattern
Classification"Classify into: [A, B, C]. Respond with just the category."
Extraction"Extract [fields] from text. Return as JSON."
Summarization"Summarize in [N] sentences, focusing on [aspect]."
Code generation"Write a [language] function that [does X]. Include error handling."
Translation"Translate to [language]. Maintain technical terminology."
Rewriting"Rewrite in [style/tone]. Keep the meaning identical."

Common Mistakes and Gotchas

Being Too Vague

Bad:  "Make this better."
Good: "Improve this paragraph by making sentences shorter,
       removing jargon, and adding a concrete example."

Overloading the Prompt

Don't ask for 10 things at once. Models handle focused prompts better.

Bad:  "Write a blog post about X, optimize for SEO, add jokes,
       include statistics, make it formal but casual..."

Good: "Write a 500-word blog post about X for a technical audience."
      [Then follow-up prompts for refinement]

Ignoring Context Limits

Long prompts can exceed the context window. The model forgets content that gets truncated. Keep prompts focused or use chunking strategies.

Not Iterating

The first prompt often needs refinement. Refine based on outputs:

Attempt 1: Too verbose → Add "Be concise"
Attempt 2: Missing format → Add "Return as JSON"
Attempt 3: Success

Expecting Consistency

LLMs are probabilistic. The same prompt can give different outputs. For production, set low temperature and validate outputs programmatically.


Prompt Templates

For Code Review

Review this [language] code for:
1. Bugs and logical errors
2. Security vulnerabilities
3. Performance issues
4. Readability improvements

Code:
```[code]```

Return findings as a numbered list with severity (high/medium/low).

For Data Extraction

Extract the following from the text:
- Person names
- Company names
- Dates
- Dollar amounts

Return as JSON. Use null for missing values.

Text: "{text}"

For Explanation

Explain [concept] as if I'm a [audience].
- Use at least one analogy
- Avoid jargon unless you define it
- Keep it under [N] sentences

FAQ

Q: Is prompt engineering a temporary skill?

While models are improving at understanding user intent, careful prompting will likely remain valuable. Even strong models often perform better with clear, structured inputs.

Q: How do I test prompts?

Create a test set of inputs with expected outputs. Run your prompt against all inputs and measure success rate. Iterate on failures.

Q: Should I use system prompts or user prompts?

System prompts set persistent context and behavior. User prompts contain the actual request. Use system prompts for role and rules, user prompts for the specific task.

Q: Can I prompt for structured output reliably?

Yes, with clear format instructions and examples. For critical applications, use JSON mode (if available) or validate and retry on malformed outputs.

Q: What is prompt injection?

When user input tricks the model into ignoring your instructions. Mitigate by separating instructions from user content, input validation, and output filtering.

Q: How long should prompts be?

As short as possible while being complete. Extra words waste tokens and can confuse the model. But don't sacrifice clarity for brevity.


Summary

Prompt engineering is the skill of communicating effectively with language models. It's not about tricks - it's about clarity, structure, and understanding how models process text.

Key Points:

  • Be specific about what you want
  • Provide examples for complex patterns
  • Use chain of thought for reasoning tasks
  • Assign roles to guide perspective
  • Specify output format explicitly
  • Iterate and refine based on outputs
  • Keep prompts focused, not overloaded

The difference between a mediocre and excellent LLM application often comes down to prompt quality. It's one of the highest-leverage skills in the AI age.

Related Concepts

Leave a Comment

Comments (0)

Be the first to comment on this concept.

Comments are approved automatically.