You paste a paragraph into ChatGPT and get back something generic. You rephrase slightly, and suddenly the output is twice as useful. That gap between a mediocre prompt and a good one is not luck; it's technique.
I've been writing prompts professionally for over a year across Claude, GPT-4o, GPT-5, and Gemini. The techniques below are the ones I use every day. They work across every major model, and each comes with a before-and-after so you can see the actual difference.
1. Role Assignment
Tell the model who it is before telling it what to do. This shapes vocabulary, depth, and perspective.
Weak prompt:
Explain how DNS works.
Better prompt:
You are a senior network engineer explaining DNS to a junior developer who understands HTTP but has never configured DNS records. Explain how DNS resolution works, including recursive vs. iterative queries.
The second prompt produces output that hits the right level of detail. Role assignment is most useful for open-ended and creative tasks. For simple factual lookups, it makes less difference.
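In code, role assignment is just string composition. A minimal sketch (the helper name is mine, not any library's) makes the pattern reusable:

```python
# Hypothetical helper: prepend a role description to any task prompt.
def with_role(role: str, task: str) -> str:
    return f"{role}\n\n{task}"

prompt = with_role(
    "You are a senior network engineer explaining DNS to a junior developer "
    "who understands HTTP but has never configured DNS records.",
    "Explain how DNS resolution works, including recursive vs. iterative queries.",
)
```

The same role string can also go into a system message instead of the user prompt, which is where it usually lives in production (see the system prompts section).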
2. Chain-of-Thought Prompting
Adding “think step by step” or “walk through your reasoning” forces the model to show its work — and showing work leads to better answers, especially for math, logic, and multi-step analysis.
Weak prompt:
A store has 45 items. 30% are on sale. Each sale item is discounted 20% from $15. What’s the total discount?
Better prompt:
A store has 45 items. 30% are on sale. Each sale item is discounted 20% from $15. What’s the total discount? Think step by step.
With chain-of-thought, the model works through the arithmetic: 45 x 0.30 = 13.5, which surfaces an ambiguity in the problem itself (you can't have half an item on sale, so it's 13 or 14 items), then $15 x 0.20 = $3 discount per item, then multiplies to get $39 or $42 depending on the rounding. Without the instruction, models frequently skip steps and produce wrong totals.
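The arithmetic in that worked example is easy to verify directly, and doing so shows exactly where the ambiguity lives:

```python
# Checking the worked example by hand. 30% of 45 is not a whole number,
# so the final answer depends on how you round the item count.
items_on_sale = 45 * 0.30          # 13.5 -- not a whole number of items
discount_per_item = 15 * 0.20      # $3.00 off each sale item

total_if_13 = 13 * discount_per_item   # $39.00
total_if_14 = 14 * discount_per_item   # $42.00
```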
One caveat: Skip explicit chain-of-thought instructions when using reasoning models like OpenAI’s o-series or Claude’s extended thinking mode. They already reason internally — adding “think step by step” is redundant and can actually slow them down.
3. Few-Shot Examples
Instead of describing what you want, show the model 2-3 examples of the output format. This is the single highest-ROI technique I use.
Without examples:
Summarize this customer review in one sentence with a sentiment label.
With few-shot examples:
Summarize each customer review in one sentence with a sentiment label.
Review: “Arrived fast, great packaging, but the color was slightly off from the photo.” Summary: Fast shipping and solid packaging, though the color didn’t match the listing. | Mixed
Review: “Broke after two days. Complete waste of money.” Summary: Product failed within two days of purchase. | Negative
Review: “Best purchase I’ve made this year. Using it every single day.” Summary: [your review here]
The model matches the pattern — sentence length, tone, label format — far more reliably than when you describe the format in words. For Claude specifically, wrap examples in <example> tags for even better results.
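Few-shot prompts are mechanical enough to generate programmatically. A sketch of a builder (the function and its signature are my own, not a library API) with an optional toggle for Claude-style `<example>` tags:

```python
# Hypothetical helper that assembles a few-shot prompt from example pairs.
# The <example> tags are optional; plain text blocks work for other models.
def build_few_shot(instruction: str, examples: list[tuple[str, str]],
                   new_input: str, use_xml: bool = False) -> str:
    parts = [instruction]
    for review, labeled_summary in examples:
        block = f'Review: "{review}"\nSummary: {labeled_summary}'
        if use_xml:
            block = f"<example>\n{block}\n</example>"
        parts.append(block)
    # End with the new input and a trailing "Summary:" so the model
    # continues the established pattern.
    parts.append(f'Review: "{new_input}"\nSummary:')
    return "\n\n".join(parts)

prompt = build_few_shot(
    "Summarize each customer review in one sentence with a sentiment label.",
    [("Broke after two days. Complete waste of money.",
      "Product failed within two days of purchase. | Negative")],
    "Best purchase I've made this year. Using it every single day.",
)
```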
4. System Prompts for Consistency
System prompts are instructions set before any user message. They control tone, constraints, and behavior across an entire conversation. Nearly every production AI application relies on them.
Example system prompt:
You are a technical documentation writer. Use active voice. Keep paragraphs under 4 sentences. Include one code example per concept. Never use phrases like “it’s important to note” or “in order to.” If you’re unsure about a fact, say so explicitly rather than guessing.
System prompts are where you enforce consistency. Without one, you’re relying on the model’s defaults — which shift between sessions, models, and providers. With one, every response follows your rules.
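In an API request, the system prompt is just the first entry in the messages list. A sketch of the request body, following the common OpenAI-style chat schema (the model name is a placeholder and no network call is made here):

```python
# Every request reuses the same system prompt, so tone and constraints
# stay consistent across the whole application.
SYSTEM_PROMPT = (
    "You are a technical documentation writer. Use active voice. "
    "Keep paragraphs under 4 sentences. If you're unsure about a fact, "
    "say so explicitly rather than guessing."
)

def make_request(user_message: str) -> dict:
    """Build a chat request body with the shared system prompt."""
    return {
        "model": "your-model-here",  # placeholder, not a real model ID
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    }

request = make_request("Document the retry behavior of our upload endpoint.")
```

Anthropic's API passes the system prompt as a separate `system` parameter rather than a message role, but the idea is identical.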
Availability note: System prompts work in the API for all major providers. In Claude Pro ($20/month), you set them at the project level via custom instructions. In ChatGPT Plus ($20/month), you use Custom Instructions or a custom GPT's configuration.
For ready-to-use system prompt templates that force complete outputs, see these advanced system prompts with copy-paste templates.
5. Structured Output Formatting
When you need data back in a specific format, specify the format explicitly. Models are surprisingly good at producing clean JSON, Markdown tables, and CSV — but only when asked.
Weak prompt:
List the pros and cons of React vs. Vue.
Better prompt:
Compare React and Vue across these dimensions: learning curve, ecosystem size, performance, TypeScript support, and job market demand. Return your answer as a Markdown table with columns: Dimension | React | Vue | Winner.
Structured output eliminates the “wall of text” problem. It also makes AI output directly usable in downstream workflows — paste a JSON response into your code, drop a table into a doc.
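If you request JSON, validate it before anything downstream consumes it; models sometimes wrap JSON in a code fence even when asked not to. A defensive parsing sketch (the `raw` string below is a stand-in for a model response, not real API output):

```python
import json

def parse_model_json(raw: str) -> dict:
    """Strip a common ```json ... ``` fence wrapper, then parse.

    Raises json.JSONDecodeError if the payload still isn't valid JSON,
    which is your signal to retry the request.
    """
    cleaned = raw.strip()
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")           # drop leading/trailing backticks
        cleaned = cleaned.removeprefix("json").strip()
    return json.loads(cleaned)

raw = '```json\n{"framework": "React", "learning_curve": "moderate"}\n```'
data = parse_model_json(raw)
```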
6. Negative Prompting: Tell the Model What NOT to Do
Models respond well to constraints. Telling them what to avoid is often more effective than describing what you want.
Examples that work:
- “Do not include disclaimers or caveats unless they are factually necessary.”
- “Do not use bullet points. Write in full paragraphs.”
- “Do not start your response with ‘Sure!’ or ‘Great question!’”
- “Avoid jargon. Write for someone with no technical background.”
Negative prompting is especially useful for trimming the filler that models add by default. Combine it with your system prompt for persistent effect.
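Negative constraints also become enforceable if you check outputs after generation and retry or post-edit when a banned phrase slips through. A minimal checker (the banned list mirrors the examples above; the helper is illustrative, not a library function):

```python
# Phrases we told the model to avoid; extend to match your system prompt.
BANNED = ["it's important to note", "sure!", "great question!"]

def violations(text: str) -> list[str]:
    """Return every banned phrase that appears in the output."""
    lowered = text.lower()
    return [phrase for phrase in BANNED if phrase in lowered]

bad = violations("Sure! Here's the answer.")       # catches the filler opener
ok = violations("DNS maps names to IP addresses.") # clean output passes
```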
7. XML Tags for Claude (and Why Structure Matters)
Claude responds particularly well to XML tags for organizing prompt sections. This isn’t a gimmick — it measurably improves output quality by preventing the model from confusing instructions with context.
<instructions>
Analyze the following document and extract all action items.
Return each action item with an owner and deadline if mentioned.
</instructions>
<document>
[paste your document here]
</document>
<output_format>
- Action item: [description]
Owner: [name or "unassigned"]
Deadline: [date or "none specified"]
</output_format>
Tag names are not magic — use whatever makes sense (<context>, <rules>, <examples>). The value is in the separation. For prompts longer than a few sentences, XML tags consistently outperform unstructured text for Claude.
For other models, Markdown headers (## Instructions, ## Context) serve a similar structural purpose, though XML tags work well across providers too.
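Since tag names are arbitrary, a tiny helper (mine, not any SDK's) keeps longer prompts tidy and makes the structure hard to get wrong:

```python
# Hypothetical helper that wraps prompt sections in XML-style tags,
# as in the Claude example above.
def tag(name: str, content: str) -> str:
    return f"<{name}>\n{content.strip()}\n</{name}>"

prompt = "\n\n".join([
    tag("instructions",
        "Analyze the following document and extract all action items."),
    tag("document", "[paste your document here]"),
    tag("output_format",
        '- Action item: [description]\n  Owner: [name or "unassigned"]'),
])
```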
8. Temperature: When to Adjust It
Temperature controls randomness. Lower values produce more predictable output; higher values produce more creative and varied responses.
| Temperature | Best For |
|---|---|
| 0 - 0.3 | Data extraction, classification, factual Q&A, code generation |
| 0.4 - 0.7 | General writing, analysis, summarization |
| 0.8 - 1.0+ | Creative writing, brainstorming, generating varied options |
In practice: Most API users should start at the default (typically 0.7-1.0) and only lower it when they need deterministic output. If you’re extracting structured data or writing code, drop to 0. If you’re brainstorming taglines, push it higher.
Temperature is an API parameter — it’s not available in most chat interfaces. But understanding it helps you debug inconsistent output: if the same prompt gives wildly different results each time, the temperature is too high for your use case.
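Under the hood, temperature divides the model's token scores (logits) before the softmax, so low values sharpen the distribution and high values flatten it. This toy demo shows the scaling step in isolation; real decoders add more machinery, and APIs typically treat temperature 0 as greedy argmax rather than a literal division by zero:

```python
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    """Convert logits to probabilities, scaled by temperature (must be > 0)."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]                     # scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2) # top token dominates
hot = softmax_with_temperature(logits, 1.5)  # probability spreads out
```

With these toy numbers, the top token gets nearly all the probability at 0.2 but only around half at 1.5, which is the "same prompt, wildly different results" effect in miniature.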
What Actually Matters in This Prompt Engineering Guide
Every prompt engineering technique here reduces ambiguity. The less the model has to guess about your intent, format, and constraints, the better the output.
Start with these three if you’re new: role assignment, few-shot examples, and negative prompting. They take 30 seconds to add to any prompt and produce immediately better results.
If you’re building AI into a product or workflow, add system prompts and structured output formatting. They turn one-off prompt tricks into repeatable, production-grade patterns.
For model-specific tips, check out Claude Pro tips and features — many of the techniques above work even better when paired with Projects and custom system prompts.