What Makes Claude Different (And How to Write Prompts for It)
Master Claude's literal instruction-following, Extended Thinking, and XML-based prompting style.
Claude threw me off when I first switched from ChatGPT. I'd write prompts that worked fine in GPT, and Claude would give me... exactly what I asked for. Nothing more. At first I thought it was broken. Turns out, that's the point.
Claude from Anthropic works differently. Claude Opus 4.5 scored 82% on SWE-bench - the best coding result in the industry at the time of writing. But to get that kind of performance, you need to understand how it thinks.
Here's what I figured out.
The Core Difference
Claude does exactly what you ask. Literally. Not more, not less.
With ChatGPT, if you ask for a summary, it might add suggestions, caveats, related ideas. Claude won't. If you didn't ask for suggestions, you're not getting suggestions.
This sounds limiting, but it's actually powerful. You get predictable, controllable outputs. The catch? You have to be explicit about everything you want.
How Claude Is Built Different
It Takes Instructions Literally
Ask for a list of 5 items and you get exactly 5 items. Ask for analysis without recommendations - no recommendations. This isn't a bug; it's the design.
What this means in practice:
- If you want examples - ask for them
- If you want a detailed response - say so
- If you want initiative - explicitly say "Feel free to add relevant suggestions"
Extended Thinking Replaces Manual Chain-of-Thought
Claude has built-in reasoning. When you enable Extended Thinking, the model "thinks" internally before responding.
Important: if you're using Extended Thinking, don't add stuff like "think step by step." It's already doing that. Adding it manually can actually make things worse.
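If you're calling the API directly, Extended Thinking is a request parameter, not a prompt trick. Here's a minimal sketch with the Anthropic Python SDK - the model ID and token budget are placeholders, so check the current docs for real values:

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-opus-4-5",  # placeholder - substitute a current model ID
    max_tokens=4096,          # must exceed the thinking budget
    thinking={
        "type": "enabled",
        "budget_tokens": 2048,  # tokens the model may spend reasoning before answering
    },
    messages=[
        {"role": "user", "content": "Evaluate the trade-offs of sharding this database by region."},
    ],
)

# The response mixes "thinking" blocks with the final "text" blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)
```

Notice there's no "think step by step" anywhere in the prompt - the thinking budget does that job.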
XML Tags Are the Gold Standard
Anthropic's official documentation explicitly recommends XML tags for structuring prompts. Not Markdown, not plain text - XML.
Prefilling Controls Format
This is unique to Claude. You can pre-fill the start of the response to force a specific format.
{"role": "assistant", "content": "{"}
This makes Claude continue in JSON format. You can do similar things with other formats.
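In the Messages API this is just a trailing assistant turn: you write the first characters of the answer, and Claude continues from there. A minimal sketch (model ID is a placeholder):

```python
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-opus-4-5",  # placeholder - substitute a current model ID
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Summarize the top three churn risks as a JSON object."},
        # Prefill: the conversation ends mid-answer, so Claude continues after the brace.
        {"role": "assistant", "content": "{"},
    ],
)
```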
The Structure That Works
Here's what Anthropic recommends:
```xml
<role>
You are a senior data analyst specializing in sales analytics.
</role>

<instructions>
Analyze the provided dataset.
Focus on:
1. Revenue trends
2. Anomaly detection
</instructions>

<documents>
  <document index="1">
    <source>sales_q4_2025.csv</source>
    <document_content>{{DATA}}</document_content>
  </document>
</documents>

<constraints>
- Use only provided data
- Cite specific evidence
- Be concise
</constraints>

<output_format>
1. Executive Summary (2-3 sentences)
2. Key Findings with evidence
3. Recommendations (prioritized)
</output_format>

<query>
What are the main drivers of revenue decline in EMEA region?
</query>
```
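None of those tags are special API syntax - the whole scaffold travels as one user message. A sketch of splicing a CSV into the {{DATA}} slot and sending it (the template string is abridged; the file name and model ID are placeholders):

```python
import anthropic

# The XML scaffold above as one string; {{DATA}} is a plain-text placeholder.
PROMPT_TEMPLATE = """<role>
You are a senior data analyst specializing in sales analytics.
</role>
... (instructions, constraints, output_format as above) ...
<documents>
  <document index="1">
    <source>sales_q4_2025.csv</source>
    <document_content>{{DATA}}</document_content>
  </document>
</documents>
<query>
What are the main drivers of revenue decline in EMEA region?
</query>"""

with open("sales_q4_2025.csv") as f:
    prompt = PROMPT_TEMPLATE.replace("{{DATA}}", f.read())

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-opus-4-5",  # placeholder - substitute a current model ID
    max_tokens=2048,
    messages=[{"role": "user", "content": prompt}],
)
print(response.content[0].text)
```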
Key Principles That Changed My Results
Be Explicit About Everything
Claude won't guess what you meant. Tell it.
Bad:
```
Analyze the data
```
Good:
```xml
<instructions>
Analyze the sales data and provide:
1. Top 3 performing products by revenue
2. Month-over-month growth rate
3. Seasonal patterns if any
Include specific numbers and percentages.
</instructions>
```
Documents at Top, Question at Bottom
This one surprised me. Putting your documents at the start of the prompt and your question at the end can improve response quality by up to 30% - a figure from Anthropic's own long-context guidance. I can't fully explain why, but it works.
```xml
<documents>
[Long document here]
</documents>

<query>
[Your question at the very end]
</query>
```
Drop the CoT Instructions with Extended Thinking
If Extended Thinking is on, remove all of these:
- "Think step by step"
- "Let's break this down"
- "First, analyze... then..."
The model already does this. Adding more just adds noise.
Use Prefilling for Format Control
For guaranteed JSON:
```json
{"role": "assistant", "content": "{"}
```
For a structured response:
```json
{"role": "assistant", "content": "## Analysis\n\n"}
```
Patterns That Work Well
Document Summarization
```xml
<instructions>
Answer ONLY based on the provided document.
First quote relevant passages, then provide your answer.
If the information is not in the document, say "Not found in the provided document."
</instructions>

<document>{{CONTENT}}</document>

<output_format>
<quotes>Relevant quotes from the document</quotes>
<answer>Your answer based on the quotes above</answer>
</output_format>
```
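A nice side effect of XML output formats: parsing the response back out is trivial. A sketch, using a stand-in response and a hypothetical extract() helper:

```python
import re

# Stand-in for a real Claude response that followed the format above.
response_text = """<quotes>Q4 EMEA revenue fell 12% quarter-over-quarter.</quotes>
<answer>Revenue declined, driven mainly by the EMEA region.</answer>"""

def extract(tag: str, text: str) -> str:
    """Return the contents of the first <tag>...</tag> block, or '' if missing."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else ""

print(extract("answer", response_text))
```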
Code Generation
```xml
<instructions>
Create a function based on the requirements below.
Include:
- Type hints for all parameters and return values
- Comprehensive docstrings
- Error handling for edge cases
- Unit tests with pytest
</instructions>

<requirements>
{{REQUIREMENTS}}
</requirements>

<examples>
  <example>
    <input>{{SAMPLE_INPUT}}</input>
    <output>{{SAMPLE_OUTPUT}}</output>
  </example>
</examples>

<constraints>
- Follow PEP 8 style guide
- Handle edge cases explicitly
- Log errors appropriately
</constraints>
```
Data Analysis
```xml
<role>
You are a senior business analyst with expertise in e-commerce metrics.
</role>

<instructions>
Analyze the provided sales data and identify:
1. Key performance trends
2. Anomalies or outliers
3. Actionable recommendations
</instructions>

<data>
{{CSV_DATA}}
</data>

<output_format>
Structure your response as:
1. Executive Summary (2-3 sentences)
2. Detailed Findings (with specific numbers)
3. Recommendations (prioritized by impact)
</output_format>
```
Quick Comparison
| Aspect | Claude 4.5 | ChatGPT | Gemini |
|---|---|---|---|
| Recommended format | XML (gold standard) | Markdown + XML | XML or Markdown |
| Reasoning | Extended Thinking | reasoning_effort param | Thinking levels |
| Chain-of-Thought | Built-in (don't add) | Needs explicit instruction | Built-in |
| Instruction following | Literal | Interpretive | Moderate |
| SWE-bench | 82% (best) | ~75% | ~70% |
Mistakes I Made
- Adding CoT to Extended Thinking - redundant, can hurt performance
- Vague instructions - Claude won't fill in the gaps for you
- Documents at the end - put them at the top
- Ignoring prefilling - a powerful tool for format control
- Using Markdown for complex tasks - XML structures better
What I Do Now
- Always use XML tags to separate prompt sections
- Be maximally specific - Claude does exactly what's written
- Use <constraints> for restrictions and prohibitions
- Put long data at the top, question at the bottom
- Test prefilling for format control
- For code, use <examples> - few-shot works great
The Takeaway
Claude requires more explicit instructions than GPT, but that makes responses more predictable and controllable. XML tags are your main tool for structuring complex tasks.
Key points:
- XML tags are the gold standard
- Be explicit - the model doesn't guess
- Documents at top, query at bottom
- Extended Thinking replaces manual CoT
- Prefilling controls output format