Why Your ChatGPT Prompt Sucks in Claude (And Vice Versa)
Learn why prompts don't transfer well between AI models and how to adapt them for ChatGPT, Claude, and Gemini.
I used to think a good prompt was a good prompt. Write something that works in ChatGPT, paste it into Claude, get similar results. Wrong.
There's actual research on this. A study from November 2024 found that prompt formatting alone can change model performance by up to 40%. More importantly, prompts optimized for one model transfer poorly to others: less than 20% overlap in which formats work best.
So I had to learn how to write for each one separately. Here's what I figured out.
The Quick Comparison
| Aspect | ChatGPT (GPT-5.2) | Claude 4.5 | Gemini 3 |
|---|---|---|---|
| Best format | Markdown + XML | XML (gold standard) | XML or Markdown |
| Reasoning | reasoning_effort param | Extended Thinking | Thinking levels |
| Chain-of-Thought | Needs explicit instruction | Built-in (don't add!) | Built-in |
| Few-shot examples | Improves quality | Improves quality | Improves quality |
| How it follows instructions | Interprets and expands | Follows literally | Balanced |
| Long context | Compaction API | Documents at top | Standard |
| Multimodality | Good | Basic | Native, advanced |
| JSON output | Structured Outputs | Prefilling | Schema mode |
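To make the "Reasoning" row concrete, here's a rough sketch of how each control is set through the official Python SDKs. The model ids below are placeholders (not the versions named in the table), and parameter availability depends on which model you actually call.

```python
# Rough sketch of the "Reasoning" row, using the official Python SDKs.
# Model ids are placeholders -- substitute whatever your account exposes.
from openai import OpenAI
from anthropic import Anthropic
from google import genai
from google.genai import types

prompt = "Summarize the top pain points in these reviews: ..."

# ChatGPT: reasoning models accept a reasoning_effort parameter.
gpt = OpenAI().chat.completions.create(
    model="o3-mini",                       # placeholder reasoning model
    reasoning_effort="medium",             # "low" | "medium" | "high"
    messages=[{"role": "user", "content": prompt}],
)

# Claude: Extended Thinking is switched on via the `thinking` block.
claude = Anthropic().messages.create(
    model="claude-sonnet-4-20250514",      # placeholder model id
    max_tokens=8000,                       # must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4096},
    messages=[{"role": "user", "content": prompt}],
)

# Gemini: thinking is budgeted through ThinkingConfig.
gemini = genai.Client().models.generate_content(
    model="gemini-2.5-flash",              # placeholder model id
    contents=prompt,
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024),
    ),
)
```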
Why Different Models Need Different Approaches
ChatGPT Interprets and Expands
GPT models are trained to be "helpful" and often add context and details you didn't ask for. This is great for general requests but annoying for precise tasks.
The fix: explicit constraints using <design_and_scope_constraints> to prevent "scope drift."
Claude Does Exactly What You Say
Claude takes instructions literally. No more, no less. It won't guess your intentions.
The fix: be maximally explicit about everything you want. If you want examples, ask. If you want a detailed response, say so.
Gemini Balances with Multimodality
Gemini is optimized for working with different content types and has a built-in thinking levels system.
The fix: set the right thinking level and media resolution for your task.
Same Task, Three Different Prompts
Let's say I need to analyze customer reviews and find the main pain points.
ChatGPT Version
# Role and Objective
You are a customer experience analyst specializing in e-commerce feedback analysis.
# Instructions
Analyze the customer reviews below and identify the top 5 pain points.
<design_and_scope_constraints>
- Focus ONLY on negative feedback
- Do not add recommendations unless asked
- Be concise: max 2 sentences per pain point
</design_and_scope_constraints>
# Data
{{REVIEWS}}
# Output Format
Numbered list with:
1. Pain point name
2. Frequency (how many reviews mention it)
3. Example quote
Claude Version
<role>
You are a customer experience analyst specializing in e-commerce feedback analysis.
</role>
<instructions>
Analyze the customer reviews and identify the top 5 pain points.
For each pain point provide:
1. Clear name/category
2. Frequency count
3. One representative quote
</instructions>
<constraints>
- Focus only on negative feedback
- Maximum 2 sentences per pain point
- Do not add recommendations
</constraints>
<documents>
<document index="1">
<source>customer_reviews.csv</source>
<document_content>{{REVIEWS}}</document_content>
</document>
</documents>
<output_format>
Numbered list, 5 items maximum.
</output_format>
Gemini Version
<role>
You are a customer experience analyst.
</role>
<constraints>
1. Focus only on negative feedback
2. Be objective and data-driven
3. Maximum 2 sentences per point
</constraints>
<context>
{{REVIEWS}}
</context>
<task>
Identify top 5 customer pain points. For each: name, frequency, example quote.
</task>
Notice the differences? ChatGPT needs explicit scope constraints. Claude needs more explicit instructions and a different tag structure. Gemini separates context from task.
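For reference, here's a minimal sketch of sending the Claude version through the Anthropic Python SDK. The model id is a placeholder, and the whole XML prompt goes verbatim into the user turn.

```python
# Minimal sketch: sending the Claude-formatted prompt above.
# Model id is a placeholder; CLAUDE_PROMPT is the XML block from this article.
from anthropic import Anthropic

CLAUDE_PROMPT = """<role>
You are a customer experience analyst specializing in e-commerce feedback analysis.
</role>
... rest of the XML prompt, with {{REVIEWS}} filled in ..."""

client = Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-20250514",   # placeholder model id
    max_tokens=1024,
    messages=[{"role": "user", "content": CLAUDE_PROMPT}],
)
print(response.content[0].text)
```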
Key Structural Differences
How to Set Role
| Model | How to do it |
|---|---|
| ChatGPT | # Role and Objective (Markdown heading) |
| Claude | <role>...</role> (XML tag) |
| Gemini | <role>...</role> or Markdown |
How to Set Constraints
| Model | How to do it |
|---|---|
| ChatGPT | <design_and_scope_constraints> to prevent drift |
| Claude | <constraints> - followed literally |
| Gemini | <constraints> - standard format |
How to Pass Data
| Model | How to do it |
|---|---|
| ChatGPT | # Data or # Context (Markdown) |
| Claude | <documents> with nested <document> tags |
| Gemini | <context> - model treats this as data, not instructions |
How to Set Output Format
| Model | How to do it |
|---|---|
| ChatGPT | # Output Format + Structured Outputs API |
| Claude | <output_format> + prefilling |
| Gemini | <output_format> or include in <task> |
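Two of those rows sketched in code: Claude's prefilling and Gemini's schema mode. Model ids are placeholders, and the schema is a made-up example; the Structured Outputs variant for ChatGPT is sketched in the migration section further down.

```python
# JSON output, two ways. Model ids are placeholders.
from anthropic import Anthropic
from google import genai
from google.genai import types

# Claude: prefill the assistant turn so the reply continues as JSON.
claude = Anthropic().messages.create(
    model="claude-sonnet-4-20250514",   # placeholder
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "List the top 3 pain points as a JSON array of strings."},
        {"role": "assistant", "content": "["},   # prefill: forces the reply to start mid-JSON
    ],
)

# Gemini: schema mode via response_mime_type + response_schema.
gemini = genai.Client().models.generate_content(
    model="gemini-2.5-flash",           # placeholder
    contents="List the top 3 pain points.",
    config=types.GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=types.Schema(
            type=types.Type.ARRAY,
            items=types.Schema(type=types.Type.STRING),
        ),
    ),
)
```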
When to Use Which
ChatGPT works best for:
- Creative tasks - it adds interesting details on its own
- Code with explanations - comments well
- Agentic scenarios - with persistence reminders
- JSON output - Structured Outputs guarantees format
Claude works best for:
- Precise instruction following - literal execution
- Document analysis - great with long context
- Programming - best SWE-bench result (82%)
- Tasks with clear requirements - predictable results
Gemini works best for:
- Multimodal tasks - native image, video, audio understanding
- Fact-checking tasks - Search Grounding
- Visual content analysis - deep image understanding
- PDF and document work - optimized processing
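A small, hypothetical example of the multimodal/PDF strengths using the google-genai SDK; the file name and model id are placeholders.

```python
# Hypothetical sketch: passing a PDF directly to Gemini.
from google import genai
from google.genai import types

client = genai.Client()

with open("customer_complaint.pdf", "rb") as f:   # placeholder file
    pdf_part = types.Part.from_bytes(data=f.read(), mime_type="application/pdf")

response = client.models.generate_content(
    model="gemini-2.5-flash",   # placeholder
    contents=[pdf_part, "Summarize the main complaints in this document."],
)
print(response.text)
```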
How to Migrate Prompts
From ChatGPT to Claude
- Replace Markdown headings (#) with XML tags
- Make instructions more explicit
- Remove CoT instructions ("think step by step")
- Add <constraints> for restrictions
- Move data to the top, question to the bottom
From ChatGPT to Gemini
- Wrap data in <context> (protects against prompt injection)
- Choose an appropriate thinking level
- For media, set media_resolution
- Leave temperature at 1.0
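A sketch of those Gemini-side settings in one call, assuming the google-genai SDK. Whether media_resolution is accepted depends on your SDK and model version, so treat that line as an assumption; it only matters when you actually pass media.

```python
# Sketch of the checklist above in one generate_content call.
# Model id is a placeholder; media_resolution support is an assumption.
from google import genai
from google.genai import types

client = genai.Client()
response = client.models.generate_content(
    model="gemini-2.5-flash",   # placeholder
    contents="<context>\n{{REVIEWS}}\n</context>\n<task>Identify the top 5 pain points.</task>",
    config=types.GenerateContentConfig(
        temperature=1.0,                                                  # leave at the default
        thinking_config=types.ThinkingConfig(thinking_budget=2048),      # thinking level
        media_resolution=types.MediaResolution.MEDIA_RESOLUTION_MEDIUM,  # only relevant for media inputs
    ),
)
```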
From Claude to ChatGPT
- Add <design_and_scope_constraints> for scope control
- You can add CoT: "Think step by step"
- Consider adding few-shot examples
- For JSON, use the Structured Outputs API
Universal Template (Compromise)
If you need one prompt that works everywhere, use this:
<role>
You are a [specific role] with expertise in [domain].
</role>
<task>
[Clear, specific instruction]
</task>
<constraints>
- [Constraint 1]
- [Constraint 2]
- [Output format requirement]
</constraints>
<context>
[Relevant background information or data]
</context>
<output_format>
[Exact structure expected]
</output_format>
This works acceptably on all three platforms, though it's not optimal for any of them.
The Takeaway
Key points:
- Prompts don't transfer well between models
- ChatGPT interprets - needs explicit constraints
- Claude executes literally - needs explicit instructions
- Gemini balances - thinking levels and media settings matter
- XML works everywhere - but Markdown is preferred for GPT
Invest the time to adapt prompts for each platform - it can be worth up to the 40% performance gap the research found.