ChatGPT · Dec 27, 2025 · 7 min read

How I Learned to Write Decent Prompts for ChatGPT

Practical tips and techniques for crafting effective ChatGPT prompts, based on GPT-5.2 documentation and real experience.


You know what annoyed me at first with ChatGPT? I'd write a request, and it would spit out some generic fluff. Or the opposite - go off on tangents I never asked about. Turns out the problem was me. Specifically, how I was phrasing my requests.

I dug through a lot of material, including OpenAI's official documentation for GPT-5.2, and here's what I realized: a well-crafted prompt can improve response quality by as much as 40% over a careless one. That's not just my opinion - it's the kind of gap reported in prompt-engineering research from 2024-2025.

Let me share what actually works.


Four Things That Actually Change Results

Be Specific or Get Garbage

Look. I write "write something about dogs" - I get a generic wall of text about nothing. I write "write a 500-word article about dog breeds suitable for apartments, including pros and cons of each, for first-time dog owners" - I get something I can actually use.

The more precise you are about the task, the less the model has to guess. And it often guesses wrong.

The Role Thing Actually Works

When you write "You are an experienced marketer with 10 years in B2B" - it's not some magic trick. The model just starts responding at a different level. Fewer generic statements, more specifics. Try it - the difference is noticeable.

Tell It What You Want to Get

Want a list - say so. Want a table - say so. JSON - say so. Sounds obvious, but I didn't do this at first and kept wondering why responses were all over the place.

Examples Beat Explanations

Few-shot prompting means giving the model a couple of examples of the output you want before your actual request. It works great, especially for non-standard formats or a specific style.
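To make this concrete, here's a minimal sketch of few-shot prompting using the Chat Completions message format. The helper name and the tagline examples are my own, made up for illustration - the pattern is just: instructions, then example input/output pairs, then the real input.

```python
def build_fewshot_messages(task, examples, query):
    """Build a chat message list: instructions, then example pairs, then the real input."""
    messages = [{"role": "system", "content": task}]
    for example_input, example_output in examples:
        # Each example is a fake user turn plus the assistant answer you wish you'd gotten
        messages.append({"role": "user", "content": example_input})
        messages.append({"role": "assistant", "content": example_output})
    messages.append({"role": "user", "content": query})
    return messages

messages = build_fewshot_messages(
    task="Write a punchy one-line tagline for the given product.",
    examples=[
        ("Noise-cancelling headphones", "Silence the world. Hear what matters."),
        ("Standing desk", "Your best ideas don't happen sitting down."),
    ],
    query="Smart water bottle",
)
# messages: 1 system message, 2 example pairs, 1 final user message
```

Two or three examples are usually enough - the model picks up the pattern from the pairs and applies it to your last message.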


The Structure I Actually Use Now

OpenAI recommends a combination of Markdown and XML tags. Sounds complicated, but in practice it looks like this:

# Role and Objective
You are a [who] specializing in [what].

# Context
[Background info the model needs to know]

# Task
[What actually needs to be done]

# Output Format
[How you want the response]

Here's a real example I use:

# Role and Objective
You are an experienced content marketer specializing in B2B SaaS products.

# Context
Our company sells project management software for remote teams.
Target audience: CTOs and team leads at companies with 50-200 employees.

# Task
Write a LinkedIn post announcing our new AI-powered task prioritization feature.

# Output Format
- Length: 150-200 words
- Tone: Professional but approachable
- Include: 1 hook question, 3 key benefits, call-to-action
- End with 3 relevant hashtags
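If you build prompts like this often, it's worth assembling them programmatically instead of copy-pasting. Here's a small sketch - the function name and section titles are my own choices, not an official API; it just renders (title, body) pairs in the Markdown-headed layout above.

```python
def build_structured_prompt(sections):
    """Render ordered (title, body) pairs as '# Heading' Markdown blocks."""
    return "\n\n".join(f"# {title}\n{body.strip()}" for title, body in sections)

prompt = build_structured_prompt([
    ("Role and Objective",
     "You are an experienced content marketer specializing in B2B SaaS products."),
    ("Context",
     "Our company sells project management software for remote teams.\n"
     "Target audience: CTOs and team leads at companies with 50-200 employees."),
    ("Task",
     "Write a LinkedIn post announcing our new AI-powered task prioritization feature."),
    ("Output Format",
     "- Length: 150-200 words\n- Tone: Professional but approachable"),
])
```

The payoff is that your role, context, and format constraints live in one place, so you can swap out just the Task section between requests.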

GPT-5.2 Quirks Worth Knowing

The newest version has some peculiarities that took me a while to figure out.

It's More Concise by Default

GPT-5.2 gives shorter answers than previous versions. If you need detailed responses, say so explicitly. I now add something like:

<output_verbosity_spec>
- Default: 3-6 sentences for typical answers
- For complex tasks: overview paragraph + up to 5 bullet points
</output_verbosity_spec>

It Loves Adding Stuff You Didn't Ask For

The model sometimes adds extra features, suggestions, caveats. To prevent this:

<design_and_scope_constraints>
- Implement EXACTLY and ONLY what the user requests
- No extra features, no added components
- If instruction is ambiguous, choose the simplest interpretation
</design_and_scope_constraints>

Long Documents Need Special Handling

For texts over 10,000 tokens, I add:

<long_context_handling>
- First, produce a short internal outline of key sections
- Re-state user's constraints before answering
- Anchor claims to sections ("In the 'Data Retention' section...")
- Quote or paraphrase specific details (dates, numbers)
</long_context_handling>
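In code, I put the handling rules first, the document next, and the question last - the instruction nearest the end of a long prompt is the one the model is least likely to lose. A minimal sketch (the helper name and `<document>` tag are my own conventions):

```python
LONG_CONTEXT_RULES = """<long_context_handling>
- First, produce a short internal outline of key sections
- Re-state user's constraints before answering
- Anchor claims to sections ("In the 'Data Retention' section...")
- Quote or paraphrase specific details (dates, numbers)
</long_context_handling>"""

def build_long_doc_prompt(document, question):
    """Rules up front, document in the middle, question at the very end."""
    return f"{LONG_CONTEXT_RULES}\n\n<document>\n{document}\n</document>\n\n{question}"

prompt = build_long_doc_prompt(
    document="(a 10,000+ token contract would go here)",
    question="What does the 'Data Retention' section say about backup deletion timelines?",
)
```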

Mistakes I Made (So You Don't Have To)

Vague requests - "write something interesting" gives you nothing useful. Always add context about who it's for and why.

No context - the model doesn't know your audience, your goals, your constraints. Tell it.

Too many tasks at once - quality drops for each part. Better to split into separate requests.

Not iterating - first response is rarely perfect. Refine it, ask for changes, give feedback.

Adding "think step by step" to o1/o3 - reasoning models have this built in. For them, actually simplify your prompt.


What I Do Now

  1. Start simple and add complexity gradually based on what I see
  2. Use Chain-of-Thought for complex tasks: "Think step by step before answering"
  3. For long documents: data at the top, question at the bottom
  4. Always set constraints: length, style, what to include, what to exclude
  5. For JSON output: use the response_format parameter with strict: true
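On that last point, here's what the response_format payload looks like in the Chat Completions structured-outputs mode. The schema itself (an "article_summary" with a title and key points) is a made-up example; the payload shape follows OpenAI's json_schema format with strict mode on.

```python
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "article_summary",
        "strict": True,  # reject any output that doesn't match the schema exactly
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "key_points": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["title", "key_points"],
            "additionalProperties": False,  # required for strict mode
        },
    },
}
```

You then pass this dict as the `response_format` argument to `client.chat.completions.create(...)`, and the model's reply is guaranteed to parse as JSON matching the schema.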

Bottom Line

Writing good prompts is a skill. It develops with practice. The main rules:

  • Be specific and explicit
  • Structure complex prompts with Markdown and XML
  • Set role, context, and output format
  • Use examples for complex tasks
  • Iterate and refine

Follow OpenAI's recommendations and you'll get noticeably better responses.
