One follow-up prompt that makes Claude’s answers way better

One sentence, huge difference

Most people treat AI conversations like a vending machine. Put in a question, get an answer, move on. But what if the real value comes from what happens after that first response?

I keep running into prompting techniques that sound almost too simple to work, and then they do. A contributor on r/PromptEngineering shared a pattern that flips the usual one-shot workflow into something much more powerful. The idea is straightforward: instead of accepting Claude’s first answer, you ask it to critique and improve its own reasoning.


Quick Start

What you’ll learn: A simple iterative prompting pattern that consistently produces stronger, more accurate AI responses.

What you need: Access to Claude (or any capable LLM). No tools, no plugins, no setup.

Time to implement: About 30 seconds per interaction.

The Old Way vs. The New Way

Here’s how most people prompt:

Question → Answer → Done

You ask something, you get a response, you move on. The problem? That first answer often contains assumptions, gaps, or surface-level reasoning that you never catch.

The approach this Redditor recommends looks like this:

Question → Answer → Critique → Refinement

One extra step. That’s it. And the results are, honestly, noticeably better.

Step-by-Step: The Self-Critique Loop

Step 1: Ask your question normally. Just prompt Claude with whatever you need. Don’t overthink the initial query. This first response gives the model a foundation to work from.

Step 2: Follow up with the critique prompt. After you get the first answer, paste this exact follow-up:

Identify the weakest assumptions in your previous answer and improve them.

Credit to ReidT205

Why this works: Claude is particularly strong at self-evaluation. When you explicitly ask it to find holes in its own logic, it activates a different kind of reasoning. The original poster notes that “the second answer is often significantly stronger” because the model is now stress-testing its own output instead of generating from scratch.

Step 3: Review the refined answer. The second response should address gaps, strengthen weak points, and provide more nuanced reasoning. You can repeat the loop if needed, but one round of critique usually delivers the biggest improvement.
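The three steps above are easy to wrap in a small helper. This is a hypothetical sketch, not the poster's code: `ask` stands in for whatever model call you use (for example, a thin wrapper around the Anthropic Messages API), kept pluggable here so the loop logic stays visible.

```python
# Self-critique loop: question -> answer -> critique -> refinement.
# `ask` is any callable that takes a message list and returns the
# assistant's reply text.

CRITIQUE_PROMPT = (
    "Identify the weakest assumptions in your previous answer "
    "and improve them."
)

def self_critique(ask, question, rounds=1):
    """Ask the question, then run `rounds` critique/refine passes."""
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    messages.append({"role": "assistant", "content": answer})
    for _ in range(rounds):
        messages.append({"role": "user", "content": CRITIQUE_PROMPT})
        answer = ask(messages)
        messages.append({"role": "assistant", "content": answer})
    return answer, messages
```

With the Anthropic Python SDK, `ask` might wrap `client.messages.create(...)`; as the article notes, one round (`rounds=1`) usually delivers the biggest improvement.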


Community Tips That Make This Even Better

The discussion thread added a few tricks worth noting:

  • Ask for re-summarization. One commenter mentioned that asking Claude to “summarize again” forces it to distill and sharpen its points. You can also feed in new information and ask it to “look at things from another angle.”

  • Add context between rounds. Instead of just asking for critique, you can inject additional details or constraints. This gives the model new material to work with during refinement.

  • Try the “fresh eyes” technique. One savvy commenter shared an advanced version of this loop: take the first answer, paste it into a brand new session as a document, and ask Claude to critique it without knowing it wrote the original. The workflow looks like this: question → answer → save as document → open new session → critique document → refinement. By keeping the origin vague, you reduce the model’s tendency to be gentle with its own previous work.

  • Works across models. Multiple commenters confirmed this pattern produces strong results on Gemini and Copilot too. It’s not Claude-specific, though the original poster suggests Claude handles self-critique loops especially well.

Why This Pattern Works

The core insight here is that LLMs aren’t just answer machines. They’re reasoning engines. When you give a model permission to revisit and challenge its own output, you’re essentially getting two different “modes” of thinking in one conversation. The first pass generates. The second pass evaluates. Together, they produce something closer to how a careful human expert would work: draft, review, improve.

The best part is how low-effort this is. You don’t need elaborate prompt engineering frameworks. You don’t need system prompts or custom instructions. One sentence after the first response changes the quality of what you get back.

Practical Next Steps

  1. Try the critique prompt on your very next Claude conversation. Pick something you’ve already asked about and run the loop.

  2. Experiment with variations: “What did you oversimplify?” or “Where might an expert disagree with you?” can surface different kinds of weaknesses.

  3. For high-stakes work (strategy docs, technical decisions, research), make the critique loop a default habit. Two rounds minimum.

  4. Test the “fresh session” technique for anything where you need genuinely unbiased evaluation of a draft.
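If you adopt the loop as a habit, the prompt variations from step 2 are worth keeping as a small library of follow-up turns. A minimal sketch (the first entry is the original prompt; the others are the variations suggested above):

```python
# Follow-up prompts for the critique round. Swap these in depending
# on what kind of weakness you want the model to hunt for.
CRITIQUE_VARIANTS = {
    "assumptions": (
        "Identify the weakest assumptions in your previous "
        "answer and improve them."
    ),
    "oversimplified": "What did you oversimplify?",
    "expert": "Where might an expert disagree with you?",
}

def critique_message(kind="assumptions"):
    """Build the user turn for a critique round."""
    return {"role": "user", "content": CRITIQUE_VARIANTS[kind]}
```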

Want to see the full discussion, including more community variations on this technique? Check out the original thread by u/ReidT205 on r/PromptEngineering.
