Stolen from Anthropic

Why your rules?

Anthropic's own system prompt does something nobody copies

Someone dug through the leaked Claude system prompt last week and spotted a pattern hiding in plain sight. Every rule Anthropic writes for the model comes with a reason attached. Sometimes inline, sometimes wrapped in a literal <rationale> XML tag.

That tiny habit is doing more work than it appears to. And almost nobody applies it to their own prompts.

In partnership with

The best prompt engineers aren't typing. They're talking.

Power users figured this out early: speaking a prompt gives you 10x more context in half the time. You include the edge cases, the examples, the tone you want — because talking is fast enough that you don't skip them.

Wispr Flow captures everything you say and turns it into clean, structured text for any AI tool. Speak messy. Get polished input. Paste into ChatGPT, Claude, Cursor, or wherever you work.

89% of messages sent with zero edits. 4x faster than typing. Works system-wide on Mac, Windows, and iPhone.

Here's a real line straight from Claude's system prompt:

Claude never uses bullet points when it decides not to help, the additional care and attention can help soften the blow.

Read that twice. It's not just a rule; it's a rule with intent baked in. The model knows what the rule is actually after, so it can apply the same judgment to situations the rule never explicitly covered.

Rule alone vs rule plus rationale

Rule alone: the model follows it literally. It wins on the cases you anticipated and loses on every edge case you didn't.

Rule plus rationale: the model understands the goal. It generalizes. It adapts when something weird shows up.

That's the difference between a rigid checklist and actual judgment. One ships brittle output. The other ships output that holds up under pressure.

If you can't explain why a rule is in your prompt, the model can't explain it either. So it just guesses, and you eat the result.

The 30-second upgrade

You don't need to rewrite anything. Pick one prompt you use regularly and run this loop:

  1. Find one rule in it.

  2. Ask yourself: what problem is this rule actually solving?

  3. Add the answer as a "because" clause right after the rule.

  4. Throw an edge case at it that used to trip the model up.

That's the whole exercise. The output difference is immediate, and it compounds the more rules you do this for.
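To make the "because" clause concrete, here's a before-and-after using a made-up formatting rule (the rule and its reason are illustrative, not from any real prompt):

```text
Before:
Never use emojis in responses.

After:
Never use emojis in responses, because this assistant handles
professional support tickets where emojis read as dismissive.
```

The "after" version tells the model when the rule should bend: a casual aside from a friendly user might still warrant warmth, just not the emoji.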

For longer system prompts, wrap the reason in a <rationale> tag. Keeps things organized and signals to the model that this part is the why, not part of the what.
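One way to keep this consistent across a long system prompt is a tiny helper that pairs each rule with its reason and emits the rule followed by a <rationale> tag. This is a sketch of the formatting idea only; the helper names and example rules are made up for illustration, not Anthropic's tooling:

```python
# Illustrative sketch: pair each prompt rule with the intent behind it,
# emitting the rule followed by a <rationale> tag as described above.
# Function names and example rules are hypothetical.

def with_rationale(rule: str, reason: str) -> str:
    """Format one rule plus the reason it exists."""
    return f"{rule}\n<rationale>{reason}</rationale>"

def build_prompt(rules: list[tuple[str, str]]) -> str:
    """Join (rule, reason) pairs into one system-prompt block."""
    return "\n\n".join(with_rationale(rule, reason) for rule, reason in rules)

prompt = build_prompt([
    ("Keep responses warm but not flowery.",
     "Ornate language reads as insincere in a support context."),
    ("Use bullets sparingly.",
     "Dense bullet lists feel mechanical when the user wants a conversation."),
])
print(prompt)
```

Keeping the rule and its rationale adjacent in the output means an edge case hits both at once, which is the whole point of the exercise.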

If you can't explain why a rule exists after thinking for thirty seconds, cut it. It wasn't earning its slot.

How Jennifer Aniston’s LolaVie brand grew sales 40% with CTV ads

The DTC beauty category is crowded. To break through, Jennifer Aniston’s brand LolaVie worked with Roku Ads Manager to easily set up, test, and optimize CTV ad creatives. The campaign helped drive a big lift in sales and customer growth.

*Ad

Where rationale matters most

Not every rule needs it. "Output JSON" doesn't need a reason. The model already knows what to do with that.

Where it matters: tone, format, anything context-dependent. The fuzzy stuff. "Keep responses warm but not flowery." "Use bullets sparingly." "Don't apologize." Each of those breaks the moment the model interprets it too literally and has no way to flex.

Pair those rules with intent and they stop breaking. The model learns to read the room instead of reciting the policy.

The wider lesson

This is the whole game in one move: stop telling the model what to do and start telling it what you're trying to accomplish. The model isn't dumb; it's just literal when you give it nothing else to work with.

Every rule in your prompt is a tiny contract between you and the model. Add the reason and the contract becomes enforceable. Skip it and the model starts improvising at the worst possible moment.

The folks shipping the best AI products in 2026 aren't writing more rules. They're writing fewer rules with sharper intent behind each one.

My honest take

I added rationale to the three prompts I use most that same afternoon. The change in output quality wasn't subtle. Edge cases that used to derail the model started getting handled cleanly, without me bolting on more guardrails.

The wild part: the prompts got shorter, not longer. When you write the why, you realize half your existing rules were workarounds for missing context. They evaporate.

Open your most-used prompt today. Add a "because" clause to every rule. Run something weird through it. The first time you feel the difference, you stop writing rules-only prompts forever.

Are you tracking agent views on your docs?

AI agents already outnumber human visitors to your docs — now you can track them.

*Ad