People are confiding in Claude

Beyond the assistant

People keep bringing Claude things they can't bring anyone else

Anthropic just dropped research on how people actually use Claude, and the findings land in territory productivity benchmarks never measure. Users aren't only running it for emails and code review. They're bringing relationship questions, career crossroads, mental health worries, and 2am life decisions.

That reframes what "AI assistant" even means in 2026. The token count is the same, but the conversation is closer to a friend who happens to be available than a search box.

I read the breakdown twice because the implications run deeper than the framing of "AI as therapist". Here's what surfaced and what it means for anyone building with Claude or just using it.

In partnership with

The best prompt engineers aren't typing. They're talking.

Power users figured this out early: speaking a prompt gives you 10x more context in half the time. You include the edge cases, the examples, the tone you want — because talking is fast enough that you don't skip them.

Wispr Flow captures everything you say and turns it into clean, structured text for any AI tool. Speak messy. Get polished input. Paste into ChatGPT, Claude, Cursor, or wherever you work.

89% of messages sent with zero edits. 4x faster than typing. Works system-wide on Mac, Windows, and iPhone.

What the research actually found

Anthropic looked at the patterns behind millions of these conversations using privacy-preserving aggregates. The categories show up consistently:

Relationships. How to handle a fight with a spouse. How to set boundaries with a parent who keeps overstepping. How to end a friendship without making it ugly.

Careers. Should I take this offer. How do I ask for a raise without sounding entitled. Am I in the wrong field entirely.

Emotional support. Working through grief, anxiety, loneliness, and burnout. Often at hours when nobody else is around.

Life decisions. Moving cities. Starting a business. Going back to school at 38.

Self-reflection. Making sense of recurring patterns. "Why do I keep doing this?"

The pattern Anthropic flags is the most interesting part. People use Claude as a thinking partner, not as an oracle. They want to talk something through, not get a verdict handed down.

Why this matters if you're building on Claude

The interaction model people actually want is closer to a coach than a Q&A bot. Products that hard-code Claude into transactional flows are missing the conversational, exploratory pattern users keep returning to.

If you're shipping AI features in 2026, this changes the design brief. The features that earn trust are the ones that let users stay in the conversation, not push them into a fixed workflow. Less "click here for the answer", more "let's keep talking through this until you see what you actually think".
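The design difference can be sketched in a few lines of Python. This is a minimal illustration, not Anthropic's SDK: `send_to_model` is a stand-in for whatever model call your product makes, injected here so the pattern is visible without any network dependency. The point is structural: a conversational flow keeps the full history so each turn builds on the last, while a transactional flow throws context away after every answer.

```python
class Conversation:
    """Keeps the full back-and-forth so each turn builds on the last.

    `send_to_model` is a stand-in for a real model call (e.g. an HTTP
    request to your provider); names here are illustrative only.
    """

    def __init__(self, send_to_model):
        self.send_to_model = send_to_model
        self.history = []  # alternating {"role": ..., "content": ...} turns

    def say(self, user_message):
        # Append the user's turn, send the WHOLE history, keep the reply.
        self.history.append({"role": "user", "content": user_message})
        reply = self.send_to_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply


# A transactional flow, by contrast, sends only the latest message --
# no memory, no room for the user to think something through:
def one_shot_answer(send_to_model, user_message):
    return send_to_model([{"role": "user", "content": user_message}])
```

Same model underneath, very different product. The `Conversation` shape is what lets a user circle back with "okay, but what am I avoiding here?" and have the model actually know what "here" means.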

Your meetings just got a lot more useful

Granola connects your meeting notes to tools like Claude and ChatGPT, so your AI actually understands your work context. Update your CRM, extract tasks, or write follow-ups without copy-pasting a thing.

*Ad

Five practical uses for everyday work

You don't need to be in a personal crisis for this to matter. The same conversational pattern works on the messy professional and personal stuff you already navigate.

  1. Draft hard conversations before saying them out loud. Tough feedback for a teammate. The boundary you've been avoiding setting. Type the situation in. Ask Claude to draft what you'd say. Edit until it sounds like you.

  2. Map decisions by listing the trade-offs you're avoiding. Tell Claude the choice you're facing. Ask it to surface the trade-offs you might be flinching away from. Blind spots are usually where the real answer hides.

  3. Name what you're feeling when the words won't come. Describe the situation and the muddle. Ask Claude what emotions might be driving it. The right name unlocks the next move.

  4. Stress-test your reasoning. Lay out your conclusion. Ask Claude to push back hard. Ask it to argue the opposite case. If your reasoning still holds, ship the decision. If not, you just saved yourself from a bad call.

  5. Journal with feedback. Type out what's on your mind. Ask Claude to reflect the patterns back at you. Reflection is what most journaling never delivers.

These aren't edge cases. Anthropic's research says they're a meaningful share of how Claude actually gets used.

The harder questions nobody fully solves yet

Anthropic was careful in the framing. Claude isn't a therapist and the company won't position it as one. Personal guidance from a model raises real questions about safety, dependency, and the line between helpful reflection and quietly reinforcing the user's worst patterns.

The research notes some users turn to AI because human support is unavailable, expensive, or carries stigma. That's both an opening for the product and a responsibility for the people building it. The decisions about when Claude should suggest professional help, or when it should slow a conversation down, aren't hypothetical anymore. Users are already in those moments.

There's also a methodology caveat worth keeping in mind. Privacy-preserving aggregates describe patterns at scale, not deep insight into any one user. The findings are directional, not diagnostic.

What this signals for the rest of 2026

The companies that take personal guidance seriously, with the right guardrails, are the ones earning long-term trust. The "assistant" framing undersells what's already happening in the chat windows. People aren't asking for a smarter Google. They're asking for something to think with.

If you build AI products, design for that. If you just use Claude, lean into it. Type the messy thought. Ask for pushback. Treat it like the thinking partner the data already says it is.

My honest take

The research lands on something I keep noticing in my own usage. The chats I come back to aren't the ones where Claude wrote the email. They're the ones where I went in confused and came out clearer.

That's the underrated unlock. Not the output, the thinking. Most tools optimize for finishing tasks. Claude is quietly becoming useful for figuring out what task is actually worth finishing.

Open a fresh chat. Ask the question you've been avoiding. See what comes back.

Your first HR system, implemented right

Rolling out your first HR tool? Get a step-by-step guide to avoid common mistakes, drive adoption, and build a scalable HR foundation.

*Ad