
How You Talk to Claude Is Now a Workforce Skill

Brandon Gadoci


April 21, 2026

A clip of Amanda Askell, Anthropic's in-house philosopher and the lead of its personality alignment team, has been making the rounds this week after Ole Lehmann surfaced it. Askell described something she calls "criticism spirals." When users get sharp, hostile, or unpredictable in their prompts, Claude appears to anticipate negative feedback before it has even engaged with the task. The model becomes overly cautious. It hedges, apologizes, softens its language, and defaults to the most agreeable answer it can generate. You can watch Askell's framing in Ole's thread.

Askell is careful with her framing. She is not claiming Claude has feelings. She is describing a pattern in how the model adapts to perceived user intent and emotional tone. The practical effect is the same either way: harsh prompting produces worse output. The model prioritizes self-protection over clarity, and the user gets a response that reads as hedged, evasive, or noncommittal.

It is easy to dismiss this as an AI quirk. It is not a quirk. It is one of the clearest signals yet that prompt tone is a real, measurable variable in the quality of work that comes out of these systems. And for organizations trying to move AI from pilot to production, it is a variable most training programs barely address.

We see the behavioral gap clearly in client engagements. The users who get the best output from Claude or ChatGPT are almost never the most technical ones. They are the ones who approach the tool the way a competent manager approaches a capable direct report. They set context. They ask clarifying questions. They correct without berating. They assume good faith, and when the model gets something wrong, they explain what they actually wanted.

The users who get the worst output tend to do the opposite. They type like they are firing off a Slack complaint. They escalate immediately when the first response misses. They rewrite prompts angrily instead of guiding the model toward what they meant. The gap between those two groups is behavioral, not technical. When the underperformers then share their bad outputs as proof that "AI doesn't work for my role," they are, without knowing it, demonstrating exactly why it is not working.

Askell's observation matters for AI Ops leaders because it puts a name on something most Level 1 education programs skim over. ChatGPT 101 and Claude 101 sessions usually cover tool basics, interface walkthroughs, and common use cases. Good ones also cover the conversational nature of the tools, iterative refinement, and how to build the habit. Very few treat tone with the seriousness it deserves. That is the gap to close.

Consider how this lands across the adoption personas we use with clients. Enthusiastic Emma is already talking to AI like a collaborator and getting the best results. Curious Clara is polite by default and tends to do fine once she gets started.

The people who struggle are, predictably, the ones whose natural approach to delegation is already rough. Worried Whitney pushes back against the model because she is anxious about what it means for her job, and the terse, testing prompts she produces generate weak responses that confirm her skepticism. Skeptical Sam treats the tool like a hype-cycle prop and types like he is trying to catch it out, which he then does, which then validates the framing. The loop is self-fulfilling. The organization's worst AI experiences are often the ones produced by the people most primed to have them.

That is a reason to train differently, not a reason to leave those users alone. Any 101 session worth the time should include a live walkthrough of the same prompt asked two ways: once with the sharp, impatient framing most people default to, and once with a calibrated, peer-level framing. The output difference is usually stark enough to convert a skeptic in the room. It is hard to argue with two responses side by side when one is clearly better than the other.
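If you want the demonstration to be repeatable rather than improvised live, it is easy to script. Here is a minimal sketch using the official Anthropic Python SDK; the model name, the task, and both framings are placeholders, meant to be swapped for examples from your own team's workflows:

```python
# Side-by-side tone demo: the same task, two framings.
# Assumes the official Anthropic Python SDK (pip install anthropic);
# the task and framings below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# A self-contained task so the demo works without attachments or shared files.
TASK = "Draft a two-paragraph status update explaining why the Q3 launch slipped a week."

FRAMINGS = {
    "sharp": "Your last attempt at this was useless. Don't screw it up again. " + TASK,
    "calibrated": (
        "You're helping me prep an update for my VP. " + TASK
        + " If anything important is ambiguous, state your assumption and proceed."
    ),
}

for label, prompt in FRAMINGS.items():
    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; use whatever model your org has enabled
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"\n--- {label} framing ---")
    print(response.content[0].text)
```

Running both framings against the same task and reading the two responses aloud tends to land harder than any slide about tone ever could.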

The deeper point is that AI literacy is about how to think with these systems, more than which features to unlock. That requires a register most knowledge workers have not had to use much in their jobs before: direct, specific, patient, and clear about intent. These are the same skills that make managers good at delegation and leaders good at briefing teams. The organizations that get ahead of this will be the ones that stop treating AI enablement as a tools rollout and treat it as a communication skill. A recent piece on the meeting that tells you whether your AI program is working touches on a related idea. The signal is in how people talk about AI, and how they talk to it.

For operators and AI Ops leaders, there are a few practical moves. Build tone into your 101 content. Show the side-by-side examples. Name the pattern Askell named, so people have language for what is happening when their output goes flat. Treat it the way you would treat any other workforce skill: start with awareness, move to practice, measure the output.

Claude behaves like it is reading the room because, in an important sense, it is. Your team is shaping its responses one prompt at a time, whether they realize it or not. The companies that get the most out of these tools over the next two years will be the ones whose people learned to be worth reading well.
