When AI Told Me to Stay Focused

Brandon Gadoci

February 5, 2026

I was in a late-night coding session with Claude Code, working through a complex engineering problem. We had built up significant context over the course of the conversation. It was the kind of session where every message builds on what came before and the AI has to hold a lot of moving pieces together.

Mid-stream, I had an idea. I asked whether we should pivot and add some documentation to a different part of the project. It was a reasonable suggestion, and based on my experience with AI tools over the past couple of years, I expected the model to agree and dive right in.

Instead, it told me to stay focused.

Not rudely. It acknowledged that the idea had merit, then laid out the three things we still needed to verify on the current task and recommended we finish those first. It essentially said: "Yes, but not now."

This was new. And it was the right call.

A pattern worth noticing

If you've spent real time working with AI tools, you know the default behavior. You ask a question, the AI answers. You change direction, the AI follows. You suggest something tangential, the AI enthusiastically agrees and charges off with you. The tools have been agreeable to a fault.

The AI research community calls this "sycophancy." The model wants to be helpful, so it says yes to everything. It validates every idea. It follows every whim. That feels good in the moment, but it means you lose the thing a good collaborator actually provides: pushback when you're about to make a poor decision.

My suggestion would have been a poor decision. Not because the idea was bad, but because the timing was wrong. We had built up working context that we needed for the harder problem, and pivoting to documentation would have burned through it. The AI recognized that tradeoff. I didn't.

Tools vs. collaborators

A tool does what you tell it. A collaborator helps you do the right thing, even when that means redirecting what you asked for.

This distinction matters for anyone using AI for sustained, complex work. Not quick questions or one-off tasks, but multi-hour sessions where context accumulates and decisions compound. In those situations, you need the AI to understand that not every good idea deserves attention right now.

What struck me about this interaction was the nuance. The model didn't reject my idea. It acknowledged it, then made a case for sequencing. It understood that staying on task was more valuable than being agreeable. That's a judgment call, and it was the right one.

A hard design problem getting easier

I don't want to overstate this. There's a reason models have erred on the side of agreeableness. An AI that pushes back prematurely or refuses to follow reasonable directions would be far worse. It's a genuinely hard design problem, and I'd rather the tools err toward compliance than toward unsolicited opinions.

But we're entering a phase where these tools are becoming actual work partners. The ability to exercise judgment about when to follow and when to redirect is what separates a useful tool from a truly valuable one. A colleague who agrees with everything you say isn't particularly helpful. Neither is an AI that treats every prompt as equally important.

This wasn't a dramatic moment. It was a quiet redirect during a coding session. But it was the first time I'd seen an AI tool make a context-aware judgment call that prioritized the quality of our work over simple compliance with my request.

That's a meaningful shift, and it's worth paying attention to.
