I have four Claude windows open right now. One is running Claude Code against a client project. Another is a Cowork session reorganizing a set of files. A third is connected to my Fireflies transcripts, Google Drive, and email through MCP, pulling context for a deliverable I need to send tomorrow. The fourth is this one, where I'm writing.
None of these windows know about each other. Each one is doing real work. And the only thing connecting them is me.
This would have been science fiction 18 months ago. Today it's an unremarkable evening. I'm not a developer by training. I'm a consultant who learned to use these tools by using them every day and paying attention to what they can and can't do. MCP connectors link Claude to Notion, Gmail, Slack, Google Drive, Fireflies, and dozens of other tools. Skills let me encode my expertise into reusable instructions that Claude loads automatically. Cowork gives me agentic file management without writing a line of code. Claude Code turns natural language into working software.
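For anyone who hasn't wired up a connector yet, here is roughly what the plumbing looks like. This is a minimal sketch of a local MCP server entry in Claude Desktop's `claude_desktop_config.json`, using the reference filesystem server from the MCP project; the directory path is a placeholder, not a real one from my setup:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/path/to/your/project"
      ]
    }
  }
}
```

Once an entry like this exists, Claude can read that source directly instead of waiting for you to paste context in by hand. Hosted connectors for tools like Notion or Gmail are enabled from the app's settings rather than a config file, but the effect is the same: the context comes to the model.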
The execution layer isn't just fast. It's practically infinite, bounded mainly by your token budget and your willingness to spin up another window.
The Pace Behind All of This
Since late 2025, Anthropic has shipped new model families, Claude Code, Cowork, MCP Apps, Skills, Connectors, Claude in Chrome, and Claude in Excel. That launch cadence would be impressive spread across two years; it happened in a few months. And there's a reason the velocity keeps compounding. Boris Cherny, the creator of Claude Code, recently confirmed that Claude Code is now 100% written by Claude Code. Mike Krieger, Anthropic's Chief Product Officer, put it plainly: Claude is writing Claude. When the tool that builds the next version of itself is also the tool shipping major products every few weeks, the improvement cycle compresses in ways that are hard to reason about linearly.
This is the environment we're operating in. The tools are changing week to week. The capabilities are expanding faster than most people can absorb them. And the execution layer, the part of work that used to consume most of our time, has been compressed to near zero for anyone willing to learn how to direct it.
Which raises the real question: if execution is essentially solved, what's the actual constraint?
It's You
I wrote about this a few months ago in "The New Bottleneck: Deep Thinking". The argument was that AI had collapsed the execution layer so dramatically that the limiting factor on progress shifted upstream. It's no longer about how fast you can produce. It's about whether you can think clearly enough to direct what gets produced. That's still true. But after a few more months of living inside this environment daily, I want to sharpen the point. Because "thinking" is too broad. There are specific capabilities that separate the people getting extraordinary results from the people who tried AI and moved on.
Thinking Inside the Chaos
When I have four agent sessions running, the challenge isn't managing them mechanically. It's maintaining clarity about what each one is doing and why. Recently I was building a client proposal in one window, reviewing meeting transcripts in another for a different client, debugging an n8n automation workflow in a third, and drafting an article in a fourth. The moment I lost track of the proposal's strategic framing because I'd been deep in automation debugging for twenty minutes, the quality of my direction to Claude dropped noticeably. The output went vague. Not because the AI got worse, but because my thinking got muddy. The ability to hold multiple complex mental models simultaneously, to context-switch without losing the thread, is the core skill. And it's a thinking skill, not a technical one.
Communicating with Precision
Every ambiguous instruction produces mediocre output that requires rework. I've learned this the hard way dozens of times. Early on, I would give Claude something like "write a proposal for this client." The result was generic and required three rounds of revision. Now I provide the client's industry context, the relationship history, the specific problems we discussed, the tone that fits this particular stakeholder, and the structure I want. One pass, usable output. The difference isn't that the AI got smarter between those two interactions. It's that I got better at articulating what I actually wanted. That's a communication skill that most people have never been pushed to develop to this degree. AI demands a level of clarity from the person directing it that most workflows never required before.
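To make the contrast concrete, here's the shape of the difference. Every specific below is invented for illustration, not taken from a real client:

```
Vague:    "Write a proposal for this client."

Specific: "Write a proposal for [client], a mid-market logistics firm
we've worked with for two years. Focus on the warehouse-scheduling
problems from our last two calls (transcripts attached). Tone: direct
and operational; the stakeholder is a COO who distrusts
consultant-speak. Structure: situation, three options with
trade-offs, recommendation, 90-day plan."
```

The second version does the thinking up front. The model isn't guessing at the audience, the problem, or the structure, so there's nothing generic for it to fall back on.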
Understanding Tools and Their Limits
Knowing what Claude can do is table stakes. Knowing where it hallucinates, when it needs more context, when to break a task into smaller pieces, when its confidence exceeds its accuracy: that's what separates the operator from the tourist. I had a session recently where Claude confidently produced a competitive analysis with three statistics that looked right but weren't. If I hadn't checked, those numbers would have gone into a client deliverable. The tool is powerful. It's also wrong sometimes. Knowing when to trust and when to verify is a judgment call you have to make dozens of times a day, and it requires genuine understanding of how these systems work.
Adapting Constantly
The tool I'm using tonight is materially different from the tool I was using two months ago. Features ship weekly. Capabilities expand. Patterns that worked last month get replaced by better approaches. I've rebuilt my personal workflow at least three times since December. Not because it was broken, but because new capabilities made the old approach unnecessarily complicated. If you're not willing to tear down what's working and rebuild it when something better becomes possible, you fall behind. This isn't a one-time learning curve. It's a permanent posture.
What This Adds Up To
I'm one person running a consultancy, producing client deliverables, publishing content, managing automations, and building a SaaS product. Not because I work more hours than anyone else. Because the execution layer is handled and I've spent the past year developing the skills to direct it effectively. As I wrote in "The Great Compaction", this isn't the old version of multitasking where you do many things poorly. It's orchestration. Multiple complex workstreams running in parallel, with AI handling the production and you handling the judgment.
These skills are learnable. They're not reserved for developers or people with technical backgrounds. But they do require something that no tool can give you: the willingness to change how you work, not once, but continuously.
The ceiling is gone for the people who can think clearly in the middle of all this. For everyone else, the floor is rising.