Eight weeks of agent chaos just became a lot clearer
In late January, a developer named Peter Steinberger released Clawdbot, an open-source, always-on AI agent accessible through WhatsApp, Signal, and Telegram. The project hit 25,000 GitHub stars in a single day. By March 2, it had 247,000. The energy was real: user meetups, a thriving community, an entire AI agent social network called Moltbook that reached 1.6 million registered agents by February. People wanted this. The idea of an always-on AI agent you could text like a friend clearly struck a nerve.
On January 27, Anthropic sent a cease-and-desist letter: the Clawdbot name, they said, infringed on the Claude trademark. Steinberger renamed it to Moltbot, then OpenClaw. The internet erupted. Anthropic looked heavy-handed, corporate, like they were squashing innovation out of pure trademark anxiety.
There was an irony that made it sting: Steinberger had been recommending Claude as the best model to use with OpenClaw. He wasn't attacking Anthropic. He was building with their technology and sending them users. The C&D felt like punishing someone for being a fan.
OpenAI hired Steinberger on February 14, bringing him onto the agent team with a stated mission to "bring agents to everyone." OpenClaw moved to an independent foundation with OpenAI as a primary sponsor. The story wrote itself: Anthropic pushed away the builder, OpenAI welcomed him in.
Then the Pentagon drama happened. Anthropic got blacklisted by the Trump administration for refusing to drop red lines on autonomous weapons and mass surveillance. Sam Altman told OpenAI employees he stood with Anthropic, that OpenAI shared the same commitments. Hours later, OpenAI announced their own Pentagon deal. Altman later admitted it "looked opportunistic and sloppy" and that they "shouldn't have rushed." He renegotiated the terms to add explicit prohibitions on domestic surveillance. All of this played out in public, in about 72 hours.
If you follow AI closely, the last two months have been as watchable as anything in tech.
What Dispatch Actually Does
Today, March 17, Anthropic launched Claude Cowork Dispatch. You control a Cowork session from your phone. Send a message, describe what you need, and it runs the task inside a sandboxed virtual machine. No direct file system access. The sandbox is the same isolation model Anthropic built for Cowork itself, with default-deny networking and hard boundaries around what the agent can touch.
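If "default-deny networking" sounds abstract, here is a minimal, hypothetical sketch of the idea. This is not Anthropic's implementation or API, and the hostnames are made up; it only illustrates the principle that nothing inside the sandbox can reach the network unless that destination was explicitly allowed ahead of time.

```python
# Hypothetical sketch of a default-deny egress policy -- not Anthropic's
# actual sandbox code, just the principle: every outbound request is
# blocked unless its destination is on an explicit allowlist.

ALLOWED_HOSTS = {"internal-tool.example.com"}  # hypothetical allowlist

def egress_permitted(host: str) -> bool:
    """Default-deny: allow a host only if it was explicitly allowlisted."""
    return host in ALLOWED_HOSTS

if __name__ == "__main__":
    for host in ("internal-tool.example.com", "exfiltration-site.example.net"):
        verdict = "allow" if egress_permitted(host) else "deny"
        print(f"{host}: {verdict}")
```

The contrast with OpenClaw's model is the direction of the default: there, the agent could reach anything unless you locked it down yourself; here, it can reach nothing unless something has been deliberately opened.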
Setup takes thirty seconds. Twenty dollars a month. No API keys, no hosting costs, no configuration.
The core value proposition is the same as OpenClaw's: an always-on AI agent you can message from your phone. The security approach is fundamentally different.
Ethan Mollick, who'd been experimenting with OpenClaw, posted about the distinction after trying Dispatch. "Claude Cowork Dispatch covers 90% of what I was trying to use OpenClaw for, but feels far less likely to upload my entire drive to a malware site." The post got 91,500 views. The concern he's describing is the same one most people I know raised immediately about OpenClaw. Gartner called the project "insecure by default"; Cisco's security team described it as a "security nightmare." The capability was exciting. The threat model made it a non-starter for anyone working in a professional environment.
Why the C&D Reads Differently Now
I don't know whether Anthropic was already building Dispatch when they sent that cease-and-desist, or whether OpenClaw's viral moment accelerated their timeline. It doesn't change what the C&D communicated. Anthropic didn't want their brand associated with an open-source agent project that had no sandboxing, no isolation, and full access to everything on your machine. Whether they already had their own version in progress or decided to build one after seeing the demand, the message was the same: if this is going to carry the Claude name, it needs to be built differently.
That's a harder position to take than it sounds. The easy move would have been to embrace Steinberger, sponsor the project, and ride the community energy. Anthropic chose trademark enforcement instead, and they took real reputational damage for it. Eight weeks later, looking at what they shipped, the logic is at least clearer. They wanted to control how always-on agents interact with people's computers, and that meant controlling the product.
What I'm Watching
I work with teams deploying AI into real environments. Most of the people I talk to looked at OpenClaw and immediately said no. The capability was obvious and appealing, but nobody wanted to hand an unsandboxed agent the keys to their file system. That reaction happened in the first week, not after some gradual realization.
Dispatch doesn't answer every question those teams have. It's a research preview, the reliability isn't perfect yet, and there are still open questions about what happens when you scale this across an organization. But the security model is built into the foundation rather than bolted on afterward, and that matters when you're evaluating whether to put something like this in front of a team.
I'm more excited about Anthropic's approach to this space than I am about OpenClaw or OpenAI's acqui-hire of its founder. That's a considered opinion based on spending every day in rooms where people are trying to figure out how to actually use AI agents at work. The eight weeks of drama were genuinely entertaining. The product that came out the other end is what I'm paying attention to now.