A U.S. senator recently posted a video of himself talking to an AI chatbot about the dangers of AI. It went viral. The production was slick, the exchange was compelling, and millions of people watched a public figure ask an AI to confirm what we already fear about this technology.
It was also a conversation that could have happened in 2023.
That's not a knock on the senator. The video was well-produced and the questions were reasonable. But the interaction it showcased (a person sitting across from a chatbot, steering it toward a predetermined conclusion) represents an understanding of AI that is already outdated. The technology has moved. The public conversation hasn't.
The Mental Model Is Stuck
Most people still think of AI as a talking search engine. You ask it a question, it gives you an answer, and you decide whether to be impressed or alarmed. That mental model made sense three years ago. It doesn't anymore.
The companies building this technology aren't making conversational assistants. They're building the infrastructure for how knowledge work gets done. That distinction matters, and almost nobody in public discourse is talking about it.
What's Actually Shipping
Consider what's actually shipping right now. AI systems that can read a codebase, plan changes across dozens of files, execute those changes, and verify the results. Platforms where an AI agent doesn't just answer your question but works alongside you, managing files, coordinating across tools, and operating inside the same environment you do. The jump from "chatbot that sounds smart" to "system that does work" has already happened. Most people just haven't seen it yet.
This is the gap that should concern us. Not the theatrical version of AI risk that plays well on social media, but the quiet reality that the tools reshaping knowledge work are evolving faster than our collective ability to understand them. When the public mental model is three years behind the actual technology, we end up with policy conversations that are solving yesterday's problems.
Politicians Are Natural Prompters
Here's where politicians specifically come into the picture.
Politicians are, by training and instinct, exceptional prompters. They know how to frame a question to produce the answer they want. They understand how to structure a conversation so the conclusion feels inevitable. These are exactly the skills that make someone effective with AI tools. Ask a clear question with the right context and constraints, and you get a useful result.
That's a genuine advantage, and it means elected officials may be among the people who benefit most from this technology. But it's also a risk. The same skill that makes someone a great prompter makes them great at producing AI-generated content that supports whatever narrative they're already running. The tool doesn't care about the direction. It responds to the skill of the person using it.
This isn't a partisan observation. It applies equally across the political spectrum. AI is an amplifier. It makes skilled communicators more productive, regardless of what they're communicating or why.
The Real Story Is Quieter
The real story here isn't that a senator had a viral moment with a chatbot. It's that while we're still debating whether AI can hold a conversation, the technology has quietly moved on to something much bigger. The companies building these platforms are constructing the new operating system for knowledge work. It will take years for the broader world to fully grasp that shift.
In the meantime, we'll keep getting well-produced videos of people talking to chatbots. They'll keep going viral. And the actual transformation will keep happening in the background, where almost nobody is looking.