I've been saying the same thing in front of audiences for a couple of years now. I first noticed it during my time at data.world, when I was running AI operations across the company and watching what separated the people getting real value from AI from the people who kept bouncing off it. The pattern hardened as I launched Gadoci Consulting and started working with many more clients across many more industries. I said it again this morning to a room of clients. I said it last fall in a different industry, and in November to another team. I've said it in every small session in between. It shows up at the end of every foundational AI session I deliver, in a single line: become a conductor of knowledge and context.
It isn't a slogan. It's the most accurate description I've found for what good AI work actually feels like once people stop fighting the tool. And the longer I've been working with this, the more I think it's also a description of where knowledge work is heading.
The Idea Has Outlasted Everything Around It
The ecosystem has churned in that time. Models have leapt forward. Tooling has come and gone. The brand of model people use changes constantly. Some clients live in Claude. Some live in ChatGPT. Some have standardized on Copilot. Others bounce between Gemini and Perplexity depending on the task. The harness around the model has gone from a plain chat box to persistent workspaces, reusable assistants, connectors, agents, and procedural skills. None of that has changed the underlying point. If anything, every new layer has made it more true.
That durability matters. It tells me this isn't a tip about how to phrase a prompt, or a trick that worked in one model era and stopped working in the next. It's something closer to a description of the work itself. People got better answers when they conducted in 2023, when context windows held a few thousand tokens. They get better answers conducting now, when those windows hold a million. They will get better answers conducting against the model that ships next year, whatever that model turns out to be.
The Habit We Have to Unlearn
Most people walk up to an AI tool carrying a habit they didn't know they had. Twenty-plus years of Google trained us to type half-sentences. "Capital of France." "Best time to visit Paris." "Paris culture." We were composing tiny keyword bursts, scanning ten blue links, and clicking back to start over. Transactional. One question, one answer, one bounce.
That habit doesn't translate. None of the modern AI tools are search engines wearing a costume. They're conversational systems that get better the more you feed them, refine them, and direct them. If you sat down with a travel agent, you wouldn't grunt three keywords at her. You would describe what you're trying to do, share what you already know, react to what she suggests, and circle back when something doesn't fit. That's the muscle people need to build with AI, and it's almost the opposite of the muscle Google built for them.
So the question I get asked, in some version, in every single session, is: what's the actual skill? If it isn't typing the perfect search query, and it isn't memorizing magic prompts, what is it?
The answer is conducting.
What Conducting Actually Means
A conductor doesn't play any instrument. She decides what gets played, when, by whom, at what volume, and how the parts hang together. She brings the score. She knows the room. She watches the musicians and reads the audience. The output sounds like one piece of music because someone in front of the orchestra was making sure the right inputs arrived at the right moment.
AI works the same way. The model is the orchestra. The notes it can produce are extraordinary, but they don't play themselves into anything useful. Someone has to decide which document to bring in first, what context the model needs to focus on, when to push back on a draft, when to branch into a new angle, when to compact down and start fresh. The skill isn't the prompt. The skill is the orchestration.
This is also the cleanest way I've found to explain why "prompt engineering" feels stale now. Three years ago, the artifact people obsessed over was the prompt itself. Long, structured, role-set, output-formatted, full of constraints and examples. That kind of prompt still works, but the modern models have absorbed most of what it was doing for you. They decide how hard to think. They decide whether to use the web. The clever phrasing matters less than people imagine, and the context you bring matters far more. The discipline that replaced it has a name now. Andrej Karpathy popularized the phrase in mid-2025 when he wrote that "context engineering is the delicate art and science of filling the context window with just the right information for the next step." Shopify's Tobi Lütke championed the same shift. LangChain, Anthropic, and LlamaIndex formalized it as a discipline. The 2026 State of Context Management Report has 82% of IT leaders saying prompt engineering alone is no longer enough. The terminology caught up to a way of working that was already happening in practice.
Conducting is what context engineering looks like in practice. The conversation is the live performance. Persistent workspaces are the rehearsal space. Reusable assistants are the ensemble cast. Standardized procedures are the sheet music. The terms each tool uses for those layers are different, but the layers are the same. You aren't a prompter anymore. You're the person making sure the right information is in the right place at the right time so the model can do something worth doing.
What It Looks Like in the Real Work
The shift from prompter to conductor changes what someone actually does at the keyboard. I show this every time I do a demo. I open a blank chat, drag in three documents (a customer brief, a past-work portfolio, a capability one-pager), and start a conversation. Summarize this. Go research the customer. Now look at our portfolio and tell me where we're strongest. Now draft me an executive summary using everything you have. Now sharpen it for this customer's vertical. Now what's one thing they might push back on?
Six turns in, the answer is sharper than anything a single prompt could have produced, and it's drawing on material that wasn't in the model's head when we started. Nothing about that workflow is clever phrasing. All of it is conducting. The draft got built because I knew what context to bring in, in what order, and where to redirect when the model drifted.
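That turn-by-turn flow can be sketched as a loop that accumulates context as it goes, which is why the sixth answer knows more than the first. This is a minimal illustration, not any vendor's API: the `model` function here is a stand-in for whatever chat model a team has standardized on, and the document names are the ones from the demo above.

```python
def model(context: list[str], instruction: str) -> str:
    # Stand-in for a real chat-model call. A real implementation would
    # send `context` plus `instruction` to whichever model you use.
    return f"[answer to {instruction!r}, grounded in {len(context)} prior items]"

def conduct(documents: list[str], instructions: list[str]) -> list[str]:
    context = list(documents)  # start with the source material, in order
    answers = []
    for instruction in instructions:
        answer = model(context, instruction)
        context.append(answer)  # each reply becomes context for the next turn
        answers.append(answer)
    return answers

turns = conduct(
    ["customer brief", "past-work portfolio", "capability one-pager"],
    ["Summarize this.", "Where are we strongest?", "Draft an executive summary."],
)
print(turns[-1])  # the final draft draws on the documents AND the earlier turns
```

The point of the sketch is the `context.append` line: the conductor's job is deciding what goes into that list, and in what order, before each turn.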
The same pattern repeats at scale. When teams build reusable assistants, they're really just deciding what context lives inside them. The assistants that work are the ones where someone thought hard about the role, the process, and the reference material before saving them. Standardized procedures sit a layer above that, where a workflow gets captured once and triggered automatically every time it's relevant. Every layer is another way of getting context into the right place before the model has to do anything.
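Seen this way, a reusable assistant is just context decided once and saved, so it can be prepended to every future conversation. A hypothetical sketch, with illustrative names only:

```python
from dataclasses import dataclass, field

@dataclass
class Assistant:
    """Context decided in advance: a role, a process, reference material."""
    role: str
    process: str
    references: list[str] = field(default_factory=list)

    def seed_context(self) -> list[str]:
        # Everything the model should see before the first user turn.
        return [self.role, self.process, *self.references]

proposal_writer = Assistant(
    role="You draft executive summaries for enterprise proposals.",
    process="Research the customer, map our strengths, draft, then sharpen.",
    references=["past-work portfolio", "capability one-pager"],
)
print(len(proposal_writer.seed_context()))  # role + process + two references
```

The thinking happens at save time, not at prompt time, which is why the assistants that work are the ones someone thought hard about before saving.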
A Glimpse Into the Future of Work
The durability matters for a reason bigger than getting a better answer in a chat session: conducting is the shape of knowledge work going forward.
Look at what's happening underneath the tools. Models have stopped being islands. They reach into Drive, into Slack, into SharePoint, into Jira, into Notion, into databases, into spreadsheets, into the file system on your machine. They can call other models. They can spawn other agents. They can run on a schedule whether you're at the keyboard or not. The work is no longer a single person typing into a single text box. It's a continuous flow of information moving across systems, with the model as one of several participants and the human as the one deciding what flows where and what counts as a finished answer.
That's what conducting actually describes at full scale. The conductor decides which sources to pull from. The conductor decides which agent does what step. The conductor decides when to integrate a result and when to send it back for revision. The conductor recognizes when the answer is converging on a real conclusion and when it's still chasing a tangent. The model can do extraordinary things in any single moment. None of those moments add up to a deliverable unless someone is shaping the flow.
The version of knowledge work that's coming is more like this than the version we grew up with. Less typing, more directing. Less "produce this document from scratch," more "bring information from these five systems together until it adds up to a defensible answer." Less individual contribution, more orchestration of contributors, human and otherwise. The work moves from generating output to converging on conclusions, and the people who can do that converging well are the ones whose judgment becomes the bottleneck and the differentiator.
The leaders of the next decade will be the ones who understand how to wield this. Not the people with the most access to AI. Not the people with the biggest budgets for tools. The ones who can sit at the center of a flow of information across systems, models, and teammates, and direct it toward conclusions that hold up. That's a leadership skill. It's also, in a quieter way, a craft skill, a frontline knowledge worker skill, an operator skill. Every level of an organization will reward the people who can do this, and penalize the people who can't.
This isn't a prediction from the sidelines. I'm working this way today, and so are a lot of the operators inside our client teams. None of the pieces are futuristic. They're available in tools anyone reading this can use. The more time I spend working this way, the clearer the shape of what's coming becomes, and the wider the gap looks between this and typing keywords into a search box.
The Standard
If you've been around a Gadoci Consulting session in the last two years, you've heard this in some form. It's the same idea I was working with at data.world, the same one that runs through every Gadoci engagement, the same one I'll deliver next week to a different audience in a different industry, using a different tool. The line between people who get value from AI and people who don't isn't about access, talent, time, or platform. It's about whether they've made the move from prompter to conductor.
That's the standard. Bring the right knowledge. Build the right context. Direct the orchestra. The good answers are on the other side of that, and so, increasingly, is the future of the work.