The Enterprise "Brain" Is an Architecture Problem

Brandon Gadoci

April 21, 2026

Alex Lieberman posted something last week that captured where AI at work actually breaks down. "Someone is going to build a world-class 'Brain' for enterprises and make a stupid amount of money," he wrote, quoting the engineer @da_fant: "coding with AI is solved because all context is in the git repo. Knowledge work is difficult because context is spread out. An AI system that creates a git repo with all of that context would be valuable."

The framing is exactly right. The reason coding assistants feel magical is not the model. It is the container. A repo is a durable, structured, version-controlled representation of everything the AI needs to act: every file, every change, every decision encoded in the history. An AI agent can reason about a codebase because someone already did the hard work of putting the context in one place and keeping it there.

Knowledge work has no such container. The context lives in Slack threads, buried email chains, two competing Notion workspaces, a legacy SharePoint that nobody cleans up, half a dozen shared drives, a few personal drives, meeting recordings that nobody watches, and most of all, the heads of three people who happen to know how something actually gets done. When a new AI tool shows up, there is nothing to point it at. No repo. No single source of truth. No commit history.

The first instinct most organizations have is to buy a product that claims to solve this. Every connector vendor, every "enterprise AI platform," every knowledge management suite is pitching some version of "we unify your context." Some of those products are genuinely useful. None of them, today, are the Brain. And buying more tools without first understanding what you actually need from them is how companies end up with three AI platforms that share 40% of their features and none of their data.

Our view, from the work we do with mid-market and enterprise clients, is that the Brain is not something you can buy right now. In the meantime, what you can do is build the architecture that will make it useful when it arrives. The companies that will benefit most from whatever gets shipped are the ones that treated their context like an asset before a product arrived to use it.

That work is unglamorous. It looks like taking inventory. Where does the knowledge actually live? Who owns the Notion pages that describe the RFP process? Is the pricing playbook a document, a spreadsheet, or an unwritten rule that three people negotiate on every deal? What decisions are trapped in calendar invites and email chains that nobody has the authority to turn into something more durable? You cannot feed AI what you have not organized yourself.

It also looks like discipline. Every time a team captures a process in a Skill, a SKILL.md file, or a Custom GPT, they are doing the quiet work of moving context from someone's head into a container. Every time a team writes a decision log instead of relying on meeting memory, they are creating the equivalent of a commit. These feel like small acts. Stacked over months, they become the difference between an organization an AI system can actually help and one where every agent query hits a wall of unstructured sprawl.
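
To make that concrete, here is a minimal sketch of what one of those containers might look like, loosely following the SKILL.md convention (a markdown file with YAML frontmatter describing when the skill applies). The skill name, process steps, and file layout below are invented for illustration, not taken from any client:

```markdown
---
name: rfp-response
description: How we qualify, draft, and review responses to inbound RFPs.
  Use when a new RFP arrives or when drafting answers to a prospect's
  security or pricing questionnaire.
---

# RFP Response Process

## Qualify first
- Check deal size against the current pricing playbook (linked below),
  not the 2023 spreadsheet.
- If the RFP requires on-prem deployment, loop in Solutions before quoting.

## Draft
- Start from `templates/rfp-base.md`; reuse approved answers from the
  answer library rather than writing security responses from scratch.

## Review and log
- Legal reviews anything touching data residency.
- Record the final pricing decision and its rationale in the decision log,
  so the next deal does not renegotiate it from memory.
```

The content itself is deliberately mundane. That is the point: each file like this moves one process out of three people's heads and into something durable, versionable, and readable by both new hires and AI agents.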

It also looks like resisting the temptation to solve context with tools alone. Connectors matter. Enterprise search matters. A clean data layer matters. None of it matters much when the underlying information is contradictory, stale, or locked in one person's inbox. We tell clients this directly: if your AI pilots are producing generic outputs, look at the context before you blame the model. You are asking a very sophisticated reader to read a library with no shelves. A recent piece on modular context gets at the same idea from a different angle. Skills, Projects, and Connectors are early building blocks of exactly this kind of architecture, and teams that treat them as throwaway productivity tricks miss the larger shape of what they are building.

The L1/L2/L3 framing we use with clients is useful here. Level 1 is individuals using AI tools like ChatGPT or Claude in their own work. Level 2 is team and department-level workflow automation. Level 3 is custom applications built on proprietary context. Every level gets meaningfully better when the context underneath it is organized, and every level gets worse, faster, when it is not. The Brain Lieberman is describing sits at Level 3. Most organizations are not yet ready to absorb it, because their Level 1 and Level 2 foundations are still missing. You cannot bolt a reasoning layer onto chaos and expect coherence to emerge.

So the takeaway is simpler than it sounds. Inventory your context. Put the highest-leverage knowledge into durable containers. Treat Skills, Projects, and Connectors as the first scaffolding of a company-wide memory, not as isolated productivity features. Make decisions about what belongs in the Brain and what does not, before a vendor makes those decisions for you.

Someone will build what Lieberman is describing. Whether it is Tenex, Anthropic, OpenAI, or a startup nobody has heard of yet, the shape is clear enough that the market will produce it. The question for any operator reading this is the one the tweet implicitly asks: when that product lands, will your organization have anything worth feeding it? That answer is being written every day, in every decision about where knowledge lives and who is responsible for it, whether the team realizes it or not.
