Expanding Context Windows: Ours and Theirs

Brandon Gadoci

February 11, 2026

One of the interesting phenomena I've noticed over four years of working with AI is that the increased capacity to do more comes with both a privilege and a burden. AI tools have compressed execution time so dramatically that the number of things you can realistically be involved in has expanded well beyond what was possible before. That's the privilege. The burden is that every one of those workstreams requires you to get back up to speed quickly when you return to it.

Context switching has become a skill in itself. Not the shallow kind where you're bouncing between Slack and email, but the deep kind where you're picking up a complex project you haven't touched in three days and need to make meaningful progress in the next hour. The ability to quickly reload where things stand, what decisions were made, and what needs to happen next is now one of the most valuable things a knowledge worker can develop.

If that sounds familiar, it should. It's the same problem LLMs are solving on their side.

Large language models operate within a context window: a finite amount of information, measured in tokens, that they can hold and reason about at one time. When a conversation exceeds that window, the model loses track of earlier details. The entire AI industry is racing to expand these windows, because the more context a model can hold, the more useful it becomes. A model that remembers your full project history is dramatically more helpful than one that forgets what you said ten messages ago.
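The overflow behavior described above can be sketched in a few lines of Python. This is a toy illustration under loose assumptions, not any real model's implementation: it treats word count as a stand-in for real tokenization and keeps only the most recent messages that fit the budget.

```python
def visible_context(messages, window=8):
    """Return the most recent messages that fit in a fixed token budget."""
    kept, used = [], 0
    for msg in reversed(messages):   # walk newest-to-oldest
        tokens = len(msg.split())    # crude proxy for real tokenization
        if used + tokens > window:
            break                    # older messages fall out of view
        kept.append(msg)
        used += tokens
    return list(reversed(kept))      # restore chronological order

history = [
    "plan the launch",
    "budget is 50k",
    "ship on Friday",
    "confirm with legal",
]
print(visible_context(history, window=8))
# → ['ship on Friday', 'confirm with legal']
```

The oldest two messages are silently dropped once the budget is exhausted, which is exactly the "forgets what you said ten messages ago" failure mode. Expanding the window raises the budget; it doesn't change the underlying trade-off.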

The parallel is hard to ignore. Both humans and AI tools are developing the same core capability for the same reason: hold more context, switch between contexts faster, and maintain coherence across a growing number of parallel workstreams. The models are getting better at it through engineering. We need to get better at it through discipline.

There's a chicken-and-egg quality to this that I find genuinely interesting. Are these tools a reflection of our own cognitive patterns, or are they reshaping how we work? Probably both. The architecture of LLMs was inspired by how humans process information, and now the demands of working alongside them are pushing us to become more like them in return. We're converging.

What does that discipline look like in practice? It starts with how you capture and organize information. The people who thrive in high-context-switching environments aren't the ones with the best memories. They're the ones with the best systems. Meeting notes structured so you can scan them in 30 seconds. Project documents that lead with current status and next actions, not background. Decision logs that tell you what was decided and why, so you don't re-litigate the same questions.

And there's an irony here too. The tools that created this problem are also the best tools for solving it. AI is excellent at summarizing where things stand, pulling together context from multiple sources, and getting you back up to speed. The people who figure out how to use AI not just for doing the work but for managing the context around the work will have a real advantage.

The context window is expanding on both sides. The models are getting better at holding more. The question is whether we are too.
