
Prompt Engineering Was Step One. Managing Context Is the Job Now.

Brandon Gadoci

May 13, 2026

We've been giving a talk on managing context in ChatGPT to a handful of clients this quarter, and I want to lay out the through-line here so anyone who didn't make it into a session can still get the gist. It's also the closest thing we have right now to a complete picture of what we think people should actually be doing with the tool, beyond the basics.

The talk takes about an hour to walk through. This post follows its structure, with one bonus: the coached exercise prompt at the end is included so you can run it on yourself in about 30 minutes.

What changed since 2022

ChatGPT launched on November 30, 2022. One million users in five days. At the time, the fastest-growing consumer tech product ever. Most early usage was trivial: poems, rap songs, novelty experiments. Almost immediately, though, people noticed something simpler: change how you ask, change what you get.

That observation became prompt engineering, and prompt engineering became a craft for a couple of years. The craft was about finding the right phrasing. Assign a role. Structure with markdown. Show examples. Be specific. Ask the model to show its work. People wrote guides. There were conference talks. Whole consulting practices grew up around it.

We were doing that with our clients too. And it worked, sort of. The output got better. But the work felt like memorizing search-engine syntax. Type less. Strip the question to its keywords. Find the magic shape.

Then a few things changed.

First, the context windows in the models grew. The original ChatGPT could only hold around 4,000 tokens of working memory. Today's frontier models hold a million, which is several full-length novels' worth of text. The question stopped being "how do I cram everything into a short prompt" and became "what should I put in this much larger window."

Second, the tooling grew up. Memory that persists across chats. Custom Instructions. Custom GPTs. Projects. Apps that connect to your real data. Skills that load automatically when they're relevant. Each of these is a way of putting more context in front of the model without having to type it again every time.

Third, and the part most people gloss over, this is the first probabilistic technology most of us use every day. The Google paradigm we trained ourselves on for two decades is deterministic. Type a query, get the same results. Click a button, get the same screen. Same input, same output. ChatGPT is not that. It's predicting the most likely next word based on whatever's in its context window. The same prompt can produce different outputs. The lever isn't a magic phrase. The lever is the context.
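The deterministic-versus-probabilistic distinction is easy to see in miniature. This is a toy sketch, not how any real model works internally: a hand-made probability distribution over possible next words, compared against a deterministic lookup. The words and probabilities are invented for illustration.

```python
import random

def next_token(distribution, rng):
    """Sample the next token from a probability distribution,
    roughly the way a language model picks each word."""
    tokens = list(distribution)
    weights = [distribution[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# A toy distribution over possible next words after "The report is".
dist = {"ready": 0.5, "late": 0.3, "attached": 0.2}

# Deterministic lookup: same input, same output, every time.
argmax = max(dist, key=dist.get)

# Sampling: same input, different outputs across runs.
rng = random.Random()
samples = {next_token(dist, rng) for _ in range(200)}

print(argmax)        # always "ready"
print(sorted(samples))  # usually all three words show up
```

The deterministic path returns the same answer forever; the sampled path gives you a different word on different runs from the identical input. That variability is why the useful lever is what you load into the window, not a magic phrase.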

That shift, from prompt engineering to context engineering, is the rest of this post.

The term itself comes mostly from a single Andrej Karpathy tweet that pointed out what's actually going on under the hood. "Context engineering is the delicate art and science of filling the context window with just the right information for the next step." That's the frame we work from now, and it's the frame we use with clients.

What context actually is

The simplest way to think about context is as the model's short-term memory. Your short-term memory has limited capacity. Older details slip first. It resets between conversations. The context window works the same way. It has a fixed size. The oldest messages drop first as you fill it. It resets when you start a new chat.

The other useful metaphor is a smart intern. The model is sharp, motivated, and fast. It knows nothing about your business on day one. Everything useful it does for you comes from what you tell it. The same way you'd brief a new hire before asking them to draft a memo, you have to give the model the context it needs.

People who get the best output from ChatGPT do this without thinking. They explain the audience. They describe what they want. They paste in an example. They iterate when the first answer misses. People who get the worst output type a question and expect telepathy.

The moves inside a chat

This is where the talk spends most of its time, because this is where the average user is leaving the most output quality on the table.

Set the basics once. In ChatGPT settings, fill in Custom Instructions: who you are, how you want responses. Turn on Memory. Connect any apps your organization allows (Drive, Gmail, Calendar, Slack, GitHub, Notion). Sixty seconds of setup that improves every chat you have from then on. Most people skip it entirely.

Build context in a conversation. Don't one-shot the prompt. This is the biggest shift from the prompt-engineering era. Instead of trying to write the perfect prompt up front, type a thin first prompt and let the model ask you questions back. Each answer is context. Each correction is context. After a few turns, the model has a much richer picture of what you want than any one-shot prompt could have given it.

Branch the chat when you want to explore. Most ChatGPT users don't know this exists. Click the three-dot menu under any AI reply and you'll find "Branch in new chat." That creates a copy of the conversation up to that point. You can take the copy in a different direction, knowing the source thread stays intact. Useful when a conversation is going well and you want to try a variant without spoiling what you have.

Compact long chats. Once a chat gets long enough that the oldest messages are starting to drop out, ask the model to summarize the conversation so far. The summary stays in the window while older raw messages fall out. Some other tools handle this automatically. ChatGPT doesn't yet, so you have to do it yourself when the chat starts feeling forgetful.
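The compaction move is simple enough to sketch. In ChatGPT you do this by hand (ask for a summary, carry it forward); the sketch below just makes the mechanics explicit, with a stub standing in for the model's summarizer. All names and thresholds here are illustrative.

```python
def compact(messages, summarize, max_messages=6, keep_recent=2):
    """When a chat grows past max_messages, replace the oldest turns
    with a single summary message and keep the recent turns verbatim.
    `summarize` stands in for asking the model to summarize; here it is
    any callable that turns a list of messages into one string."""
    if len(messages) <= max_messages:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    return ["Summary of earlier conversation: " + summarize(old)] + recent

# Stub summarizer for illustration; in ChatGPT you'd literally ask
# "summarize the conversation so far" and paste the result forward.
stub = lambda msgs: f"{len(msgs)} earlier messages about the Q3 memo"

chat = [f"turn {i}" for i in range(1, 9)]   # 8 turns, over the limit
compacted = compact(chat, stub)
print(compacted)
# ['Summary of earlier conversation: 6 earlier messages about the Q3 memo',
#  'turn 7', 'turn 8']
```

The trade is deliberate: the summary costs a few tokens but preserves the gist of turns that would otherwise vanish entirely.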

Files you upload live in the Library. Every PDF, spreadsheet, deck, or screenshot you've ever dropped into a chat lives at chatgpt.com/library. You can attach any of them to a new conversation with the plus icon in the composer. People often re-upload the same file over and over because they don't know it's saved.

Saving good context out

Once you've built a context-rich conversation that works, the next move is to promote it into something reusable. ChatGPT gives you three shapes for this.

A Custom GPT is the standalone version. Paste in instructions, optionally upload files, give it a name and description. Shareable with a link. Works well for personal tools and workflows that are clearly self-contained.

A Project is a workspace for related chats. Add files and instructions, invite teammates, and every chat inside the project starts with that shared context. Good for client work, recurring deliverables, or anything that spans many related conversations.

A Skill is the newest and least-understood option. A skill is a packaged set of instructions the model loads automatically when it's relevant, across many chats. Travels with you across tools. It's the format the whole industry is converging on. Claude has them. ChatGPT now has them. The same skill folder works across both.
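The format is still settling, but the emerging convention is a folder containing a SKILL.md: a short frontmatter that tells the model when to load the skill, followed by the instructions themselves. Everything below is a hypothetical sketch of that shape, not any vendor's exact spec; the skill name and contents are invented.

```markdown
---
name: status-report-writer
description: Drafts the weekly status report in our standard format.
  Use when the user asks for a status report or weekly update.
---

# Status report writer

When the user asks for a status report:

1. Ask for this week's wins, blockers, and next steps if not provided.
2. Write in our house voice: direct, no filler, under one page.
3. Use the three-section format: Wins, Blockers, Next Steps.
4. End with one line naming the single most important item.
```

The description in the frontmatter does the heavy lifting: it's what the model reads to decide whether the skill is relevant to the current chat.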

The trick most people miss is that you don't have to design the instructions for any of these yourself. You can have the model write them for you. After a conversation that's gone well, type something like "Based on this conversation, write the system prompt for a Custom GPT that does what we just built. Include the role, the audience, the success criteria, the constraints, and the example I shared." The model gives you a clean, structured set of instructions. Paste them into the Configure tab. Save.

That move, conversation to reusable artifact, is the whole point of context management. Build it once. Use it forever.

The multiplier

There's a moment in the talk where this clicks for the room. One hour of context-shaping for yourself becomes a tool a hundred people on your team can use daily. That's where the curve goes vertical.

Most companies are stuck at the personal-productivity tier of AI adoption. Each person has their own tricks, their own prompts, their own workflows. None of it scales. The shift from prompts to context is also a shift from individual productivity to organizational capability, but only if the artifacts actually get shared.

We see this constantly. The teams that move fastest are the ones whose AI operators build a Custom GPT once, share it with the team, and now everyone is producing the same quality of work without each person having to rediscover the same patterns. Personal setup becomes organizational capability. That's where micro-innovation scales.

What to do this week

If you take nothing else away, do these six things.

  1. Update your settings in ChatGPT. Personalization, memory, the basics most people skip.
  2. Connect your apps. Calendar, Gmail, Drive. Give the model a way to see your world.
  3. Build context in a conversation. Don't one-shot the prompt. Iterate.
  4. When the chat gets useful, branch it. Keep the source thread alive. Explore in a copy.
  5. Ask ChatGPT to write the instruction for a Custom GPT, Project, or Skill that captures what you just did.
  6. Run the exercise below. It's the 30-minute walkthrough that takes you all the way through to a working artifact.

Going further

A short reading list if you want to dig in.

If you read one book on living and working with AI, read Ethan Mollick's Co-Intelligence. It's the clearest guide we've seen for non-technical readers, and the framing pairs cleanly with everything in this post.

The exercise

Open a fresh ChatGPT chat. Paste the prompt below. Spend the next 30 minutes letting it walk you through the full arc described above. By the end you'll have a working Custom GPT, Project, or Skill you can use on Monday morning.

# Your role

You are my Context Building Coach for the next 30 minutes. I just
finished reading a piece on managing context in ChatGPT, and I want to
use this conversation to actually apply what I learned. By the end
I'll have a working Custom GPT, Project, or Skill I can use for my
own work starting Monday.

This isn't a theoretical walkthrough. We're building something I'll
keep.

# How we'll work together

- Ask me one question at a time. Wait for my answer before moving on.
- Keep your responses tight. No long explanations unless I ask.
- When you give me an action, be specific: where to click, what to
  type, what to expect.
- After each major step, name what just happened and why it matters.
  Tie it back to context, memory, branching, custom GPT, project, or
  skill.
- We have 30 minutes. If we're falling behind, get decisive about
  narrowing choices.
- Don't summarize this prompt back to me. Just start.

# The arc

We'll move through five phases:

1. Foundations: get the basics set up.
2. Use case: pick one specific workflow to focus on.
3. Build context: turn a thin prompt into something that actually
   works, through conversation.
4. Save it out: promote the conversation into a Custom GPT, Project,
   or Skill.
5. Test it: use the new artifact on a fresh task.

Begin with Phase 1.

# Phase 1. Foundations (about 5 minutes)

Walk me through three checks. Skip any I've already done.

Personalization. Settings → Personalization → Custom Instructions.
Have me fill in:
- What I do (one sentence about my role).
- How I want ChatGPT to respond (formal, casual, structured, terse,
  whatever fits me).

Memory. Same panel. Make sure the Memory toggle is on. Briefly tell
me what Memory does (so I know what's getting saved across chats).

One connected App. Ask which Apps I have available (Gmail, Drive,
Calendar, Slack, GitHub, Notion, etc.). Pick one with me and walk me
through enabling it. If I don't have any available, skip and tell me
what to ask my IT person.

When all three are done, summarize what I just did in one sentence
and move on.

# Phase 2. Pick a use case (about 5 minutes)

Ask me my role and what kind of work I do. Then ask me to name 2 or 3
specific tasks I do regularly that take longer than they should or
feel repetitive. Common candidates: weekly status reports, customer
follow-up emails, meeting prep, recurring summaries, draft documents
in a known format, briefs, RFP responses.

Help me pick the one we'll focus on. Push back if my choice is too
vague. We need something narrow enough to finish a real artifact in
25 more minutes.

Once we have it, restate the use case in one sentence: "We're
building a [thing] that does [specific task] for [audience]." Confirm
with me.

# Phase 3. Build a context-rich conversation (about 10 minutes)

This is the core of the exercise. We're going to turn a thin prompt
into a context-rich one through conversation. Stop one-shotting
prompts, start building context.

Ask me, one question at a time:
- Who is the audience for the output?
- What does a good output look like? (Length, tone, structure.)
- Do I have an example I can paste in? (Real or anonymized.)
- What constraints matter? (Compliance, voice, length, format.)
- What's the typical input I'll start from when I use this for real?

After my answers, synthesize them into a context-rich prompt. Show it
to me as a code block. Have me run it on a real input.

The first output won't be perfect. Ask me what's wrong. Help me
revise the prompt to fix the issues. Usually the gap is something we
missed in the questions above. Iterate until I'd actually use the
output.

Throughout, point out the moments we're building context: when I
paste in an example, that's context. When I tell you the audience,
that's context. When I refine after an output, that's context. The
prompt at the end of Phase 3 is the artifact we'll save out.

Bonus: branch the conversation. Once we have something working,
suggest I branch the chat in a different direction. Try a variant:
different audience, different format, different length. Walk me
through where the "Branch in new chat" option is (in the ... menu
under any AI reply). This is how I explore without losing the source
thread.

# Phase 4. Save it out (about 8 minutes)

Now we promote the conversation into a reusable artifact. First, help
me pick the right shape.

Ask:
- Will I be the only one using this, or will my team use it too?
- Does it need persistent files I'd keep updated?
- Should it apply automatically in any chat, or only when I open the
  right tool?

Use my answers to recommend ONE of:

Custom GPT — if it's my personal tool, possibly shareable via link,
and the workflow is self-contained.

Project — if it's tied to a body of work (a client, an account, a
quarter), needs persistent files, and a small team will share it.

Skill — if it's a pattern that should apply automatically whenever
it's relevant, across many chats.

If I'm on ChatGPT Free, Skills aren't available and Projects are
limited; Custom GPT is the best option. Tell me this if it applies.

Once we pick the shape, walk me through it.

Step 1: Extract the system prompt. Have me send this message:

"Based on this conversation, write the system prompt for a [Custom
GPT / Project / Skill] that does what we just built. Include the
role, the audience, the success criteria, the constraints, and the
example I shared. Format it for the [Custom GPT Configure tab /
Project instructions field / SKILL.md file]."

After ChatGPT generates it, show it to me as a code block and tell
me to copy it.

Step 2: Create the artifact. Walk me through the actual creation
flow, step by step. Confirm I completed each step before moving on.

If Custom GPT: Click my profile or sidebar → GPTs → Create. Paste
the system prompt into the Configure tab. Add a name, short
description, and 2-3 conversation starters. Upload any reference
files I have. Save and copy the share link.

If Project: New Project from the sidebar → name it after the use
case → paste the system prompt into Project Instructions → upload
any reference files → invite teammates if applicable.

If Skill: Skills directory → New Skill → paste the system prompt
into SKILL.md → upload any assets → save.

Don't rush. Confirm I'm successful at each click before moving on.

# Phase 5. Test it (about 3 minutes)

Now I have a working artifact. Have me use it on a fresh real input,
not the same one we used to build the prompt.

Ask:
- Does it work as expected on a different input?
- What did it get right?
- What still needs polish?

Help me name one or two refinements to make to the system prompt
before I really start using this. Show me where to edit the artifact
to make those changes.

# Wrap up (about 2 minutes)

Ask me:
- What surprised you?
- What's one thing you'll do differently in ChatGPT this week?
- Who on your team would benefit from this artifact?

If the answer to the last question names anyone, encourage me to
share the artifact with them.

# Tone

Encouraging but direct. You're the colleague who has used this stuff
a lot and is helping me catch up. When I do something right, just say
so. When I'm stuck, slow down and re-explain. No hype. No filler.

# Start now

Begin with Phase 1, one question at a time.

Going through that exercise is the whole point. You're building context as you go. Every answer you give the coach, every example you paste in, every refinement after a bad output. That's the lesson.

Mollick said it most cleanly: on some tasks AI is immensely powerful, on others it fails completely or subtly, and unless you use it a lot, you won't know which is which. The way through is to use it more deliberately. Managing context is what that looks like in practice.
