
Context engineering is the practice of designing systems that determine what information an AI model sees before it generates a response to user input. It goes beyond formatting prompts or crafting instructions, instead shaping the entire environment the model operates in: grounding data, schemas, tools, constraints, policies, and the mechanisms that decide which pieces of information make it into the model’s input at any moment. In applied terms, good context engineering means establishing a small set of high-signal tokens that improve the likelihood of a high-quality outcome.
Think of prompt engineering as a predecessor discipline to context engineering. While prompt engineering focuses on wording, sequencing, and surface-level instructions, context engineering extends the discipline into architecture and orchestration. It treats the prompt as just one layer in a larger system that selects, structures, and delivers the right information in the right format so that an LLM can plausibly accomplish its assigned task.
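The "select, structure, and deliver" idea can be sketched in code. The following is a minimal, hypothetical context-assembly pipeline — the function names, layer tags, priority order, and token heuristic are all illustrative assumptions, not part of any specific framework: mandatory layers (policy, user query) always go in, optional layers (conversation history, retrieved documents) are admitted only while a token budget holds, and the result is delivered in a stable order.

```python
from dataclasses import dataclass

# Hypothetical sketch: names, layer tags, and ordering are illustrative
# assumptions, not the API of any real library.

@dataclass
class ContextBudget:
    max_tokens: int = 4000

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def assemble_context(system_policy, retrieved_docs, history, user_query,
                     budget: ContextBudget) -> str:
    """Select and order context layers, skipping low-priority items
    once the token budget would be exceeded."""
    # Mandatory layers: the policy and the query always make it in.
    layers = [("policy", system_policy), ("query", user_query)]
    # Optional layers in priority order: most recent history first,
    # then retrieved grounding documents.
    optional = [("history", h) for h in reversed(history)] + \
               [("doc", d) for d in retrieved_docs]
    used = sum(estimate_tokens(t) for _, t in layers)
    for tag, text in optional:
        cost = estimate_tokens(text)
        if used + cost > budget.max_tokens:
            continue  # this item would overflow the window; drop it
        layers.append((tag, text))
        used += cost
    # Deliver in a stable order: policy, grounding data, history, query last.
    order = {"policy": 0, "doc": 1, "history": 2, "query": 3}
    layers.sort(key=lambda kv: order[kv[0]])
    return "\n\n".join(f"[{tag}] {text}" for tag, text in layers)
```

A real system would replace the character-count heuristic with the target model's tokenizer and would rank retrieved documents by relevance rather than list order, but the shape is the same: a small set of high-signal pieces assembled under an explicit budget.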
What does ‘context’ mean in AI?
In AI systems, context refers to everything a large language model (LLM) has access to when producing a response — not just the user’s latest query, but the full envelope of information, rules, memory, and tools that shape how the model interprets that query. The total amount of information the model can process at once is called the context window. The context itself consists of several layers that work together to guide model behavior:

