Context Engineering
Effective and optimal communication with LLMs
Context Engineering is the foundation of any interaction with large language models (LLMs). This practice controls how the Umbraco Developer MCP Server delivers precise, reliable, and efficient results. It shapes what the LLM sees and understands about your request to produce the best possible responses.
What is Context
At its most basic level, context is the conversation between you and a tool like ChatGPT. It includes:
The entire message history (your inputs and the model’s outputs).
The most recent system and user instructions that define the current topic or task.

LLMs are stateless: they do not retain information between requests. Each time you send a message, the entire context is sent to the model, including the conversation history and any injected system data.
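Statelessness means the client, not the model, carries the conversation. A minimal sketch (illustrative only, not the Umbraco MCP's actual API) of a chat loop that resends the full history on every turn:

```python
# Because the model is stateless, the client resends the whole
# conversation on every request. All names here are illustrative.
history = [
    {"role": "system", "content": "You are an Umbraco development assistant."},
]

def send(user_message, call_model=lambda msgs: "(model reply)"):
    """Append the user turn, send the FULL history, record the reply."""
    history.append({"role": "user", "content": user_message})
    reply = call_model(history)  # the entire context goes out each time
    history.append({"role": "assistant", "content": reply})
    return reply

send("How do I create a document type?")
send("Add a title property to it.")  # only resolvable because turn 1 is resent
```

The second message only makes sense to the model because the first exchange travels with it; nothing is "remembered" server-side.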
You can never fully control what an LLM will return. You can only influence the outcome, and context is the one lever you have for doing so.
What is Context Engineering
Context engineering is the practice of providing the LLM with only the information it needs to produce accurate and reliable results.
It’s about curating and managing what gets sent to the model:
Ensuring the context is correct, concise, and relevant for the current task.
Avoiding information overload — sending too much or contradictory context can confuse the model and lead to poor-quality responses or hallucinations.
Why Context Engineering Is So Important
In the early days of large language models (LLMs), the context of a conversation was simple: only your messages and the model's responses. Conversations were short and easy to follow, but even then you could see context drift. As earlier parts of the discussion faded, the model's grasp of them weakened and response quality declined.
Today, however, the landscape has changed dramatically.
Modern AI systems rely on increasingly complex and layered context, which includes far more than the user conversation alone. A single MCP-driven interaction may now contain:
A system prompt (the invisible instructions defining the model’s role and tone).
Rules or instruction files that constrain or enhance model behavior.
MCP definitions, which describe how external tools and data sources can be used during a conversation.
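The layers above all end up in the same request. A hedged sketch of how such a request might be assembled (the field names and structure are assumptions for illustration, not any actual wire format):

```python
# Hypothetical composition of one MCP-driven request. Every piece here,
# including the tool schemas, consumes space in the context window.
def build_request(system_prompt, rule_files, tool_definitions, history, user_message):
    return {
        # system prompt plus any rules/instruction files
        "system": "\n".join([system_prompt, *rule_files]),
        # MCP tool definitions describing available external tools
        "tools": tool_definitions,
        # full conversation history plus the new user turn
        "messages": history + [{"role": "user", "content": user_message}],
    }
```

Seen this way, enabling more rules or more tools is never free: each one displaces room that the conversation itself could use.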
All of these elements must fit inside the model’s context window — the limited amount of information the model can “see” at once. The most advanced models today have larger but still finite context windows, so how you fill that space still matters.

If too much irrelevant, poorly structured, or contradictory information is included, useful parts of the context may get pushed out or forgotten. This leads to confusion, incomplete answers, or hallucinations. That’s why context engineering is more important now than ever — it’s about managing this limited space carefully and intentionally.
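The "pushed out or forgotten" effect can be made concrete with a deliberately naive trimming sketch (the token estimate and strategy are assumptions, real clients use real tokenizers and smarter policies):

```python
# Naive illustration of context pressure: when the conversation exceeds
# the budget, the oldest turns are the first to disappear.
def trim_to_budget(messages, budget, tokens=lambda m: len(m["content"]) // 4):
    """Keep the system prompt; drop oldest turns until under budget."""
    system, rest = messages[:1], messages[1:]
    while rest and sum(map(tokens, system + rest)) > budget:
        rest.pop(0)  # earliest context is forgotten first
    return system + rest
```

Anything irrelevant you let into the window hastens the eviction of the parts you actually need.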
How This Affects the Umbraco CMS Developer MCP
In the Umbraco CMS Developer MCP (Model Context Protocol), context engineering is applied through structured tool contexts and well-defined prompts. This makes requests more effective, efficient, and more likely to succeed. It also makes prompts easier to write, reuse, and maintain.
Your choice of enabled tools directly shapes the quality of your context. By managing which tools and tool collections are active, you control how much information is sent to the model. This improves both performance and response reliability.
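In practice, enabling fewer collections means fewer tool schemas in the request. A small sketch of the idea (the collection and tool names below are invented for illustration and do not reflect the Umbraco MCP's actual catalogue or configuration):

```python
# Illustrative only: collection and tool names are assumptions.
ALL_TOOLS = {
    "document-types": ["create-document-type", "get-document-type"],
    "media": ["upload-media", "get-media"],
    "members": ["create-member"],
}

def active_tools(enabled_collections):
    """Fewer enabled collections -> fewer tool schemas sent -> smaller context."""
    return [t for c in enabled_collections for t in ALL_TOOLS.get(c, [])]

active_tools(["document-types"])  # only document-type tools reach the model
```

Scoping the active tools to the task at hand keeps the context window focused on what the model actually needs.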
For more information, see Tool Collections.