Working with Context
How to work with context in Cursor
First, what is a context window, and how does it relate to coding effectively with Cursor?
To zoom out a bit, a large language model (LLM) is an artificial intelligence model trained to predict and generate text by learning patterns from massive datasets. It powers tools like Cursor by understanding your input and suggesting code or text based on what it’s seen before.
Tokens are the inputs and outputs of these models. They are chunks of text, often fragments of words, that an LLM processes one by one. Models don’t read entire sentences at once; they predict the next token based on the ones that came before.
To see how a piece of text tokenizes, you can paste it into an online tokenizer tool.
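To make this concrete, here’s a minimal sketch using OpenAI’s tiktoken library (one tokenizer among many; the specific encoding name is an assumption, since different models use different tokenizers):

```python
# A minimal sketch of tokenization using OpenAI's tiktoken library.
# The encoding name is an assumption; different models tokenize differently.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "Turn that button from blue to green"
token_ids = enc.encode(text)

print(token_ids)  # a list of integer token IDs
print([enc.decode([t]) for t in token_ids])  # the text fragment behind each token
```

Common words often map to a single token while rarer strings split into several, which is why token counts don’t track word counts exactly.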
What is context?
When we’re generating a code suggestion in Cursor, “context” refers to the information that is provided to the model (in the form of “input tokens”) that the model then uses to predict the subsequent information (in the form of “output tokens”).
There are two types of context:
- Intent context defines what the user wants to get out of the model. For example, a system prompt usually serves as high-level instructions for how the user wants the model to behave. Most of the “prompting” done in Cursor is intent context. “Turn that button from blue to green” is an example of stated intent; it is prescriptive.
- State context describes the state of the current world. Providing Cursor with error messages, console logs, images, and chunks of code are examples of context related to state. It is descriptive, not prescriptive.
Together, these two types of context work in harmony by describing the current state and desired future state, enabling Cursor to make useful coding suggestions.
Providing context in Cursor
The more relevant context you can provide a model, the more useful its output will be. If insufficient context is provided in Cursor, the model will attempt the task without the information it needs. This typically results in:
- Hallucinations, where the model pattern-matches against patterns that aren’t actually there, producing unexpected results. This happens frequently for models like `claude-3.5-sonnet` when they aren’t given enough context.
- The Agent gathering context by itself by searching the codebase, reading files, and calling tools. A strong thinking model (like `claude-3.7-sonnet`) can go quite far with this strategy, but the initial context you provide still determines its trajectory.
The good news is that Cursor is built with contextual awareness at its core and is designed to require minimal intervention from the user. Cursor automatically pulls in the parts of your codebase that the model estimates are relevant, such as the current file, semantically-similar patterns in other files, and other information from your session.
However, there is a lot of context that could be pulled from, so manually specifying the context you know is relevant to the task is a helpful way to steer the models in the right direction.
@-symbol
The easiest way to provide explicit context is with the @-symbol. This works well when you know exactly which file, folder, website, or other piece of context you want to include; the more specific you can be, the better. Here’s a breakdown of how to get more surgical with context:
| Symbol | Example | Use case | Drawback |
|---|---|---|---|
| `@code` | `@LRUCachedFunction` | You know which function, constant, or symbol is relevant to the output you’re generating | Requires detailed knowledge of the codebase |
| `@file` | `cache.ts` | You know which file should be read or edited, but not exactly where in the file | May pull in a lot of irrelevant context for the task at hand, depending on file size |
| `@folder` | `utils/` | All or most of the files in a folder are relevant | May pull in a lot of irrelevant context for the task at hand |
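To illustrate, a single prompt can combine several @-symbols with a description of the problem (the file name, symbol name, and error message here are hypothetical):

```text
Using the caching logic in @cache.ts, update @LRUCachedFunction so entries
expire after 30 minutes. Here's the error I'm currently seeing:
"TypeError: Cannot read properties of undefined (reading 'ttl')"
```

This pairs intent context (the change you want) with state context (the relevant code and the error message).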
Rules
You should think of rules as long-term memory that you want yourself or other members of your team to have access to. Capturing domain-specific context, including workflows, formatting, and other conventions, is a great starting point for writing rules.
Rules can also be generated from existing conversations using `/Generate Cursor Rules`. If you’ve had a long back-and-forth conversation with lots of prompting, it probably contains useful directives or general rules that you may want to reuse later.
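As a rough illustration, a project rule might look something like the following (the frontmatter fields and the conventions shown are hypothetical; check Cursor’s rules documentation for the exact format your version expects):

```markdown
---
description: Conventions for API route handlers
globs: src/api/**/*.ts
alwaysApply: false
---

- Validate request bodies with the shared schema helpers before using them.
- Return errors as JSON objects with `code` and `message` fields.
- Never log payloads that may contain user credentials.
```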
MCP
Model Context Protocol (MCP) is an extensibility layer that lets you give Cursor capabilities to perform actions and pull in external context.
Depending on your development setup, you might want to leverage different types of servers, but two categories that we’ve seen be particularly useful are:
- Internal documentation: e.g., Notion, Confluence, Google Docs
- Project management: e.g., Linear, Jira
If you have existing tooling for accessing context and performing actions through an API, you can build an MCP server for it. Here’s a short guide on how to build them: https://modelcontextprotocol.io/tutorials/building-mcp-with-llms.
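As a sketch, a minimal Python server built with the official MCP SDK might expose an internal-docs search tool like this (the `search_docs` tool and its stub backend are hypothetical stand-ins for your own API):

```python
# A minimal MCP server sketch using the Python MCP SDK's FastMCP helper.
# `search_docs` and `fake_search` are hypothetical; replace the stub with
# a call to your real documentation API (Notion, Confluence, etc.).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("internal-docs")

def fake_search(query: str) -> list[str]:
    # Stub so the sketch runs standalone.
    return [f"No docs indexed yet for: {query}"]

@mcp.tool()
def search_docs(query: str) -> str:
    """Search internal documentation and return the top matches."""
    return "\n\n".join(fake_search(query)[:3])

if __name__ == "__main__":
    mcp.run()  # defaults to stdio transport, which Cursor can launch directly
```

Once the server is registered in Cursor’s MCP settings, the Agent can call `search_docs` whenever a task needs internal documentation.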
Self-gathering context
A powerful pattern many users are adopting is letting the Agent write short-lived tools that it can then run to gather more context. This is especially effective in human-in-the-loop workflows where you review the code before it’s executed.
For example, adding debugging statements to your code, running it, and letting the model inspect the output gives it access to dynamic context it couldn’t infer statically.
In Python, you can do this by prompting the Agent to:
- Add `print("debugging: ...")` statements in relevant parts of the code
- Run the code or tests using the terminal
The Agent will read the terminal output and decide what to do next. The core idea is to give the Agent access to the actual runtime behavior, not just the static code.
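As a hypothetical example, a function the Agent has instrumented might look like this; running it in the terminal surfaces the intermediate values:

```python
# Illustrative only: a function after the Agent has added debugging statements.
def apply_discount(price: float, discount: float) -> float:
    print(f"debugging: price={price}, discount={discount}")
    final = price * (1 - discount)
    print(f"debugging: final={final}")
    return final

if __name__ == "__main__":
    apply_discount(100.0, 0.15)
```

The printed values become part of the terminal output the Agent reads, turning runtime state into context for its next step.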
Takeaways
- Context is the foundation of effective AI coding, consisting of intent (what you want) and state (what exists). Providing both helps Cursor make accurate predictions.
- Use surgical context with @-symbols (@code, @file, @folder) to guide Cursor precisely, rather than relying solely on automatic context gathering.
- Capture repeatable knowledge in rules for team-wide reuse, and extend Cursor’s capabilities with Model Context Protocol to connect external systems.
- Insufficient context leads to hallucinations or inefficiency, while too much irrelevant context dilutes the signal. Strike the right balance for optimal results.