Our option for enabling maximum context windows; requests will be slower and more expensive.
Normally, Cursor uses a context window of 128k tokens (roughly 10,000 lines of code). Max Mode is our option to turn on the maximum context window for every model, at the cost of slower and more expensive requests. It is most relevant for models with 1M-token context windows, such as Gemini 2.5 Pro and GPT-4.1.
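As a rough back-of-the-envelope illustration (not Cursor's actual token accounting), the ratio implied above, 128k tokens to about 10,000 lines, works out to roughly 12-13 tokens per line of code. The sketch below uses that assumed ratio to estimate how many lines fit in different context window sizes; the `lines_that_fit` helper and the tokens-per-line constant are assumptions for illustration only.

```python
# Back-of-the-envelope estimate: how many lines of code fit in a context window,
# assuming ~12.8 tokens per line (derived from 128k tokens ~= 10,000 lines).
TOKENS_PER_LINE = 128_000 / 10_000  # assumed average, real code varies widely


def lines_that_fit(context_window_tokens: int) -> int:
    """Approximate lines of code that fit in a context window of the given size."""
    return int(context_window_tokens / TOKENS_PER_LINE)


print(lines_that_fit(128_000))    # standard window: ~10,000 lines
print(lines_that_fit(1_000_000))  # Max Mode with a 1M-token model: ~78,000 lines
```

Actual capacity depends on tokenizer behavior and how much of the window is consumed by the prompt, conversation history, and tool output rather than code.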