MytheAi

Glossary entry

Context Window

The maximum number of tokens an LLM can read in a single request, including the prompt, files, and the response.

The context window is the upper bound on how much text an LLM can consider in one request. It includes the system prompt, the user prompt, any attached files, and the model's own response. Exceeding the window forces truncation or an error.
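The accounting above can be sketched in code. This is a minimal budget check, assuming a rough heuristic of ~4 characters per token for English text (real BPE tokenizers vary, so treat the estimate as approximate) and a hypothetical 128K default window:

```python
# Rough budget check for a single LLM request.
# ASSUMPTION: ~4 characters per token, a common English-text heuristic;
# real tokenizers differ, so this is an estimate, not an exact count.

CHARS_PER_TOKEN = 4

def estimate_tokens(text: str) -> int:
    """Crude token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_window(system: str, user: str, files: list[str],
                   max_response_tokens: int, window: int = 128_000) -> bool:
    """True if all prompt parts plus the reserved response budget fit.

    The window must cover the system prompt, user prompt, attached
    files, AND the tokens the model will generate in its response.
    """
    prompt_tokens = (estimate_tokens(system)
                     + estimate_tokens(user)
                     + sum(estimate_tokens(f) for f in files))
    return prompt_tokens + max_response_tokens <= window
```

For example, a 400,000-character file is roughly 100K tokens under this heuristic, so with a 4,096-token response budget it fits a 128K window but not a 100K one.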

In 2026, most frontier models offer 128K to 200K tokens by default and up to 1M tokens on enterprise tiers. That is large enough to feed an entire book or a midsize codebase in a single prompt. Quality at the long end varies, however: independent benchmarks show models often forget content in the middle of very long contexts, a phenomenon called "lost in the middle."
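The "lost in the middle" effect is commonly measured with needle-in-a-haystack tests: a single fact is embedded at varying depths in filler text, and the model is asked to recall it. A minimal sketch of building such probes (the needle and filler sentences here are illustrative, not from any real benchmark):

```python
def build_haystack(needle: str, filler_sentence: str,
                   total_sentences: int, depth: float) -> str:
    """Embed `needle` at a fractional `depth` (0.0 = start, 1.0 = end)
    within `total_sentences` copies of `filler_sentence`."""
    position = int(depth * total_sentences)
    sentences = [filler_sentence] * total_sentences
    sentences.insert(position, needle)
    return " ".join(sentences)

# Probe several depths; middle depths (0.25-0.75) are where recall
# typically degrades in published long-context benchmarks.
needle = "The secret code is 7421."
probes = {d: build_haystack(needle, "The sky was clear that day.", 1000, d)
          for d in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Each probe would then be sent to the model with a question like "What is the secret code?", and recall accuracy plotted against depth.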


Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.


Last reviewed 2026