MytheAi

Glossary entry

RAG (Retrieval-Augmented Generation)

A pattern that retrieves documents relevant to a query and feeds them to an LLM as context, instead of relying solely on the model's training data.

RAG is the standard pattern for grounding LLM output in fresh, private, or domain-specific knowledge. Instead of fine-tuning facts into model weights, you index documents in a vector database, retrieve the top-k relevant chunks at query time, and pass them to the LLM as context.

RAG is how every "chat with your docs" product works. It scales cheaply (add more documents to the index), updates instantly (re-index changed content), and produces citable answers (return source links alongside the response). Answer quality depends heavily on how well documents are chunked, embedded, and retrieved.
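The index-retrieve-prompt loop described above can be sketched in a few lines. This is a minimal illustration, not a production implementation: the toy bag-of-words embedding and in-memory list stand in for a real embedding model and vector database, and the document texts, sources, and function names are all hypothetical.

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: term-frequency bag of words.
    # A real system would use a trained embedding model.
    return Counter(w.strip(".,?!") for w in text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index, k=2):
    # Return the top-k chunks most similar to the query.
    q = embed(query)
    ranked = sorted(index, key=lambda c: cosine(q, c["vec"]), reverse=True)
    return ranked[:k]

def build_prompt(query, chunks):
    # Pass the retrieved chunks to the LLM as grounding context,
    # tagged with their sources so answers stay citable.
    context = "\n\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    return (f"Answer using only the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

# Indexing: chunk documents and store their embeddings.
docs = [
    {"source": "handbook.md", "text": "Refunds are processed within 14 days of a return."},
    {"source": "faq.md", "text": "Shipping is free on orders over 50 dollars."},
    {"source": "policy.md", "text": "Returns must be initiated within 30 days of delivery."},
]
index = [{**d, "vec": embed(d["text"])} for d in docs]

# Query time: retrieve, then assemble the LLM prompt.
query = "How long do refunds take after a return?"
top = retrieve(query, index, k=2)
prompt = build_prompt(query, top)
```

Updating the knowledge base is just re-running the indexing step on the changed documents, which is why RAG can stay current without retraining the model.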


Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.


See also: all 30 terms · How we research · Last reviewed 2026