
Glossary entry

Hallucination

Fluent, confident-sounding output from an LLM that is factually wrong or invented.

Hallucination is the LLM failure mode where the model produces fluent, confident output that is not grounded in any real source. The model invents citations, fabricates statistics, or confabulates events that never happened.

Hallucination can be reduced by RAG (grounding answers in retrieved sources), low sampling temperature (less random output), explicit instructions to say "I do not know," and post-hoc verification against trusted sources. As of 2026 it has not been eliminated; serious applications still require human review of any factual claim that matters.
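
A minimal sketch of how the first three mitigations combine in a single call, assuming the OpenAI Python client (openai v1+). The model name, prompts, and the retrieved_context placeholder are illustrative assumptions, not recommendations from this entry; the retrieval step itself is out of scope here.

```python
# Sketch: grounded prompt + low temperature + explicit permission to say "I do not know".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Text returned by your own retrieval step (the "R" in RAG); placeholder here.
retrieved_context = "..."

response = client.chat.completions.create(
    model="gpt-4o-mini",   # illustrative model name
    temperature=0,         # low temperature: less random sampling
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from the provided context. "
                "If the context does not contain the answer, say 'I do not know'."
            ),
        },
        {
            "role": "user",
            "content": f"Context:\n{retrieved_context}\n\nQuestion: When was the company founded?",
        },
    ],
)

print(response.choices[0].message.content)
```

Even with all three in place, the answer should still be verified against the retrieved sources before it is treated as fact.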


Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.


Last reviewed: 2026