Hallucination is the LLM failure mode where the model produces fluent, confident output that is not grounded in any real source. The model invents citations, fabricates statistics, or confabulates events that never happened.
Hallucination can be reduced by retrieval-augmented generation (RAG), which grounds answers in retrieved sources; low sampling temperature, which reduces randomness; explicit instructions to answer "I do not know" when the sources are silent; and post-hoc verification against trusted references. As of 2026 it is not eliminated, so serious applications still require human review of any factual claim that matters.
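A minimal sketch of how the first three mitigations combine in one call path is shown below. The retriever, the `call_llm` client, and the prompt wording are illustrative assumptions, not any specific library's API; only the structure carries the technique: ground the model in retrieved sources, tell it to refuse when the sources are silent, and sample at low temperature.

```python
# Sketch: combining grounding (RAG), low temperature, and an explicit
# "I do not know" instruction. The retriever and LLM client below are
# hypothetical stand-ins; swap in your own retrieval layer and model API.

def retrieve_passages(question: str, top_k: int = 5) -> list[str]:
    # Placeholder retriever: a real system would query a vector or keyword index.
    return ["(retrieved passage relevant to the question goes here)"]

def call_llm(prompt: str, temperature: float = 0.0) -> str:
    # Placeholder client: a real call would pass `temperature` to the model API;
    # low values keep sampling close to the highest-probability tokens.
    return "I do not know"

def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Embed retrieved sources and instruct the model to refuse when unsupported."""
    sources = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the numbered sources below, "
        "and cite the source number for every claim. "
        'If the sources do not contain the answer, reply exactly "I do not know."\n\n'
        f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
    )

def answer_with_guardrails(question: str) -> str:
    passages = retrieve_passages(question, top_k=5)
    prompt = build_grounded_prompt(question, passages)
    return call_llm(prompt, temperature=0.0)  # low temperature: less random sampling

print(answer_with_guardrails("What revenue did the company report in Q3?"))
```

Post-hoc verification and human review sit outside this call path; the prompt and sampling settings only lower the odds of ungrounded output, they do not guarantee grounding.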