MytheAi

Glossary entry

Prompt Injection

A class of attacks where untrusted input embedded in an LLM prompt overrides the system instructions.

Prompt injection happens when text controlled by an attacker (a user message, a retrieved document, a webpage in agent browsing) overrides the system instructions and gets the LLM to do something the developer did not intend. It is the SQL injection of the LLM era.

Direct injection is when the user types a manipulative prompt. Indirect injection is more dangerous: a malicious instruction hidden in a webpage or PDF that the LLM reads as part of normal RAG or agent operation. Defences in 2026 are imperfect and rely on input sanitisation, output validation, narrow tool permissions, and constitutional or instruction-tuned guardrails.

Related terms

Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.

·How we rank tools

See also: all 30 terms·how we research·Last reviewed 2026