Glossary entry

Fine-tuning

Continued training of a pre-trained model on a smaller, domain-specific dataset to specialise its behaviour.

Fine-tuning takes a pre-trained foundation model and continues training it on a smaller, curated dataset to teach it domain-specific style, format, or knowledge. It is cheaper than training from scratch but more expensive than prompt engineering or retrieval-augmented generation (RAG).
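
To make the mechanics concrete, here is a minimal sketch of supervised fine-tuning using the Hugging Face transformers Trainer. The checkpoint name, the two-example dataset, and the hyperparameters are illustrative assumptions, not part of the definition.

```python
# A minimal fine-tuning sketch using the Hugging Face transformers Trainer.
# The checkpoint name, the two-example dataset, and the hyperparameters are
# illustrative assumptions only; a real run needs hundreds of curated examples.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"  # assumption: any small pre-trained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Tiny stand-in for a curated, domain-specific dataset (text -> label).
texts = ["Please process my refund.", "Loving the new dashboard!"]
labels = [0, 1]  # e.g. 0 = support request, 1 = praise
enc = tokenizer(texts, truncation=True, padding=True)
dataset = [
    {"input_ids": ids, "attention_mask": mask, "labels": label}
    for ids, mask, label in zip(enc["input_ids"], enc["attention_mask"], labels)
]

# Continue training the pre-trained weights: few epochs, small learning rate,
# because we are nudging existing behaviour rather than learning from scratch.
args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    learning_rate=2e-5,
    report_to="none",  # disable experiment-tracking integrations
)
Trainer(model=model, args=args, train_dataset=dataset).train()
```

In practice the curated dataset would hold hundreds or thousands of examples, but the shape of the loop is the same: start from existing weights and apply a small learning rate so the model adapts without forgetting its general capabilities.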

In 2026, fine-tuning is most useful for narrow style adaptation (brand voice, structured output formats) and for low-latency applications, where baking behaviour into the weights keeps prompts short. For factual knowledge updates, RAG usually beats fine-tuning, because facts change but style does not.

Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.

Last reviewed 2026