Fine-tuning takes a pre-trained foundation model and continues training on a smaller, curated dataset to teach domain-specific style, format, or knowledge. It is cheaper than training from scratch but more expensive than prompt engineering or RAG.
In 2026 fine-tuning is most useful for narrow style adaptation (brand voice, structured output formats) and for latency-sensitive cases where a long prompt would be too costly. For keeping factual knowledge current, retrieval-augmented generation (RAG) usually beats fine-tuning: facts change over time, while style stays stable.
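To make the style-adaptation case concrete, here is a minimal sketch of preparing a fine-tuning dataset in chat-format JSONL (one JSON object per line, each holding a `messages` list), the format accepted by OpenAI-style fine-tuning endpoints. The brand name, system prompt, and example replies are invented for illustration:

```python
import json

# Hypothetical brand-voice examples. Each record pairs a system prompt
# describing the desired style with a user query and an ideal reply.
examples = [
    {
        "messages": [
            {"role": "system", "content": "Reply in Acme's brand voice: short, upbeat, no jargon."},
            {"role": "user", "content": "Explain your refund policy."},
            {"role": "assistant", "content": "Easy! Send it back within 30 days and we'll refund you, no questions asked."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": "Reply in Acme's brand voice: short, upbeat, no jargon."},
            {"role": "user", "content": "Do you ship internationally?"},
            {"role": "assistant", "content": "We do! Orders reach most countries in 5-10 days."},
        ]
    },
]

def to_jsonl(records):
    """Serialize training records as JSONL: one compact JSON object per line."""
    return "\n".join(json.dumps(r, ensure_ascii=False) for r in records)

jsonl = to_jsonl(examples)
```

In practice you would write `jsonl` to a file, upload it, and start a fine-tuning job via your provider's API; dozens to a few hundred such examples are typically enough for style transfer, far fewer than knowledge-heavy tasks would need.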