MytheAi

Glossary entry

LoRA (Low-Rank Adaptation)

A fine-tuning method that trains small "adapter" matrices instead of updating the full model weights, making it dramatically cheaper than full fine-tuning.

LoRA fine-tunes a model by training small low-rank adapter matrices that are added to the existing weights, rather than updating the full weight matrices. The adapter is typically less than 1% of the size of the full model.
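The core idea can be sketched in a few lines of NumPy: a frozen weight matrix W gets a trainable low-rank update B·A, where the rank r is much smaller than the matrix dimensions. The dimensions and rank below are illustrative, not from any particular model.

```python
import numpy as np

d, k, r = 1024, 1024, 8  # layer dimensions; rank r is much smaller than d, k

W = np.random.randn(d, k)          # pretrained weight, kept frozen
A = np.random.randn(r, k) * 0.01   # trainable adapter half, shape (r, k)
B = np.zeros((d, r))               # trainable adapter half, shape (d, r),
                                   # initialized to zero so the update starts at 0

x = np.random.randn(k)

# Forward pass: original output plus the low-rank correction B @ (A @ x)
y = W @ x + B @ (A @ x)

# Adapter size relative to the full weight matrix: r*(d + k) / (d * k)
full_params = W.size
adapter_params = A.size + B.size
ratio = adapter_params / full_params  # ~1.6% at these toy dimensions
```

Because B starts at zero, the adapted model initially behaves exactly like the base model; training only A and B then nudges the output. At realistic model sizes, and with adapters applied only to selected layers, the trainable fraction drops well below 1%.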

LoRA dramatically lowers the cost of fine-tuning (a few hours on a single GPU instead of weeks on a cluster) and lets you swap personalities or styles by swapping adapters. As of 2026 it is the dominant fine-tuning technique for open-source models, and Stable Diffusion communities use LoRAs heavily for character and style control.
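Because the base weights stay frozen, swapping styles amounts to swapping which adapter pair is applied on top of the shared weights. A minimal sketch, with hypothetical adapter names standing in for adapters trained in separate fine-tuning runs:

```python
import numpy as np

d = k = 512
r = 4
W = np.random.randn(d, k)  # shared frozen base weights

# Hypothetical adapters for two styles; in practice each (B, A) pair
# comes from its own fine-tuning run on style-specific data.
adapters = {
    "style_a": (np.random.randn(d, r) * 0.01, np.random.randn(r, k) * 0.01),
    "style_b": (np.random.randn(d, r) * 0.01, np.random.randn(r, k) * 0.01),
}

def forward(x, style):
    """Apply the base layer plus the chosen style's low-rank adapter."""
    B, A = adapters[style]
    return W @ x + B @ (A @ x)

x = np.random.randn(k)
y_a = forward(x, "style_a")  # same input, different adapter,
y_b = forward(x, "style_b")  # different output
```

Each adapter here costs only r*(d + k) parameters versus d*k for the base matrix, which is why distributing and hot-swapping dozens of style adapters is practical where distributing full fine-tuned models is not.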

Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.

Last reviewed 2026