Open-source LLMs (more accurately "open-weight" models, since the training data is rarely released) publish their model weights so anyone can run inference locally. Llama (Meta), Mistral, DeepSeek, Qwen (Alibaba), and Gemma (Google) are among the leading families in 2026.
Open weights matter for privacy (no data leaves your network), cost predictability (inference runs on your own hardware), and customisation (full fine-tuning, with no API restrictions). On many benchmarks, open-weight quality has largely closed the gap with frontier proprietary models; what remains is mostly in multimodal capability and the very largest reasoning models.