Zero-shot learning means asking the model to perform a task from instructions alone, with no examples. "Translate this paragraph to French" is a zero-shot prompt. Frontier models in 2026 are strong zero-shot learners on common tasks because their training data covered many similar tasks.
Zero-shot prompting fails when the task is unusual, the format is non-standard, or domain-specific style matters. In those cases, few-shot prompting or fine-tuning is required. The evolution of LLM capability is largely visible as zero-shot quality climbing on harder benchmarks year over year.
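The distinction is purely one of prompt construction. A minimal sketch, assuming nothing beyond string formatting; the helper names and the translation examples here are hypothetical, not part of any model's API:

```python
def zero_shot_prompt(instruction: str, text: str) -> str:
    """Zero-shot: the instruction alone, no demonstrations."""
    return f"{instruction}\n\n{text}"


def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], text: str) -> str:
    """Few-shot: the same instruction, preceded by worked input/output pairs
    so the model can infer the expected format and style."""
    demos = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{instruction}\n\n{demos}\n\nInput: {text}\nOutput:"


# Hypothetical usage: the same task, with and without demonstrations.
zs = zero_shot_prompt("Translate this paragraph to French.", "The cat sleeps.")
fs = few_shot_prompt(
    "Translate this paragraph to French.",
    [("Hello.", "Bonjour."), ("Thank you.", "Merci.")],
    "The cat sleeps.",
)
print(zs)
print(fs)
```

Everything else (model, decoding, evaluation) is identical between the two; when zero-shot quality is good enough, the demonstrations simply become unnecessary tokens.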