Few-shot learning is a prompting technique in which you include a handful of input/output examples in the prompt itself, teaching the model the desired pattern by demonstration rather than by updating its weights. The model then completes the pattern for the new input.
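A minimal sketch of how such a prompt is assembled. The task, example pairs, and field names here are hypothetical placeholders, not tied to any particular model or API; the point is the shape: a brief instruction, several demonstrations, then the new input left for the model to complete.

```python
# Hypothetical few-shot prompt for structured extraction.
# The example pairs below are illustrative, not from any real dataset.
EXAMPLES = [
    ("Order #1234 shipped to Berlin on May 3",
     '{"order_id": "1234", "city": "Berlin"}'),
    ("Order #9876 shipped to Lyon on June 12",
     '{"order_id": "9876", "city": "Lyon"}'),
]

def build_prompt(new_input: str) -> str:
    """Assemble instruction + demonstrations + the new input to complete."""
    parts = ["Extract the order ID and city as JSON.", ""]
    for text, output in EXAMPLES:
        parts.append(f"Input: {text}")
        parts.append(f"Output: {output}")
        parts.append("")
    # Leave "Output:" dangling so the model continues the pattern.
    parts.append(f"Input: {new_input}")
    parts.append("Output:")
    return "\n".join(parts)

print(build_prompt("Order #5555 shipped to Osaka on July 9"))
```

Sending the resulting string to a completion endpoint typically yields a JSON object in the same shape as the demonstrations, without any task-specific training.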
Few-shot prompting is dramatically more effective than zero-shot for niche formats, structured extraction, and brand voice. The trade-off is added prompt length, and therefore token cost, on every request. For high-volume production use, fine-tuning sometimes wins on cost; for experimentation and low-volume work, few-shot is faster to iterate on.
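The cost comparison above can be made concrete with a simple break-even calculation. All numbers below are invented placeholders, not real pricing; the structure of the arithmetic is the point: fine-tuning pays a fixed cost once, while few-shot pays a small overhead on every request.

```python
# All figures are hypothetical placeholders, not real provider rates.
PRICE_PER_1K_TOKENS = 0.002      # assumed per-token inference price, per 1K
FEW_SHOT_OVERHEAD_TOKENS = 600   # assumed extra prompt tokens for examples
FINE_TUNE_FIXED_COST = 25.0      # assumed one-time fine-tuning cost

def break_even_requests() -> float:
    """Number of requests at which fine-tuning's fixed cost equals the
    cumulative token overhead of repeating examples in every prompt."""
    overhead_per_request = (FEW_SHOT_OVERHEAD_TOKENS / 1000) * PRICE_PER_1K_TOKENS
    return FINE_TUNE_FIXED_COST / overhead_per_request

print(round(break_even_requests()))
```

Below the break-even point, few-shot is cheaper as well as faster to set up; above it, the per-request overhead dominates and fine-tuning starts to pay for itself.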