MytheAi

Glossary entry

Temperature

A sampling parameter, typically ranging from 0 to ~2, that controls how random the LLM's token selection is. Lower temperature means more deterministic output.

Temperature scales the probability distribution the LLM samples from when picking the next token: the model's raw scores (logits) are divided by the temperature before being converted to probabilities. Temperature 0 always picks the most likely token (deterministic). Temperature 1 samples in proportion to the model's original probabilities. Temperature 2 flattens the distribution toward uniform, so unlikely tokens get picked more often.
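The scaling step can be sketched in a few lines. This is an illustrative toy, not any particular model's implementation: logits are divided by the temperature, then passed through a softmax. (Temperature 0 is special-cased in real systems as a plain argmax, since dividing by zero is undefined.)

```python
import math

def apply_temperature(logits, temperature):
    # Divide each logit by the temperature, then softmax into probabilities.
    # T < 1 sharpens the distribution; T > 1 flattens it toward uniform.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # toy next-token scores
print(apply_temperature(logits, 0.1))  # near one-hot: top token dominates
print(apply_temperature(logits, 1.0))  # proportional to the model's scores
print(apply_temperature(logits, 2.0))  # flatter: gap between tokens shrinks
```

Running it with the same toy logits at 0.1, 1.0, and 2.0 shows the top token's probability shrinking as temperature rises, which is exactly the "more random" effect described above.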

In practice: 0 to 0.3 for code, structured output, and factual answers; 0.7 to 1 for creative writing, brainstorming, and conversation. Most chat tools default around 0.7-1. Temperature does not improve accuracy at any setting; it only changes how varied the output is.


Written by

John Ethan

Founder & Editor-in-Chief

Founder of MytheAi. Tracking and reviewing AI and SaaS tools since January 2026. Built MytheAi out of frustration with pay-to-rank listicles and SEO-driven AI directories that prioritize ad revenue over honest guidance. Hands-on testing across 500+ tools to date.


Last reviewed 2026