Temperature is a sampling parameter that rescales the probability distribution the LLM samples from when picking the next token: the model's logits are divided by the temperature before the softmax. At temperature 0 the model always picks the single most likely token (deterministic greedy decoding). At temperature 1 it samples in proportion to the model's original probabilities. Above 1 the distribution flattens toward uniform, so at temperature 2 unlikely tokens become much more probable.
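To make the scaling concrete, here is a minimal sketch in Python. The logits and the three-token vocabulary are invented for illustration; real models apply the same operation over vocabularies of tens of thousands of tokens.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities, scaled by temperature."""
    if temperature == 0:
        # Greedy decoding: all probability mass on the single most likely token.
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    # Subtract the max before exponentiating, for numerical stability.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical logits for three candidate tokens
for t in (0, 0.3, 1.0, 2.0):
    print(t, [round(p, 3) for p in softmax_with_temperature(logits, t)])
# Low temperatures sharpen the distribution toward the top token;
# high temperatures flatten it toward uniform.
```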
In practice: 0 to 0.3 for code, structured output, and factual answers; 0.7 to 1 for creative writing, brainstorming, and conversation. Most chat tools default to somewhere around 0.7 to 1. Temperature does not improve accuracy at any setting; it only changes how varied the output is.
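As an example of where this knob lives in practice, here is a sketch using the OpenAI Python client; the model name and prompt are placeholders, and other providers expose an equivalent temperature parameter in their own APIs.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    temperature=0.2,  # low temperature for code: stay close to the most likely completion
)
print(response.choices[0].message.content)
```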