LLM Temperature

Tools & Environment
LLM Temperature controls response randomness: temperature 0 produces deterministic results, while higher values increase creativity and unpredictability.

LLM Temperature is a parameter that controls the randomness of generated responses. Temperature 0 produces the most deterministic, predictable results: the model always selects the most probable token. Higher values increase randomness; above 1.0, the token distribution is flattened beyond the model's raw probabilities, producing more creative but less predictable responses.
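The mechanism behind this behavior can be sketched with temperature-scaled softmax: logits are divided by the temperature before being converted into probabilities, so low temperatures sharpen the distribution toward the top token and high temperatures flatten it. A minimal illustration (the logit values are made up for the example):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into a probability distribution, scaled by temperature.

    Lower temperature sharpens the distribution (the top token dominates);
    higher temperature flattens it (tokens become closer to equally likely).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate tokens
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.1)   # near-deterministic: top token ~1.0
high = softmax_with_temperature(logits, 2.0)  # much flatter, closer to uniform
```

At temperature 0 the division is undefined, which is why implementations treat temp=0 as a special case: they skip sampling entirely and take the argmax token.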

In semantic audit pipelines, temperature matters: use temperature 0-0.2 for analytical tasks (EAV extraction, URR classification, tool calling) to maximize consistency, and 0.5-0.7 for creative tasks (brief generation, content writing) to get better variety. Never use temperature above 1.0 for SEO tasks; excessive randomness produces hallucinations and inconsistent results.

For example, EAV extraction from an article about probate (inheritance rights) at temp=0 produces the same triples every run, while temp=0.8 produces different ones each time. In practice, set temperature as a pipeline parameter rather than hardcoding it, so it can be adjusted per task without changing code.
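One way to parameterize this is a per-task temperature map with an optional per-run override. A minimal sketch, assuming the task names and defaults below (they follow the ranges recommended above but are otherwise illustrative, not a real API):

```python
# Illustrative per-task defaults for a semantic audit pipeline;
# analytical tasks stay at 0-0.2, creative tasks at 0.5-0.7.
TASK_TEMPERATURES = {
    "eav_extraction": 0.0,
    "urr_classification": 0.1,
    "tool_calling": 0.2,
    "brief_generation": 0.6,
    "content_writing": 0.7,
}

def temperature_for(task, override=None):
    """Resolve the temperature for a task, allowing a per-run override.

    Rejects values above 1.0, since excessive randomness produces
    hallucinations and inconsistent results in SEO tasks.
    """
    if override is not None:
        if not 0.0 <= override <= 1.0:
            raise ValueError("temperature must be in the 0-1.0 range for SEO tasks")
        return override
    # Unknown tasks fall back to a conservative analytical default.
    return TASK_TEMPERATURES.get(task, 0.2)
```

The resolved value is then passed to whatever LLM client the pipeline uses, so tuning a task means editing one dictionary entry rather than hunting for hardcoded constants.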

Source: AI Semantic SEO Expert, Robert Niechciał (sensai.io)