Thinking / Reasoning
Some text models support advanced reasoning mode, enabling them to perform multi-step problem solving, draw inferences, and follow complex instructions. This makes them well-suited for tasks like code generation, data analysis, and answering questions that require understanding context or logic.
For large or complex tasks, generating a response can take a while. In such cases, you can use streaming mode to receive the answer token by token as it is generated.
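As a rough illustration, streaming boils down to iterating over small text deltas and emitting them as they arrive. The sketch below uses a plain iterable of strings in place of a real SDK stream; the commented-out call shows what the equivalent OpenAI-style request would look like (client setup and model name are assumptions).

```python
# Sketch: consuming a streamed response. With a real OpenAI-compatible
# client this would be roughly:
#   stream = client.chat.completions.create(model=..., messages=..., stream=True)
#   for event in stream:
#       delta = event.choices[0].delta.content or ""
# Here we simulate the stream with a list of text fragments.

def print_streamed(chunks):
    """Print text fragments as they arrive and return the full answer."""
    parts = []
    for chunk in chunks:  # each chunk carries a small text delta
        parts.append(chunk)
        print(chunk, end="", flush=True)
    print()
    return "".join(parts)

answer = print_streamed(["The ", "answer ", "is ", "42."])
```

The user sees partial output immediately instead of waiting for the whole response, which matters most exactly when reasoning makes generation slow.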
Special parameters, such as `thinking` in Claude models, provide transparency into the model's step-by-step reasoning process before it gives its final answer.
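To make this concrete, here is a minimal sketch of a Messages-style request body with extended thinking enabled. The model name and token budget are illustrative assumptions, not values prescribed by this document; check your provider's reference for the exact fields it accepts.

```python
# Sketch: enabling the `thinking` parameter in an Anthropic-style
# Messages API request. Model name and budget are placeholders.

def build_thinking_request(prompt: str, budget_tokens: int = 4096) -> dict:
    """Assemble a request body asking the model to expose its reasoning."""
    return {
        "model": "claude-sonnet-4",          # placeholder model name
        "max_tokens": budget_tokens + 1024,  # answer budget on top of thinking
        "thinking": {
            "type": "enabled",
            "budget_tokens": budget_tokens,  # cap on reasoning tokens
        },
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_thinking_request("Why is the sky blue?")
```

The response then typically contains separate reasoning and answer blocks, so the chain of thought can be inspected or discarded independently of the final text.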
Supported models:
The standard way to control reasoning behavior in OpenAI models, and in some models from other providers, is the `reasoning_effort` parameter, which tells the model how much internal reasoning to perform before responding to the prompt. Accepted values are `low`, `medium`, and `high`. Lower levels prioritize speed and efficiency, while higher levels provide deeper reasoning at the cost of increased token usage and latency. The default is `medium`, which balances performance and quality.
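A minimal sketch of how such a request might be assembled, with the three accepted effort levels validated up front. The model name is a placeholder; the `reasoning_effort` values come from the text above.

```python
# Sketch: setting `reasoning_effort` on an OpenAI-style chat completion
# request. "medium" is the default, so it can simply be omitted.

VALID_EFFORTS = {"low", "medium", "high"}

def build_request(prompt: str, effort: str = "medium") -> dict:
    """Build a request body with the given reasoning effort level."""
    if effort not in VALID_EFFORTS:
        raise ValueError(f"reasoning_effort must be one of {sorted(VALID_EFFORTS)}")
    return {
        "model": "o3-mini",          # placeholder reasoning model name
        "reasoning_effort": effort,  # low = faster, high = deeper reasoning
        "messages": [{"role": "user", "content": prompt}],
    }

fast = build_request("Summarize this diff.", effort="low")
```

In practice, `low` suits latency-sensitive or simple queries, while `high` pays off on multi-step problems where extra reasoning tokens improve answer quality.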
Supported models: