o1
A state-of-the-art language model designed to excel in complex reasoning tasks, including mathematical problem-solving, programming challenges, and scientific inquiries. The model integrates advanced reasoning capabilities through its innovative architecture, making it suitable for a wide range of applications that require deep understanding and logical deduction.
Only model and messages are required parameters for this model (and we've already filled them in for you in the example below), but you can include optional parameters if needed to adjust the model's behavior. Below, you can find the corresponding API schema, which lists all available parameters along with notes on how to use them.
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our Quickstart guide.
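For illustration, here is a minimal sketch of such a request. It assumes an OpenAI-compatible Python client; the base_url and the API-key environment variable are placeholders, not confirmed values.

```python
# A minimal sketch assuming an OpenAI-compatible client and endpoint.
# The base_url and environment variable name below are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder: your provider's endpoint
    api_key=os.environ["API_KEY"],          # placeholder: your API key
)

# Only model and messages are required; all other parameters are optional.
response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": "What is 27 * 43?"}],
)
print(response.choices[0].message.content)
```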
Creates a chat completion using a language model, allowing interactive conversation by predicting the next response based on the given chat history. This is useful for AI-driven dialogue systems and virtual assistants.
frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
logprobs: Whether to return log probabilities of the output tokens. If true, returns the log probabilities of each output token in the content of message.
top_logprobs: An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to true if this parameter is used.
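As a sketch of these two parameters together (reusing the hypothetical client from the first example):

```python
# Sketch: requesting per-token log probabilities.
response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": "Name a prime number."}],
    logprobs=True,   # return a log probability for each output token
    top_logprobs=5,  # also return the 5 most likely alternatives per position
)

# Each element pairs a generated token with its log probability.
for item in response.choices[0].logprobs.content:
    print(item.token, item.logprob)
```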
max_tokens: The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API. Default: 512.
max_completion_tokens: An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens. Default: 512.
n: How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
seed: This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
stream: If true, partial message deltas are sent as they become available. Default: false.
top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
temperature: What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
stop: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
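The sketch below combines the sampling and length controls described above (again reusing the hypothetical client from the first example); following the recommendation, it adjusts temperature and leaves top_p at its default.

```python
# Sketch: combining the sampling and length controls described above.
response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": "Write a haiku about recursion."}],
    max_completion_tokens=512,  # upper bound including reasoning tokens
    n=1,                        # a single choice keeps costs minimal
    temperature=0.2,            # more focused output; top_p left at its default
    frequency_penalty=0.5,      # discourage verbatim repetition
    presence_penalty=0.5,       # encourage new topics
    seed=42,                    # best-effort deterministic sampling (Beta)
    stop=["\n\n"],              # stop at the first blank line
)
print(response.choices[0].message.content)
```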
tool_choice: Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present.
parallel_tool_calls: Whether to enable parallel function calling during tool use.
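As a sketch of forcing a specific tool call (same hypothetical client as above; get_weather is an illustrative function, not part of the API):

```python
# Sketch: forcing a specific tool call. get_weather is a hypothetical tool.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
    parallel_tool_calls=False,  # at most one tool call per turn
)
print(response.choices[0].message.tool_calls)
```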
reasoning_effort: Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
response_format: An object specifying the format that the model must output.
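A final sketch showing response_format and reasoning_effort together (same hypothetical client; note that JSON mode generally also requires mentioning JSON in the prompt):

```python
# Sketch: constraining the output format and the reasoning effort.
response = client.chat.completions.create(
    model="o1",
    messages=[{"role": "user", "content": "List three prime numbers as JSON."}],
    response_format={"type": "json_object"},  # require a valid JSON object
    reasoning_effort="low",                   # trade reasoning depth for speed
)
print(response.choices[0].message.content)
```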