o4-mini
The newest small model in our o-series lineup, built for speed and smart reasoning, with outstanding efficiency in both coding and visual tasks.
Create an Account: Visit the AI/ML API website and create an account (if you don't have one yet).
Generate an API Key: After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.
At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.
Only model and messages are required parameters for this model (and we've already filled them in for you in the example), but you can include optional parameters if needed to adjust the model's behavior. Below, you can find the corresponding API schema, which lists all available parameters along with notes on how to use them.
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our Quickstart guide.
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o4-mini-2025-04-16",
        "messages": [
            {
                "role": "user",
                # Insert your question for the model here, instead of Hello:
                "content": "Hello",
            }
        ],
    },
)

data = response.json()
print(data)
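The response arrives as a regular chat completion object, so the assistant's reply can usually be read from the first choice. Here is a minimal sketch that continues the example above; the choices/message/content path assumes the standard OpenAI-compatible response shape.

# Continues the example above. The field path below assumes the standard
# OpenAI-compatible response shape; inspect `data` if your response differs.
if response.ok:
    print(data["choices"][0]["message"]["content"])
else:
    print("Request failed:", response.status_code, data)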
Creates a chat completion using a language model, allowing interactive conversation by predicting the next response based on the given chat history. This is useful for AI-driven dialogue systems and virtual assistants.
max_completion_tokens (default: 512). An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
max_tokens (default: 512). The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
stream (default: false). If set to true, the model response data will be streamed to the client as it is generated, using server-sent events (see the streaming sketch after this parameter list).
tool_choice. Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present (see the tool-calling sketch after this parameter list).
parallel_tool_calls. Whether to enable parallel function calling during tool use.
temperature. What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.
n. How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.
seed. This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
reasoning_effort. Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
response_format. An object specifying the format that the model must output.
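When stream is set to true, the endpoint returns server-sent events instead of a single JSON object. The sketch below shows one way to consume the stream with requests; the "data: {...}" lines, the final "data: [DONE]" marker, and the choices/delta/content path are assumptions based on the usual OpenAI-compatible streaming format.

import json
import requests

# A minimal streaming sketch. The SSE framing and field names below assume
# the standard OpenAI-compatible streaming format.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o4-mini-2025-04-16",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,  # tell requests not to buffer the whole response body
)

for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8")
    if payload.startswith("data: "):
        payload = payload[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # Each chunk carries an incremental "delta" with a piece of the reply.
    delta = chunk["choices"][0]["delta"].get("content")
    if delta:
        print(delta, end="", flush=True)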
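To let the model call a function, describe it under tools and steer the selection with tool_choice. The sketch below defines a hypothetical get_weather function and forces the model to call it; the function name and its parameter schema are illustrative only, not part of the API.

import requests

# A sketch of tool calling. get_weather is a hypothetical function defined
# only for this example; adapt the name and schema to your own tools.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o4-mini-2025-04-16",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        # Force the model to call get_weather instead of answering directly:
        "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
    },
)

data = response.json()
# When the model calls a tool, the call details (name and JSON arguments)
# are typically returned under message.tool_calls rather than message.content.
print(data["choices"][0]["message"].get("tool_calls"))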
POST /v1/chat/completions HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 465
{
"model": "openai/o4-mini-2025-04-16",
"messages": [
{
"role": "user",
"content": "text",
"name": "text"
}
],
"max_completion_tokens": 512,
"max_tokens": 512,
"stream": false,
"stream_options": {
"include_usage": true
},
"tools": [
{
"type": "function",
"function": {
"description": "text",
"name": "text",
"parameters": null,
"strict": true,
"required": [
"text"
]
}
}
],
"tool_choice": "none",
"parallel_tool_calls": true,
"temperature": 1,
"n": 1,
"seed": 1,
"reasoning_effort": "low",
"response_format": {
"type": "text"
}
}