o3-mini

This documentation is valid for the following list of our models:

  • o3-mini

Model Overview

A model designed to excel at complex reasoning tasks, including mathematical problem-solving, programming challenges, and scientific inquiries, with advanced reasoning capabilities built in.

How to Make a Call

Step-by-Step Instructions

1️⃣ Setup You Can’t Skip

▪️ Create an Account: Visit the AI/ML API website and create an account (if you don’t have one yet).
▪️ Generate an API Key: After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

2️⃣ Copy the code example

At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3️⃣ Modify the code example

▪️ Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key from your account.
▪️ Insert your question or request into the content field—this is what the model will respond to.

4️⃣ (Optional) Adjust other parameters

Only model and messages are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. The API schema below lists all available parameters along with notes on how to use them, and a short sketch of a request with a few optional parameters set follows these steps.

5️⃣ Run your modified code

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.
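For instance, here is a minimal sketch of a request that sets a few of the optional parameters from the API schema below (the parameter values are illustrative, not recommendations):

import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>"
    },
    json={
        "model": "o3-mini",
        "messages": [{"role": "user", "content": "Hello"}],

        # Optional parameters (illustrative values; see the API Schema below):
        "max_completion_tokens": 1024,  # cap on visible output + reasoning tokens
        "reasoning_effort": "low",      # low | medium | high
        "seed": 1                       # best-effort deterministic sampling (Beta)
    }
)
print(response.json())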

API Schema

Generate a conversational response using a language model.

POST /v1/chat/completions

Creates a chat completion using a language model, allowing interactive conversation by predicting the next response based on the given chat history. This is useful for AI-driven dialogue systems and virtual assistants.

Authorizations

Body

model · enum · Required
Possible values: o3-mini
max_completion_tokens · integer · min: 1 · Optional

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

Default: 512
max_tokens · number · min: 1 · Optional

The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.

Default: 512
stream · boolean · Optional

If set to True, the model response data will be streamed to the client as it is generated using server-sent events.

Default: false
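Since the response is delivered as server-sent events when stream is enabled, a streaming client has to read the body incrementally. A minimal sketch, assuming OpenAI-style SSE framing (lines prefixed with data: , terminated by data: [DONE], chunks carrying a delta object); this framing is an assumption, not confirmed on this page:

import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"},
    json={
        "model": "o3-mini",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True
    },
    stream=True  # let requests yield the body as it arrives
)

for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8")
    if payload.startswith("data: "):   # assumed SSE framing
        payload = payload[len("data: "):]
    if payload == "[DONE]":            # assumed end-of-stream marker
        break
    chunk = json.loads(payload)
    # Assumed OpenAI-style chunk shape: incremental text lives in delta.content.
    delta = chunk["choices"][0].get("delta", {})
    print(delta.get("content", ""), end="", flush=True)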
tool_choice · any of · Optional

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present.

string · enum · Optional

none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.

Possible values: none, auto, required

or

object · Optional

A particular tool, specified as {"type": "function", "function": {"name": "my_function"}}.
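For example, a sketch of a request body that forces a specific tool call, using the tool_choice format described above; the get_weather function is a hypothetical example, not part of the API:

payload = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function for illustration
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"]
                }
            }
        }
    ],
    # Force the model to call get_weather instead of replying in text:
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}}
}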
parallel_tool_calls · boolean · Optional

Whether to enable parallel function calling during tool use.

n · integer | nullable · Optional

How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs.

stop · any of · Optional

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.

string · Optional
or
string[] · Optional
or
any | nullable · Optional
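For instance, stop can be passed as a single string or as a list of up to 4 sequences (the sequences here are illustrative):

payload = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "List three fruits."}],
    "stop": ["\n\n", "END"]  # generation halts before either sequence is emitted
}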
seed · integer · min: 1 · Optional

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

reasoning_effort · string · enum · Optional

Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

Possible values: low, medium, high
response_format · one of · Optional

An object specifying the format that the model must output.

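The request sample below uses the only format shown on this page, {"type": "text"}; other formats may exist but are not documented here, so this sketch sticks to that value:

payload = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "Hello"}],
    "response_format": {"type": "text"}
}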
Responses
201: Success
POST /v1/chat/completions HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 445

{
  "model": "o3-mini",
  "messages": [
    {
      "role": "user",
      "content": "text",
      "name": "text"
    }
  ],
  "max_completion_tokens": 512,
  "max_tokens": 512,
  "stream": false,
  "stream_options": {
    "include_usage": true
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "description": "text",
        "name": "text",
        "parameters": null,
        "strict": true,
        "required": [
          "text"
        ]
      }
    }
  ],
  "tool_choice": "none",
  "parallel_tool_calls": true,
  "n": 1,
  "stop": "text",
  "seed": 1,
  "reasoning_effort": "low",
  "response_format": {
    "type": "text"
  }
}
201: Success

No content

Code Example

import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",

        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>"
    },
    json={
        "model":"o3-mini",
        "messages":[
            {
                "role":"user",

                # Insert your question for the model here, instead of Hello:
                "content":"Hello"
            }
        ]
    }
)

data = response.json()
print(data)
Response
{'id': 'chatcmpl-BKKqDz4BBMnR8lWHTwwUiInJtdup0', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'message': {'role': 'assistant', 'content': 'Hello there! How can I help you today?', 'refusal': None, 'annotations': []}}], 'created': 1744186373, 'model': 'o3-mini-2025-01-31', 'usage': {'prompt_tokens': 16, 'completion_tokens': 2559, 'total_tokens': 2575, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 256, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'system_fingerprint': 'fp_617f206dd9'}
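To pull just the assistant's reply out of this structure:

# The reply text sits at choices[0].message.content in the parsed response:
message = data["choices"][0]["message"]["content"]
print(message)  # -> Hello there! How can I help you today?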
