MiniMax-Text-01

This documentation is valid for the following models:

  • MiniMax-Text-01

Model Overview

MiniMax-Text-01 is a powerful language model developed by MiniMax AI, designed to excel in tasks requiring extensive context processing and reasoning capabilities. With a total of 456 billion parameters, of which 45.9 billion are activated per token, the model uses a hybrid architecture that combines various attention mechanisms to optimize performance across a wide range of applications.

How to Make a Call

Step-by-Step Instructions

1. Setup You Can’t Skip

▪️ Create an Account: Visit the AI/ML API website and create an account (if you don’t have one yet).
▪️ Generate an API Key: After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.
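
If you prefer not to hard-code the key, one common convention (not an API requirement) is to keep it in an environment variable and read it at runtime. The variable name AIMLAPI_KEY below is just an example:

import os

# Assumes the key was exported beforehand, e.g.:
#   export AIMLAPI_KEY="<YOUR_AIMLAPI_KEY>"
api_key = os.environ["AIMLAPI_KEY"]  # use this value in the Authorization header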

2. Copy the code example

At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3. Modify the code example

▪️ Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key from your account.
▪️ Insert your question or request into the content field: this is what the model will respond to.

4. (Optional) Adjust other parameters if needed

Only model and messages are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding API schema, which lists all available parameters along with notes on how to use them.
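
For example, here is a sketch of a request body with a few optional parameters added; the specific values (256, 0.7, 0.9) are arbitrary illustrations, not recommendations:

# Request body with optional sampling parameters added.
payload = {
    "model": "MiniMax-Text-01",
    "messages": [
        {"role": "user", "content": "Hello"}
    ],
    "max_tokens": 256,    # cap the length of the generated reply
    "temperature": 0.7,   # higher = more random, lower = more focused
    "top_p": 0.9          # nucleus sampling; adjust this or temperature, not both
}
# Pass it as the json= argument of requests.post(), as in the code example below.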

5. Run your modified code

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

API Schema

Generate a conversational response using a language model.

POST https://api.aimlapi.com/v1/chat/completions

Creates a chat completion using a language model, allowing interactive conversation by predicting the next response based on the given chat history. This is useful for AI-driven dialogue systems and virtual assistants.

Authorizations

Pass your API key in the Authorization header as Bearer <YOUR_AIMLAPI_KEY>.

Body

model · enum · Required

Possible values: MiniMax-Text-01

max_tokens · number · min: 1 · Optional

The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.

Default: 512
stream · boolean · Optional

If set to true, the model response data will be streamed to the client as it is generated, using server-sent events. (A streaming sketch is shown after the code example at the bottom of this page.)

Default: false
tool_choice · string enum or object · Optional

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present; auto is the default if tools are present. Possible string values: none, auto, required. (A tool-calling sketch is shown after this parameter list.)

parallel_tool_calls · boolean · Optional

Whether to enable parallel function calling during tool use.

temperature · number · max: 2 · Optional

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

top_p · number · min: 0.01 · max: 1 · Optional

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

frequency_penalty · number | nullable · Optional

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

presence_penalty · number | nullable · Optional

Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

seed · integer · min: 1 · Optional

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

response_format · object · Optional

An object specifying the format that the model must output.

mask_sensitive_info · boolean · Optional

Mask (replace with ***) content in the output that involves private information, including but not limited to email addresses, domains, links, ID numbers, home addresses, etc.

Default: false
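
As a hedged illustration of tools and tool_choice, the sketch below defines a single function-type tool and forces the model to call it. The get_weather tool and its JSON-Schema-style parameters object are invented for this example; the overall shape ("type": "function" plus a "function" object) follows the example request body shown below.

import requests

# Hypothetical tool definition, for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"]
            }
        }
    }
]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "MiniMax-Text-01",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": tools,
        # Force a call to the hypothetical get_weather tool:
        "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
        "parallel_tool_calls": False
    }
)
print(response.json())
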
Responses
201: Success

Example request

POST /v1/chat/completions HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 526

{
  "model": "MiniMax-Text-01",
  "messages": [
    {
      "role": "user",
      "content": "text",
      "name": "text"
    }
  ],
  "max_tokens": 512,
  "stream": false,
  "stream_options": {
    "include_usage": true
  },
  "tools": [
    {
      "type": "function",
      "function": {
        "description": "text",
        "name": "text",
        "parameters": null,
        "strict": true,
        "required": [
          "text"
        ]
      }
    }
  ],
  "tool_choice": "none",
  "parallel_tool_calls": true,
  "temperature": 1,
  "top_p": 1,
  "frequency_penalty": 1,
  "prediction": {
    "type": "content",
    "content": "text"
  },
  "presence_penalty": 1,
  "seed": 1,
  "response_format": {
    "type": "text"
  },
  "mask_sensitive_info": false
}

Code Example

import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model":"MiniMax-Text-01",
        "messages":[
            {
                "role":"user",

                # Insert your question for the model here, instead of Hello:
                "content":"Hello"
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))

Response

{
  "id": "04a9c0b5acca8b79bf1aba62f288f3b7",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How are you doing today? I'm here and ready to chat about anything you'd like to discuss or help with any questions you might have."
      }
    }
  ],
  "created": 1750764981,
  "model": "MiniMax-Text-01",
  "usage": {
    "prompt_tokens": 299,
    "completion_tokens": 67,
    "total_tokens": 366
  }
}
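
If you set stream to true, the API returns server-sent events instead of a single JSON object. The sketch below is one way to consume such a stream with requests; it assumes the common "data: {...}" line format with incremental delta fields and a final "data: [DONE]" marker, so verify the exact chunk structure against a real response.

import requests
import json

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "MiniMax-Text-01",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True
    },
    stream=True
) as response:
    for raw_line in response.iter_lines():
        if not raw_line:
            continue
        line = raw_line.decode("utf-8")
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Assumed chunk shape: incremental text in choices[0].delta.content.
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
print()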
