Concepts

API

API stands for Application Programming Interface. In the context of AI/ML, an API serves as a "handle" that enables you to integrate and utilize any Machine Learning model within your application. Our API supports communication via HTTP requests and is fully backward-compatible with OpenAI’s API. This means you can refer to OpenAI’s documentation for making calls to our API. However, be sure to change the base URL to direct your requests to our servers and select the desired model from our offerings.

API Key

An API Key is a credential that grants you access to our API from within your code. It is a sensitive string of characters that should be kept confidential. Do not share your API key with anyone else, as it could be misused without your knowledge.

You can find your API key on the account page.
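
A common way to keep the key out of your source code is to read it from an environment variable. Below is a minimal sketch; the variable name AIMLAPI_API_KEY is just an example, not a name our SDKs require.

import os
from openai import OpenAI

# Read the key from the environment instead of hard-coding it in the script
client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key=os.environ["AIMLAPI_API_KEY"],  # example variable name
)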

Base URL

The Base URL is the first part of the URL (including the protocol, domain, and pathname) that determines the server responsible for handling your request. It’s crucial to configure the correct Base URL in your application, especially if you are using SDKs from OpenAI, Azure, or other providers. By default, these SDKs are set to point to their servers, which are not compatible with our API keys and do not support many of the models we offer.

Our base URL supports versioning, so both of the following forms work:

  • https://api.aimlapi.com

  • https://api.aimlapi.com/v1

Usually, you pass the base URL via a field of the same name in the SDK constructor. In some cases, you can set the environment variable BASE_URL instead, and it will work. If you want to use the OpenAI SDK, follow the setting up article and take a closer look at how to use it with the AI/ML API.
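
For example, this is how the base URL is passed to the OpenAI Python SDK constructor (a minimal sketch; either the versioned or unversioned form above should work):

from openai import OpenAI

# Point the OpenAI SDK at the AI/ML API servers instead of OpenAI's own
client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key="<YOUR_AIMLAPI_KEY>",
)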

Base64

Base64 is a way to encode binary data, such as files or images, into text format, making it safe to include in places like URLs or JSON requests.

In the context of working with AI models, this means that if a model expects a parameter like file_data or image_url, you can encode your local file or image as a Base64 string, pass it as the value of that parameter, and in most cases the model will successfully receive and process your file. You'll need to import Python's built-in base64 module to handle the encoding. Below are two code examples showing real model calls.

We'll send an image file from the local disk to the gpt-4o chat model by passing it through the image_url parameter as a Base64-encoded string. Our prompt will ask the chat model to describe the contents of the image with the question: "What's in this image?"

Code Example (Python): Providing an Image as a Base64 String
from openai import OpenAI
from pathlib import Path
import base64

# loading the picture
file_path = Path("C:/Users/user/Documents/example/images/racoons_0.png")

# Read and encode the image in base64
with open(file_path, "rb") as image_file:
    base64_image = base64.b64encode(image_file.read()).decode("utf-8")

# Create a data URL for the base64 image
image_data_url = f"data:image/png;base64,{base64_image}"

# Define an OpenAI client to call the model via the OpenAI SDK
base_url = "https://api.aimlapi.com/"
api_key = "<YOUR_AIMLAPI_KEY>"

client = OpenAI(api_key=api_key, base_url=base_url)

# Send the image as Base64 to the GPT-4o chat model
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What’s in this image?"},
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": image_data_url
                    }
                }
            ]
        }
    ],
)

response = completion.choices[0].message.content
print(response)

Response:

The image depicts an illustrated raccoon by a stream, reaching into the water with its paw. The setting is natural, with rocks and greenery surrounding the stream.

We'll pass a local PDF file to the gpt-4o chat model via the file_data parameter, encoding it as a Base64 string. The prompt will ask the chat model to extract and list all headers, one per line.
Code Example (Python): Providing a PDF file as a Base64 String
import base64
from openai import OpenAI


aimlapi_key = "<YOUR_AIMLAPI_KEY>"

client = OpenAI(
    base_url = "https://api.aimlapi.com",
    api_key = aimlapi_key, 
)

def main():
    
    # Put your filename here. The file must be in the same folder as your Python script.
    your_file_name = "headers-example.pdf"

    with open(your_file_name, "rb") as f:
        data = f.read()

    # We encode the entire file into a single string to send it to the model
    base64_string = base64.b64encode(data).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        # Sending our file to the model
                        "type": "file",
                        "file": {
                            "filename": your_file_name,
                            "file_data": f"data:application/pdf;base64,{base64_string}",
                        }
                    },
                    {
                        # Providing the model with instructions on how to process the uploaded file 
                        "type": "text",
                        "text": "Extract all the headers from this file, placing each on a new line",
                    },
                ],
            },
        ]
    )
    print(response.choices[0].message.content)

     

if __name__ == "__main__":
    main()

Response:

The Renaissance Era  
A New Dawn of Thought  
The Masters of Art  
Scientific Breakthroughs  
Legacy and Influence

Deprecation

Deprecation is the process where a provider marks a model, parameter, or feature as outdated and no longer recommended for use. Deprecated items may remain available for some time but are likely to be removed or unsupported in the future.

Users are encouraged to monitor deprecation notices carefully and update their integrations accordingly. We notify our users about such changes in our email newsletters.

Deprecation can apply to an entire model (see our list of deprecated/no longer supported models) or to individual parameters. For example, in a recent update to the v1.6-pro/image-to-video model by Kling AI, the aspect_ratio parameter was deprecated: the model now automatically determines the aspect ratio based on the properties of the provided reference image, and explicit aspect_ratio input is no longer required.

Endpoint

A specific URL where an API can be accessed to perform an operation (e.g., generate a response, upload a file). For example, chat completions are served at the /v1/chat/completions endpoint under our base URL.

Fine-tuned model

A fine-tuned model is a base AI model that has been further trained on additional, specific data to specialize it for certain tasks or behaviors.

For example, an "11B Llama 3.2 model fine-tuned for content safety" means that the original Llama 3.2 model (with 11 billion parameters) has received extra training using datasets focused on safe and appropriate content generation.

Multimodal Model

A model that can process and generate different types of data (text, images, audio) in a single interaction.

Prompt

The input given to a model to generate a response.

The parameter used to pass a prompt is most often called simply prompt:

Some Python code
json={
    "prompt": "slightly dim banner with abstract lines, base colors are coral, yellow and magenta",  # a prompt used for image generation
    "model": "flux/schnell",
    "image_size": {
        "width": 1536,
        "height": 640
    }
}

But there can be other variations. For example, the messages structure used in chat models passes the prompt within the content subfield. Depending on the value of the role parameter, this prompt will be interpreted either as a user message (role: user) or as a model instruction (role: system or role: assistant).

Some Python code
"messages":[
    {
        "role":"system",
        "content":"you are a helpful assistant",#this prompt is an instruction
        "name":"text"
    },
    {
        "role":"user",
        "content":"Why is the ocean salty?" #this prompt is a user question
    }
],

There are also special parameters that allow you to refine prompts, control how strongly the model should follow them, or adjust the strictness of their interpretation.

  • prompt_optimizer or enhance_prompt: The model will automatically optimize the incoming prompt to improve the video generation quality if necessary. For more precise control, this parameter can be set to False, and the model will follow the instructions more strictly.

  • negative_prompt: The description of elements to avoid in the generated video/image/etc.

  • cfg_scale or guidance_scale: The Classifier-Free Guidance (CFG) scale is a measure of how closely you want the model to stick to your prompt.

  • strength: Determines how much the prompt influences the generated image.

You can find out which of these parameters a specific model supports in the API Schema section on that model's page.
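
Below is a minimal sketch of an image request that combines a prompt with some of these refinement parameters. The endpoint path and the exact parameter set are assumptions here: check the API Schema section on the model's page before relying on them.

import requests

# An image-generation request combining a prompt with refinement parameters.
# Whether negative_prompt and guidance_scale are accepted depends on the model.
response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",  # assumed endpoint path
    headers={"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"},
    json={
        "model": "flux/schnell",
        "prompt": "slightly dim banner with abstract lines, base colors are coral, yellow and magenta",
        "negative_prompt": "text, watermarks",  # elements to avoid
        "guidance_scale": 7.5,  # how closely to follow the prompt
    },
)
print(response.json())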

Terminal

If you are not a developer, or have only ever used modern graphical systems, you might know the terminal only as a "black window for hackers." However, the terminal is a very old and useful way to communicate with a computer: an app inside your operating system that lets you run programs by typing commands. Depending on the operating system, you can open the terminal in several ways. Here are the basic ways that usually work:

  • On Windows: Press Win + R, type cmd, then hit Enter.

  • On Mac: Press Command + Space, search for Terminal, then hit Enter.

  • On Linux: You are probably already familiar with it. On Ubuntu, for example, you can press Ctrl + Alt + T, or open the Activities overview, search for Terminal, then hit Enter.

Token

A chunk of text (a word, part of a word, or a symbol) that text models use for processing inputs and outputs. The cost of using a text model is calculated based on the number of tokens processed. Both the text documents you send and the conversation history (in the case of interacting with an Assistant) are tokenized (split into tokens) and included in the cost calculation.

You can limit the model's output using the max_completion_tokens parameter (the fully equivalent, deprecated max_tokens parameter is still supported for now).
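
For example (a minimal sketch reusing the chat-completion setup shown earlier on this page):

from openai import OpenAI

client = OpenAI(base_url="https://api.aimlapi.com/v1", api_key="<YOUR_AIMLAPI_KEY>")

# Cap the response length: generation stops once 100 tokens have been produced
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Why is the ocean salty?"}],
    max_completion_tokens=100,
)
print(completion.choices[0].message.content)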
