gemini-2.5-flash-lite-preview

This documentation is valid for the following list of our models:

  • google/gemini-2.5-flash-lite-preview

Model Overview

The model excels at high-volume, latency-sensitive tasks like translation and classification.

How to make the first API call

1️⃣ Required setup (don’t skip this)
  ▪ Create an account: Sign up on the AI/ML API website (if you don’t have one yet).
  ▪ Generate an API key: In your account dashboard, create an API key and make sure it’s enabled in the UI.

2️⃣ Copy the code example
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

3️⃣ Update the snippet for your use case
  ▪ Insert your API key: replace <YOUR_AIMLAPI_KEY> with your real AI/ML API key.
  ▪ Select a model: set the model field to the model you want to call.
  ▪ Provide input: fill in the request input field(s) shown in the example (for example, messages for chat/LLM models, or other inputs for image/video/audio models).

4️⃣ (Optional) Tune the request
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.); a short sketch follows this list. See the API schema below for the full list.

5️⃣ Run your code
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.
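
A short sketch of step 4 under stated assumptions (temperature and max_tokens are optional parameters from the API schema below; the values are illustrative):

payload = {
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [{"role": "user", "content": "Translate 'good morning' into French."}],
    "temperature": 0.2,  # lower values give more focused, deterministic output
    "max_tokens": 100    # cap the length of the completion
}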


If you need a more detailed walkthrough covering development environment setup and making a request step by step, feel free to use our Quickstart guide.

Code Example

Try in Playground
Create AI/ML API Key
Python:

import requests
import json  # for printing the structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash-lite-preview",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))

Node.js:

async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'google/gemini-2.5-flash-lite-preview',
      messages: [
        {
          role: 'user',
          content: 'Hello'  // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();

Response:
{
  "id": "gen-1752482994-9LhqM48PhAmhiRTtl2ys",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello there! How can I help you today?",
        "reasoning_content": null,
        "refusal": null
      }
    }
  ],
  "created": 1752482994,
  "model": "google/gemini-2.5-flash-lite-preview-06-17",
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 9,
    "total_tokens": 9
  }
}
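
Once this response arrives, the generated text lives at choices[0].message.content. A minimal extraction sketch in Python, where data is the parsed JSON from the snippet above:

reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello there! How can I help you today?"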
API Schema

POST /v1/chat/completions

Body
model · string · enum · Required
Possible values: google/gemini-2.5-flash-lite-preview

role · string · enum · Required

The role of the author of the message — in this case, the user.

Possible values:

content · any of · Required

The contents of the user message.

string · Optional
or
items · any of · Optional

type · string · enum · Required

The type of the content part.

Possible values:

text · string · Required

The text content.

or

type · string · enum · Required

The type of the content part.

Possible values:

file_data · string · Optional

The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.
  - Maximum size per file: up to 512 MB and up to 2 million tokens.
  - Maximum number of files: up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.
  - Maximum total file storage per user: 10 GB.

filename · string · Optional

The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded.
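
A hedged sketch of attaching a PDF via these fields, assuming an OpenAI-style file content part (the exact nesting and the "file" type string are not shown in this flattened schema, so treat them as assumptions; report.pdf is a placeholder):

import base64

with open("report.pdf", "rb") as f:  # placeholder local PDF
    encoded = base64.b64encode(f.read()).decode("ascii")

message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Summarize the attached document."},
        {
            "type": "file",  # assumed content-part type for files
            "file": {
                "file_data": encoded,      # base64-encoded PDF, per the limits above
                "filename": "report.pdf",  # name used to reference the file
            },
        },
    ],
}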

name · string · Optional

An optional name for the participant. Provides the model information to differentiate between participants of the same role.

or

role · string · enum · Required

The role of the author of the message — in this case, the tool.

Possible values:

content · string · Required

The contents of the tool message.

tool_call_id · string · Required

The tool call that this message is responding to.

name · string · nullable · Optional

An optional name for the participant. Provides the model information to differentiate between participants of the same role.

or

role · string · enum · Required

The role of the author of the message — in this case, the Assistant.

Possible values:

content · any of · Optional

The contents of the Assistant message. Required unless tool_calls or function_call is specified.

string · Optional

The contents of the Assistant message.

or

items · any of · Optional

type · string · enum · Required

The type of the content part.

Possible values:

text · string · Required

The text content.

or

refusal · string · Required

The refusal message generated by the model.

type · string · enum · Required

The type of the content part.

Possible values:

name · string · Optional

An optional name for the participant. Provides the model information to differentiate between participants of the same role.

id · string · Required

The ID of the tool call.

type · string · enum · Required

The type of the tool. Currently, only function is supported.

Possible values:

name · string · Required

The name of the function to call.

arguments · string · Required

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

refusal · string · nullable · Optional

The refusal message by the Assistant.

max_completion_tokens · integer · min: 1 · Optional

An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.

max_tokens · number · min: 1 · Optional

The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.

stream · boolean · Optional

If set to True, the model response data will be streamed to the client as it is generated using server-sent events.

Default: false

include_usage · boolean · Required
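
A minimal streaming sketch in Python, assuming the endpoint emits OpenAI-style server-sent events ("data: " lines ending with a [DONE] sentinel) and that include_usage sits inside a stream_options object, which this flattened schema does not show explicitly:

import json
import requests

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash-lite-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
        "stream_options": {"include_usage": True},  # assumed parent object
    },
    stream=True,  # tell requests not to buffer the whole body
) as response:
    for line in response.iter_lines():
        if not line:
            continue
        payload = line.decode("utf-8").removeprefix("data: ")
        if payload == "[DONE]":  # sentinel that ends the stream
            break
        chunk = json.loads(payload)
        if chunk.get("choices"):  # a final usage-only chunk may have no choices
            delta = chunk["choices"][0].get("delta", {})
            print(delta.get("content") or "", end="", flush=True)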
temperature · number · max: 2 · Optional

What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both.

top_p · number · min: 0.01 · max: 1 · Optional

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

seed · integer · min: 1 · Optional

This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

min_p · number · min: 0.001 · max: 0.999 · Optional

A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k.

top_k · number · Optional

Only sample from the top K options for each subsequent token. Used to remove "long tail" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature.

repetition_penalty · number · nullable · Optional

A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition.

top_a · number · max: 1 · Optional

Alternate top sampling parameter.

type · string · enum · Required

The type of the tool. Currently, only function is supported.

Possible values:

description · string · Optional

A description of what the function does, used by the model to choose when and how to call the function.

name · string · Required

The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.

parameters · any · nullable · Optional

The parameters the function accepts, described as a JSON Schema object.

strict · boolean · nullable · Optional

Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True.

tool_choice · any of · Optional

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present.

string · enum · Optional

none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools.

Possible values:

or

type · string · enum · Required

The type of the tool. Currently, only function is supported.

Possible values:

name · string · Required

The name of the function to call.

parallel_tool_calls · boolean · Optional

Whether to enable parallel function calling during tool use.
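
A hedged function-calling sketch tying these fields together (get_weather is a hypothetical function; the request and response shapes follow the OpenAI-compatible schema documented on this page):

import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash-lite-preview",
        "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical example function
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    },
)

message = response.json()["choices"][0]["message"]
for call in message.get("tool_calls") or []:
    # arguments arrive as a JSON string; validate before use, since the model
    # does not always generate valid JSON (see the note above)
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args)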

reasoning_effort · string · enum · Optional

Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.

Possible values: low, medium, high

Responses

200 Success
id · string · Required

A unique identifier for the chat completion.

Example: chatcmpl-CQ9FPg3osank0dx0k46Z53LTqtXMl

object · string · enum · Required

The object type.

Example: chat.completion
Possible values: chat.completion
created · number · Required

The Unix timestamp (in seconds) of when the chat completion was created.

Example: 1762343744

index · number · Required

The index of the choice in the list of choices.

Example: 0

role · string · Required

The role of the author of this message.

Example: assistant

content · string · Required

The contents of the message.

Example: Hello! I'm just a program, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?

refusal · string · nullable · Optional

The refusal message generated by the model.

type · string · enum · Required

The type of the URL citation. Always url_citation.

Possible values: url_citation

end_index · integer · Required

The index of the last character of the URL citation in the message.

start_index · integer · Required

The index of the first character of the URL citation in the message.

title · string · Required

The title of the web resource.

url · string · Required

The URL of the web resource.

id · string · Required

Unique identifier for this audio response.

data · string · Required

Base64 encoded audio bytes generated by the model, in the format specified in the request.

transcript · string · Required

Transcript of the audio generated by the model.

expires_at · integer · Required

The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations.

id · string · Required

The ID of the tool call.

type · string · enum · Required

The type of the tool.

Possible values:

arguments · string · Required

The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

name · string · Required

The name of the function to call.

or

id · string · Required

The ID of the tool call.

type · string · enum · Required

The type of the tool.

Possible values:

input · string · Required

The input for the custom tool call generated by the model.

name · string · Required

The name of the custom tool to call.

finish_reason · string · enum · Required

The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, or tool_calls if the model called a tool.

Possible values:
bytes · integer[] · Required

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob · number · Required

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

token · string · Required

The token.

bytes · integer[] · nullable · Optional

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob · number · Required

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

token · string · Required

The token.

bytes · integer[] · Required

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob · number · Required

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

token · string · Required

The token.

bytes · integer[] · nullable · Optional

A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

logprob · number · Required

The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

token · string · Required

The token.
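
A small sketch of combining per-token UTF-8 bytes back into text, as the bytes descriptions above suggest (token_entries is assumed to be the list of token objects from a logprobs block of a response):

def combine_token_bytes(token_entries):
    raw = bytearray()
    for entry in token_entries:
        if entry.get("bytes"):  # bytes can be null for some tokens
            raw.extend(entry["bytes"])
    return raw.decode("utf-8")  # multi-token characters decode correctly here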

model · string · Required

The model used for the chat completion.

Example: google/gemini-2.5-flash-lite-preview

prompt_tokens · number · Required

Number of tokens in the prompt.

Example: 137

completion_tokens · number · Required

Number of tokens in the generated completion.

Example: 914

total_tokens · number · Required

Total number of tokens used in the request (prompt + completion).

Example: 1051

accepted_prediction_tokens · integer · nullable · Optional

When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.

audio_tokens · integer · nullable · Optional

Audio input tokens generated by the model.

reasoning_tokens · integer · nullable · Optional

Tokens generated by the model for reasoning.

rejected_prediction_tokens · integer · nullable · Optional

When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.

audio_tokens · integer · nullable · Optional

Audio input tokens present in the prompt.

cached_tokens · integer · nullable · Optional

Cached tokens present in the prompt.

POST /v1/chat/completions
200 Success
curl -L \
  --request POST \
  --url 'https://api.aimlapi.com/v1/chat/completions' \
  --header 'Authorization: Bearer <YOUR_AIMLAPI_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [
      {
        "role": "user",
        "content": "Hello"
      }
    ]
  }'
{
  "id": "chatcmpl-CQ9FPg3osank0dx0k46Z53LTqtXMl",
  "object": "chat.completion",
  "created": 1762343744,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm just a program, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?",
        "refusal": null,
        "annotations": null,
        "audio": null,
        "tool_calls": null
      },
      "finish_reason": "stop",
      "logprobs": null
    }
  ],
  "model": "google/gemini-2.5-flash-lite-preview",
  "usage": {
    "prompt_tokens": 137,
    "completion_tokens": 914,
    "total_tokens": 1051,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  }
}