Python example:

```python
import requests
import json  # for pretty-printing the structured JSON response

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash-lite-preview",
        "messages": [
            {
                "role": "user",
                "content": "Hello",  # insert your prompt here, instead of Hello
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```

JavaScript example:

```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'google/gemini-2.5-flash-lite-preview',
      messages: [
        {
          role: 'user',
          content: 'Hello', // insert your prompt here, instead of Hello
        },
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```

Response example:

```json
{
"id": "gen-1752482994-9LhqM48PhAmhiRTtl2ys",
"object": "chat.completion",
"choices": [
{
"index": 0,
"finish_reason": "stop",
"logprobs": null,
"message": {
"role": "assistant",
"content": "Hello there! How can I help you today?",
"reasoning_content": null,
"refusal": null
}
}
],
"created": 1752482994,
"model": "google/gemini-2.5-flash-lite-preview-06-17",
"usage": {
"prompt_tokens": 0,
"completion_tokens": 9,
"total_tokens": 9
}
}
```

Request body fields

`messages` (required): an array of message objects making up the conversation. Each message takes one of the forms below; a combined example follows the list.

User message:
- `role`: The role of the author of the message; in this case, `user`.
- `content`: The contents of the user message: either a string or an array of content parts. A text part has `type` (the type of the content part) and `text` (the text content). An image part has `image_url.url` (either a URL of the image or the base64-encoded image data) and `image_url.detail` (specifies the detail level of the image); currently supports JPG/JPEG, PNG, GIF, and WEBP formats.
- `name`: An optional name for the participant. Provides the model information to differentiate between participants of the same role.

System message:
- `role`: The role of the author of the message; in this case, `system`.
- `content`: The contents of the system message: either a string or an array of text parts (`type`, `text`).
- `name`: An optional name for the participant. Provides the model information to differentiate between participants of the same role.

Tool message:
- `role`: The role of the author of the message; in this case, `tool`.
- `content`: The contents of the tool message.
- `tool_call_id`: Tool call that this message is responding to.
- `name`: An optional name for the participant. Provides the model information to differentiate between participants of the same role.

Assistant message:
- `role`: The role of the author of the message; in this case, `assistant`.
- `content`: The contents of the assistant message. Required unless `tool_calls` or `function_call` is specified. Either a string or an array of content parts: text parts (`type`, `text`) and refusal parts (`type`, plus `refusal`: the refusal message generated by the model).
- `name`: An optional name for the participant. Provides the model information to differentiate between participants of the same role.
- `tool_calls`: Tool calls generated by the model. Each entry has:
  - `id`: The ID of the tool call.
  - `type`: The type of the tool. Currently, only `function` is supported.
  - `function.name`: The name of the function to call.
  - `function.arguments`: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
- `refusal`: The refusal message by the assistant.
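For instance, a single request can mix these message forms. A minimal sketch of a `messages` payload combining a system message with a multimodal user message (the image URL is a hypothetical placeholder):

```python
payload = {
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {
            "role": "user",
            "name": "alice",  # optional participant name
            "content": [
                {"type": "text", "text": "What is shown in this picture?"},
                {
                    "type": "image_url",
                    # placeholder URL; base64-encoded image data also works
                    "image_url": {"url": "https://example.com/cat.png", "detail": "low"},
                },
            ],
        },
    ],
}
```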
Generation and sampling parameters:
- `max_completion_tokens`: An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
- `max_tokens`: The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API.
- `stream` (default: `false`): If set to `true`, the model response data will be streamed to the client as it is generated, using server-sent events (see the streaming sketch after this list).
- `n`: How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep `n` at 1 to minimize costs.
- `temperature`: What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p`, but not both.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature`, but not both.
- `stop`: Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
- `frequency_penalty`: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
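Since `stream` delivers the response as server-sent events, a client using a plain HTTP library has to parse the `data:` lines itself. A minimal Python sketch, assuming the OpenAI-style chunk shape (`choices[0].delta.content`) and the `data: [DONE]` terminator:

```python
import json
import requests

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"},
    json={
        "model": "google/gemini-2.5-flash-lite-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
        "temperature": 0.2,  # keep the output focused
        "max_tokens": 256,   # cap generation to control costs
    },
    stream=True,  # tell requests not to buffer the whole body
) as response:
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue  # skip keep-alives and blank separator lines
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":  # assumed end-of-stream marker
            break
        chunk = json.loads(payload)
        # each chunk carries an incremental piece of the assistant text
        print(chunk["choices"][0]["delta"].get("content") or "", end="")
```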
Predicted Outputs, penalties, and determinism:
- `prediction`: Configuration for a Predicted Output (see the sketch after this list), with:
  - `type`: The type of the predicted content you want to provide.
  - `content`: The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly. This is often the text of a file you are regenerating with minor changes. Either a string or an array of content parts (`type`: the type of the content part; `text`: the text content).
- `presence_penalty`: Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
- `seed`: This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result.
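A sketch of a Predicted Output request for regenerating a file with minor edits; the `"type": "content"` value follows the OpenAI-compatible convention and is an assumption here:

```python
payload = {
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [
        {"role": "user", "content": "Rename the function `add` to `sum_two` in this file."}
    ],
    "prediction": {
        "type": "content",  # assumed value; the predicted text follows
        "content": "def add(a, b):\n    return a + b\n",
    },
    "seed": 42,  # best-effort deterministic sampling (Beta)
}
```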
Structured output:
- `response_format`: An object specifying the format that the model must output, with:
  - `type`: The type of response format being defined. One of `text`, `json_object`, or `json_schema`.
  - For `json_schema`, additionally:
    - `name`: The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
    - `strict`: Whether to enable strict schema adherence when generating the output. If set to `true`, the model will always follow the exact schema defined in the `schema` field. Only a subset of JSON Schema is supported when `strict` is `true`.
    - `description`: A description of what the response format is for, used by the model to determine how to respond in the format.
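A sketch of requesting structured output; the `json_schema` wrapper key and the nested `schema` field mirror the OpenAI-compatible convention and are an assumption here:

```python
payload = {
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [
        {"role": "user", "content": "Extract the city and country from: 'I live in Paris, France.'"}
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {  # assumed wrapper key, as in OpenAI-compatible APIs
            "name": "location",
            "strict": True,  # always follow the exact schema below
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
                "additionalProperties": False,
            },
        },
    },
}
```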
Tools and reasoning:
- `tools`: A list of tools the model may call (see the function-calling sketch after this list). Each tool has:
  - `type`: The type of the tool. Currently, only `function` is supported.
  - `function.description`: A description of what the function does, used by the model to choose when and how to call the function.
  - `function.name`: The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
  - `function.parameters`: The parameters the function accepts, described as a JSON Schema object.
  - `function.strict`: Whether to enable strict schema adherence when generating the function call. If set to `true`, the model will follow the exact schema defined in the `parameters` field. Only a subset of JSON Schema is supported when `strict` is `true`.
- `tool_choice`: Controls which (if any) tool is called by the model. As a string: `none` means the model will not call any tool and instead generates a message; `auto` means the model can pick between generating a message or calling one or more tools; `required` means the model must call one or more tools. As an object, `{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool (`type`: the type of the tool, currently only `function`; `function.name`: the name of the function to call). `none` is the default when no tools are present; `auto` is the default if tools are present.
- `parallel_tool_calls`: Whether to enable parallel function calling during tool use.
- `reasoning_effort`: Constrains effort on reasoning for reasoning models. Currently supported values are `low`, `medium`, and `high`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
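Putting the tool fields together, a request exposing a single, purely illustrative `get_weather` function might look like this:

```python
payload = {
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function for this sketch
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "tool_choice": "auto",         # the model may answer directly or call the tool
    "parallel_tool_calls": False,  # at most one function call at a time
}
# If the model decides to call the tool, the response carries
# choices[0].message.tool_calls; validate function.arguments (a JSON
# string) before invoking your own function.
```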
Response fields

- `id`: A unique identifier for the chat completion. Example: `chatcmpl-CQ9FPg3osank0dx0k46Z53LTqtXMl`.
- `object`: The object type. Possible values: `chat.completion`.
- `created`: The Unix timestamp (in seconds) of when the chat completion was created. Example: `1762343744`.
- `choices`: The list of generated choices. Each choice contains:
  - `index`: The index of the choice in the list of choices. Example: `0`.
  - `message.role`: The role of the author of this message. Example: `assistant`.
  - `message.content`: The contents of the message.
  - `message.refusal`: The refusal message generated by the model.
  - `message.annotations`: URL citations attached to the message, each with:
    - `type`: The type of the URL citation. Always `url_citation`.
    - `end_index`: The index of the last character of the URL citation in the message.
    - `start_index`: The index of the first character of the URL citation in the message.
    - `title`: The title of the web resource.
    - `url`: The URL of the web resource.
  - `message.audio`: Audio output, with:
    - `id`: Unique identifier for this audio response.
    - `data`: Base64-encoded audio bytes generated by the model, in the format specified in the request.
    - `transcript`: Transcript of the audio generated by the model.
    - `expires_at`: The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations.
  - `message.tool_calls`: Tool calls generated by the model. A function tool call has:
    - `id`: The ID of the tool call.
    - `type`: The type of the tool.
    - `function.arguments`: The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
    - `function.name`: The name of the function to call.
  - A custom tool call instead has `id` (the ID of the tool call), `type` (the type of the tool), `custom.input` (the input for the custom tool call generated by the model), and `custom.name` (the name of the custom tool to call).
  - `finish_reason`: The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, or `tool_calls` if the model called a tool.
  - `logprobs`: Log probability information for the choice (see the sketch after this list). The `content` and `refusal` token lists, as well as the `top_logprobs` entries nested within them, repeat the same three fields per token:
    - `token`: The token.
    - `logprob`: The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.
    - `bytes`: A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.
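Reading the per-token data out of a parsed response could look like the sketch below. Note that this page does not list the request flag that enables log probabilities, so the OpenAI-style `"logprobs": true` request parameter is an assumption:

```python
# assumes the request was made with "logprobs": true (assumed flag)
for token_info in data["choices"][0]["logprobs"]["content"]:
    # token, logprob, and bytes, as described above
    print(token_info["token"], token_info["logprob"], token_info["bytes"])
```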
Model and usage fields:
- `model`: The model used for the chat completion. Example: `google/gemini-2.5-flash-lite-preview`.
- `usage`: Token accounting for the request:
  - `prompt_tokens`: Number of tokens in the prompt. Example: `137`.
  - `completion_tokens`: Number of tokens in the generated completion. Example: `914`.
  - `total_tokens`: Total number of tokens used in the request (prompt + completion). Example: `1051`.
  - `completion_tokens_details.accepted_prediction_tokens`: When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.
  - `completion_tokens_details.audio_tokens`: Audio input tokens generated by the model.
  - `completion_tokens_details.reasoning_tokens`: Tokens generated by the model for reasoning.
  - `completion_tokens_details.rejected_prediction_tokens`: When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.
  - `prompt_tokens_details.audio_tokens`: Audio input tokens present in the prompt.
  - `prompt_tokens_details.cached_tokens`: Cached tokens present in the prompt.
- `meta.usage.credits_used`: The number of tokens consumed during generation. Example: `120000`.
- `meta.usage.usd_spent`: The total amount of money spent by the user in USD. Example: `0.06`.
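A short sketch of pulling the token and cost accounting out of a parsed response (`data`, as in the Python example above), using the field names shown in the response example below:

```python
usage = data["usage"]
meta = data["meta"]["usage"]
print(f"prompt={usage['prompt_tokens']} "
      f"completion={usage['completion_tokens']} "
      f"total={usage['total_tokens']}")
print(f"credits used: {meta['credits_used']}, USD spent: {meta['usd_spent']}")
```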
cURL example:

```bash
curl -L \
  --request POST \
  --url 'https://api.aimlapi.com/v1/chat/completions' \
  --header 'Authorization: Bearer <YOUR_AIMLAPI_KEY>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "google/gemini-2.5-flash-lite-preview",
    "messages": [
      {
        "role": "user",
        "content": "Hello"
      }
    ]
  }'
```

Response example:

```json
{
  "id": "chatcmpl-CQ9FPg3osank0dx0k46Z53LTqtXMl",
  "object": "chat.completion",
  "created": 1762343744,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm just a program, so I don't have feelings, but I'm here and ready to help you. How can I assist you today?",
        "refusal": null,
        "annotations": null,
        "audio": null,
        "tool_calls": null
      },
      "finish_reason": "stop",
      "logprobs": null
    }
  ],
  "model": "google/gemini-2.5-flash-lite-preview",
  "usage": {
    "prompt_tokens": 137,
    "completion_tokens": 914,
    "total_tokens": 1051,
    "completion_tokens_details": null,
    "prompt_tokens_details": null
  },
  "meta": {
    "usage": {
      "credits_used": 120000,
      "usd_spent": 0.06
    }
  }
}
```
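To extract just the assistant's reply from a parsed response of this shape (continuing with the `data` dict from the Python example):

```python
text = data["choices"][0]["message"]["content"]
print(text)  # e.g. "Hello! I'm just a program, ..."
```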