eleven_turbo_v2_5
Model Overview
A high-quality text-to-speech model offering natural-sounding intonation, support for 31 languages, and a broad selection of built-in voices. Up to 3× faster than eleven_multilingual_v2. A wide range of output audio formats and quality settings is also available.
Set Up Your API Key
If you don’t have an API key for the AI/ML API yet, feel free to use our Quickstart guide.
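Once you have a key, reading it from an environment variable is usually safer than pasting it into scripts. A minimal sketch (the variable name AIMLAPI_API_KEY is just our choice, not something the API mandates):

import os

# Read the key from an environment variable instead of hard-coding it.
# The variable name is arbitrary; use whatever fits your setup.
api_key = os.environ["AIMLAPI_API_KEY"]

headers = {"Authorization": f"Bearer {api_key}"}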
API Schema
text: The text content to be converted to speech.
voice: The name of the voice to use.
apply_text_normalization: Controls text normalization with three modes: 'auto', 'on', and 'off'. With 'auto', the system decides whether to apply normalization (e.g., spelling out numbers); 'on' always applies it, and 'off' skips it.
next_text: The text that comes after the text of the current request. Can be used to improve the speech's continuity when concatenating multiple generations, or to influence the continuity of the current generation (see the sketch after this list).
previous_text: The text that came before the text of the current request. Can be used to improve the speech's continuity when concatenating multiple generations, or to influence the continuity of the current generation.
output_format: Output format of the generated audio, written as codec_sample_rate_bitrate. For example, MP3 at a 22.05 kHz sample rate and 32 kbps is represented as mp3_22050_32.
seed: If specified, our system will make a best effort to sample deterministically, so that repeated requests with the same seed and parameters return the same result. Determinism is not guaranteed.
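When a long passage is synthesized in several requests, previous_text and next_text let each request know about its neighbouring chunks, which can smooth intonation at the boundaries. Below is a minimal sketch of that idea; the chunking, voice name, seed value, and .mp3 extension are illustrative choices, not requirements of the API.

import requests

API_URL = "https://api.aimlapi.com/v1/tts"
HEADERS = {"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"}

# A long passage split into chunks; how you split it is up to you.
chunks = [
    "Cities of the future promise to radically transform how people live, work, and move.",
    "Instead of sprawling layouts, we'll see vertical structures that integrate residential, work, and public spaces.",
    "Architecture will adapt to climate conditions, and buildings will generate their own power.",
]

for i, chunk in enumerate(chunks):
    payload = {
        "model": "elevenlabs/eleven_turbo_v2_5",
        "text": chunk,
        "voice": "Rachel",
        "seed": 42,  # best-effort determinism across repeated runs
    }
    # Let each request "see" its neighbours so the prosody carries over
    # more smoothly across the chunk boundaries.
    if i > 0:
        payload["previous_text"] = chunks[i - 1]
    if i < len(chunks) - 1:
        payload["next_text"] = chunks[i + 1]

    response = requests.post(API_URL, headers=HEADERS, json=payload)
    with open(f"segment_{i}.mp3", "wb") as f:
        f.write(response.content)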
POST /v1/tts HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 286
{
  "model": "elevenlabs/eleven_turbo_v2_5",
  "text": "text",
  "voice": "Rachel",
  "apply_text_normalization": "auto",
  "next_text": "text",
  "previous_text": "text",
  "output_format": "mp3_22050_32",
  "voice_settings": {
    "stability": 1,
    "use_speaker_boost": true,
    "similarity_boost": 1,
    "style": 1,
    "speed": 1
  },
  "seed": 1
}
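Because output_format strings pack codec, sample rate, and bitrate into a single codec_sample_rate_bitrate token, a tiny helper can keep your file naming in sync with whatever format you request. This is just a convenience sketch on our side, not part of the API; formats without a bitrate part, if any, would only carry two segments.

def parse_output_format(fmt: str) -> dict:
    """Split an output_format value such as 'mp3_22050_32' into its parts."""
    parts = fmt.split("_")
    return {
        "codec": parts[0],                                          # e.g. 'mp3'
        "sample_rate_hz": int(parts[1]),                            # e.g. 22050
        "bitrate_kbps": int(parts[2]) if len(parts) > 2 else None,  # e.g. 32
        "extension": "." + parts[0],                                # handy for file names
    }

print(parse_output_format("mp3_22050_32"))
# {'codec': 'mp3', 'sample_rate_hz': 22050, 'bitrate_kbps': 32, 'extension': '.mp3'}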
Response

{
  "metadata": {
    "transaction_key": "text",
    "request_id": "text",
    "sha256": "text",
    "created": "2025-08-22T12:02:57.138Z",
    "duration": 1,
    "channels": 1,
    "models": [
      "text"
    ],
    "model_info": {
      "ANY_ADDITIONAL_PROPERTY": {
        "name": "text",
        "version": "text",
        "arch": "text"
      }
    }
  }
}
Code Example
import os
import requests


def main():
    url = "https://api.aimlapi.com/v1/tts"
    headers = {
        # Insert your AI/ML API key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
    }
    payload = {
        "model": "elevenlabs/eleven_turbo_v2_5",
        "text": '''
            Cities of the future promise to radically transform how people live, work, and move.
            Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems.
            Architecture will adapt to climate conditions, and buildings will be energy-efficient, generating power through solar panels, wind turbines, and even foot traffic.
        ''',
        "voice": "Nicole",
    }

    response = requests.post(url, headers=headers, json=payload, stream=True)

    # result = os.path.join(os.path.dirname(__file__), "audio.wav")  # if you run this code as a .py file
    result = "audio.wav"  # if you run this code in Jupyter Notebook

    with open(result, "wb") as write_stream:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                write_stream.write(chunk)

    print("Audio saved to:", result)


main()
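The example above writes whatever format the API returns by default into audio.wav. If you want to pin the format down and fail loudly on HTTP errors, a small variation along these lines should work; the chosen format, seed, and file name are illustrative, not required by the API.

import requests

url = "https://api.aimlapi.com/v1/tts"
headers = {"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"}

payload = {
    "model": "elevenlabs/eleven_turbo_v2_5",
    "text": "Cities of the future promise to radically transform how people live, work, and move.",
    "voice": "Nicole",
    "output_format": "mp3_22050_32",  # explicit codec / sample rate / bitrate
    "seed": 7,                        # best-effort reproducibility
}

response = requests.post(url, headers=headers, json=payload, stream=True)
response.raise_for_status()  # surface HTTP errors instead of writing an empty file

with open("audio.mp3", "wb") as f:  # extension matches the requested mp3 format
    for chunk in response.iter_content(chunk_size=8192):
        if chunk:
            f.write(chunk)
print("Audio saved to: audio.mp3")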