# API References

- [Service Endpoints](/api-references/service-endpoints.md)
- [Account Balance](/api-references/service-endpoints/account-balance.md)
- [API Key Management](/api-references/service-endpoints/api-key-management.md)
- [Complete Model List](/api-references/service-endpoints/complete-model-list.md)
- [All Model IDs](/api-references/model-database.md): A full list of available models.
- [Text Models (LLM)](/api-references/text-models-llm.md): Overview of the capabilities of AIML API text models (LLMs).
- [Alibaba Cloud](/api-references/text-models-llm/alibaba-cloud.md)
- [qwen-max](/api-references/text-models-llm/alibaba-cloud/qwen-max.md)
- [qwen-plus](/api-references/text-models-llm/alibaba-cloud/qwen-plus.md)
- [qwen-turbo](/api-references/text-models-llm/alibaba-cloud/qwen-turbo.md)
- [Qwen2.5-7B-Instruct-Turbo](/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo.md)
- [qwen3-32b](/api-references/text-models-llm/alibaba-cloud/qwen3-32b.md)
- [qwen3-coder-480b-a35b-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-coder-480b-a35b-instruct.md)
- [qwen3-235b-a22b-thinking-2507](/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b-thinking-2507.md)
- [qwen3-next-80b-a3b-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-instruct.md)
- [qwen3-next-80b-a3b-thinking](/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-thinking.md)
- [qwen3-max-preview](/api-references/text-models-llm/alibaba-cloud/qwen3-max-preview.md)
- [qwen3-max-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-max-instruct.md)
- [qwen3-omni-30b-a3b-captioner](/api-references/text-models-llm/alibaba-cloud/qwen3-omni-30b-a3b-captioner.md)
- [qwen3-vl-32b-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-instruct.md)
- [qwen3-vl-32b-thinking](/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking.md)
- [qwen3.5-plus](/api-references/text-models-llm/alibaba-cloud/qwen3.5-plus.md)
- [Anthracite](/api-references/text-models-llm/anthracite.md)
- [magnum-v4](/api-references/text-models-llm/anthracite/magnum-v4.md)
- [Anthropic](/api-references/text-models-llm/anthropic.md)
- [Claude 3 Haiku](/api-references/text-models-llm/anthropic/claude-3-haiku.md)
- [Claude 4 Opus](/api-references/text-models-llm/anthropic/claude-4-opus.md)
- [Claude 4 Sonnet](/api-references/text-models-llm/anthropic/claude-4-sonnet.md)
- [Claude 4.1 Opus](/api-references/text-models-llm/anthropic/claude-opus-4.1.md)
- [Claude 4.5 Sonnet](/api-references/text-models-llm/anthropic/claude-4-5-sonnet.md)
- [Claude 4.5 Haiku](/api-references/text-models-llm/anthropic/claude-4.5-haiku.md)
- [Claude 4.5 Opus](/api-references/text-models-llm/anthropic/claude-4.5-opus.md)
- [Claude 4.6 Opus](/api-references/text-models-llm/anthropic/claude-4.6-opus.md)
- [Claude 4.6 Sonnet](/api-references/text-models-llm/anthropic/claude-4.6-sonnet.md)
- [Claude 4.7 Opus](/api-references/text-models-llm/anthropic/claude-4.7-opus.md)
- [Baidu](/api-references/text-models-llm/baidu.md)
- [ernie-4.5-8k-preview](/api-references/text-models-llm/baidu/ernie-4.5-8k-preview.md)
- [ernie-4.5-0.3b](/api-references/text-models-llm/baidu/ernie-4.5-0.3b.md)
- [ernie-4.5-21b-a3b](/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b.md)
- [ernie-4.5-21b-a3b-thinking](/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b-thinking.md)
- [ernie-4.5-vl-28b-a3b](/api-references/text-models-llm/baidu/ernie-4.5-vl-28b-a3b.md)
- [ernie-4.5-vl-424b-a47b](/api-references/text-models-llm/baidu/ernie-4.5-vl-424b-a47b.md)
- [ernie-4.5-300b-a47b](/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b.md)
- [ernie-4.5-300b-a47b-paddle](/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b-paddle.md)
- [ernie-4.5-turbo-128k](/api-references/text-models-llm/baidu/ernie-4.5-turbo-128k.md)
- [ernie-4.5-turbo-vl-32k](/api-references/text-models-llm/baidu/ernie-4.5-turbo-vl-32k.md)
- [ernie-5.0-thinking-preview](/api-references/text-models-llm/baidu/ernie-5.0-thinking-preview.md)
- [ernie-5.0-thinking-latest](/api-references/text-models-llm/baidu/ernie-5.0-thinking-latest.md)
- [ernie-x1-turbo-32k](/api-references/text-models-llm/baidu/ernie-x1-turbo-32k.md)
- [ernie-x1.1-preview](/api-references/text-models-llm/baidu/ernie-x1.1-preview.md)
- [ByteDance](/api-references/text-models-llm/bytedance.md)
- [Seed 1.8](/api-references/text-models-llm/bytedance/seed-1.8.md)
- [Cohere](/api-references/text-models-llm/cohere.md)
- [command-a](/api-references/text-models-llm/cohere/command-a.md)
- [DeepSeek](/api-references/text-models-llm/deepseek.md)
- [DeepSeek V3](/api-references/text-models-llm/deepseek/deepseek-chat.md)
- [DeepSeek R1](/api-references/text-models-llm/deepseek/deepseek-r1.md)
- [DeepSeek Chat V3.1](/api-references/text-models-llm/deepseek/deepseek-chat-v3.1.md)
- [DeepSeek Reasoner V3.1](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1.md)
- [DeepSeek Non-reasoner V3.1 Terminus](/api-references/text-models-llm/deepseek/deepseek-non-reasoner-v3.1-terminus.md)
- [DeepSeek Reasoner V3.1 Terminus](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1-terminus.md)
- [DeepSeek V3.2 Exp Non-thinking](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-non-thinking.md)
- [DeepSeek V3.2 Exp Thinking](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-thinking.md)
- [DeepSeek V3.2 Speciale](/api-references/text-models-llm/deepseek/deepseek-v3.2-speciale.md)
- [Google](/api-references/text-models-llm/google.md)
- [gemini-2.0-flash](/api-references/text-models-llm/google/gemini-2.0-flash.md)
- [gemini-2.5-flash-lite-preview](/api-references/text-models-llm/google/gemini-2.5-flash-lite-preview.md)
- [gemini-2.5-flash](/api-references/text-models-llm/google/gemini-2.5-flash.md)
- [gemini-2.5-pro](/api-references/text-models-llm/google/gemini-2.5-pro.md)
- [gemma-3 (4B and 12B)](/api-references/text-models-llm/google/gemma-3.md)
- [gemma-3 (27B)](/api-references/text-models-llm/google/gemma-3-27b.md)
- [gemma-3n-4b](/api-references/text-models-llm/google/gemma-3n-4b.md)
- [gemini-3-flash-preview](/api-references/text-models-llm/google/gemini-3-flash-preview.md)
- [gemini-3-1-pro-preview](/api-references/text-models-llm/google/gemini-3-1-pro-preview.md)
- [gemini-3-1-flash-lite-preview](/api-references/text-models-llm/google/gemini-3-1-flash-lite-preview.md)
- [gemma-4-31b-it](/api-references/text-models-llm/google/gemma-4-31b-it.md)
- [Gryphe](/api-references/text-models-llm/gryphe.md)
- [MythoMax L2 (13B)](/api-references/text-models-llm/gryphe/mythomax-l2-13b.md)
- [Meta](/api-references/text-models-llm/meta.md)
- [Llama-3-8B-Instruct-Lite](/api-references/text-models-llm/meta/meta-llama-3-8b-instruct-lite.md)
- [Llama-3.3-70B-Instruct-Turbo](/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo.md)
- [Llama-3.3-70B-Versatile](/api-references/text-models-llm/meta/llama-3.3-70b-versatile.md)
- [MiniMax](/api-references/text-models-llm/minimax.md)
- [Text-01](/api-references/text-models-llm/minimax/text-01.md)
- [M1](/api-references/text-models-llm/minimax/m1.md)
- [M2](/api-references/text-models-llm/minimax/m2.md)
- [M2-her](/api-references/text-models-llm/minimax/m2-her.md)
- [M2.1](/api-references/text-models-llm/minimax/m2-1.md)
- [M2.1-highspeed](/api-references/text-models-llm/minimax/m2.1-highspeed.md)
- [M2.5](/api-references/text-models-llm/minimax/m2-5.md)
- [M2.5-highspeed](/api-references/text-models-llm/minimax/m2-5-highspeed.md)
- [M2.7](/api-references/text-models-llm/minimax/m2-7.md)
- [M2.7-highspeed](/api-references/text-models-llm/minimax/m2.7-highspeed.md)
- [Mistral AI](/api-references/text-models-llm/mistral-ai.md)
- [mistral-nemo](/api-references/text-models-llm/mistral-ai/mistral-nemo.md)
- [Mixtral-8x7B-Instruct](/api-references/text-models-llm/mistral-ai/mixtral-8x7b-instruct-v0.1.md)
- [Moonshot](/api-references/text-models-llm/moonshot.md)
- [kimi-k2-preview](/api-references/text-models-llm/moonshot/kimi-k2-preview.md)
- [kimi-k2-turbo-preview](/api-references/text-models-llm/moonshot/kimi-k2-turbo-preview.md)
- [kimi-k2-5](/api-references/text-models-llm/moonshot/kimi-k2-5.md)
- [kimi-k2-6](/api-references/text-models-llm/moonshot/kimi-k2-6.md)
- [NousResearch](/api-references/text-models-llm/nousresearch.md)
- [hermes-4-405b](/api-references/text-models-llm/nousresearch/hermes-4-405b.md)
- [NVIDIA](/api-references/text-models-llm/nvidia.md)
- [llama-3.1-nemotron-70b](/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b.md)
- [nemotron-nano-9b-v2](/api-references/text-models-llm/nvidia/nemotron-nano-9b-v2.md)
- [nemotron-nano-12b-v2-vl](/api-references/text-models-llm/nvidia/nemotron-nano-12b-v2-vl.md)
- [OpenAI](/api-references/text-models-llm/openai.md)
- [gpt-3.5-turbo](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
- [gpt-4](/api-references/text-models-llm/openai/gpt-4.md)
- [gpt-4-preview](/api-references/text-models-llm/openai/gpt-4-preview.md)
- [gpt-4-turbo](/api-references/text-models-llm/openai/gpt-4-turbo.md)
- [gpt-4o](/api-references/text-models-llm/openai/gpt-4o.md)
- [gpt-4o-mini](/api-references/text-models-llm/openai/gpt-4o-mini.md)
- [gpt-4o-audio-preview](/api-references/text-models-llm/openai/gpt-4o-audio-preview.md)
- [gpt-4o-mini-audio-preview](/api-references/text-models-llm/openai/gpt-4o-mini-audio-preview.md)
- [gpt-4o-search-preview](/api-references/text-models-llm/openai/gpt-4o-search-preview.md)
- [gpt-4o-mini-search-preview](/api-references/text-models-llm/openai/gpt-4o-mini-search-preview.md)
- [o1](/api-references/text-models-llm/openai/o1.md)
- [o3](/api-references/text-models-llm/openai/o3.md)
- [o3-mini](/api-references/text-models-llm/openai/o3-mini.md)
- [o3-pro](/api-references/text-models-llm/openai/o3-pro.md)
- [gpt-4.1](/api-references/text-models-llm/openai/gpt-4.1.md)
- [gpt-4.1-mini](/api-references/text-models-llm/openai/gpt-4.1-mini.md)
- [gpt-4.1-nano](/api-references/text-models-llm/openai/gpt-4.1-nano.md)
- [o4-mini](/api-references/text-models-llm/openai/o4-mini.md)
- [gpt-oss-20b](/api-references/text-models-llm/openai/gpt-oss-20b.md)
- [gpt-oss-120b](/api-references/text-models-llm/openai/gpt-oss-120b.md)
- [gpt-5](/api-references/text-models-llm/openai/gpt-5.md)
- [gpt-5-mini](/api-references/text-models-llm/openai/gpt-5-mini.md)
- [gpt-5-nano](/api-references/text-models-llm/openai/gpt-5-nano.md)
- [gpt-5-chat](/api-references/text-models-llm/openai/gpt-5-chat.md)
- [gpt-5-pro](/api-references/text-models-llm/openai/gpt-5-pro.md)
- [gpt-5.1](/api-references/text-models-llm/openai/gpt-5-1.md)
- [gpt-5.1-chat-latest](/api-references/text-models-llm/openai/gpt-5-1-chat-latest.md)
- [gpt-5.1-codex](/api-references/text-models-llm/openai/gpt-5-1-codex.md)
- [gpt-5.1-codex-mini](/api-references/text-models-llm/openai/gpt-5-1-codex-mini.md)
- [gpt-5.2](/api-references/text-models-llm/openai/gpt-5.2.md)
- [gpt-5.2-chat-latest](/api-references/text-models-llm/openai/gpt-5.2-chat-latest.md)
- [gpt-5.2-pro](/api-references/text-models-llm/openai/gpt-5.2-pro.md)
- [gpt-5.2-codex](/api-references/text-models-llm/openai/gpt-5.2-codex.md)
- [gpt-5.3-codex](/api-references/text-models-llm/openai/gpt-5.3-codex.md)
- [gpt-5-4](/api-references/text-models-llm/openai/gpt-5-4.md)
- [gpt-5-4-pro](/api-references/text-models-llm/openai/gpt-5-4-pro.md)
- [Perplexity](/api-references/text-models-llm/perplexity.md)
- [sonar](/api-references/text-models-llm/perplexity/sonar.md)
- [sonar-pro](/api-references/text-models-llm/perplexity/sonar-pro.md)
- [xAI](/api-references/text-models-llm/xai.md)
- [grok-3-beta](/api-references/text-models-llm/xai/grok-3-beta.md)
- [grok-3-mini-beta](/api-references/text-models-llm/xai/grok-3-mini-beta.md)
- [grok-4](/api-references/text-models-llm/xai/grok-4.md)
- [grok-code-fast-1](/api-references/text-models-llm/xai/grok-code-fast-1.md)
- [grok-4-fast-non-reasoning](/api-references/text-models-llm/xai/grok-4-fast-non-reasoning.md)
- [grok-4-fast-reasoning](/api-references/text-models-llm/xai/grok-4-fast-reasoning.md)
- [grok-4.1-fast-non-reasoning](/api-references/text-models-llm/xai/grok-4-1-fast-non-reasoning.md)
- [grok-4.1-fast-reasoning](/api-references/text-models-llm/xai/grok-4-1-fast-reasoning.md)
- [grok-4.20-non-reasoning](/api-references/text-models-llm/xai/grok-4-20-non-reasoning.md)
- [grok-4.20-reasoning](/api-references/text-models-llm/xai/grok-4-20-reasoning.md)
- [Zhipu](/api-references/text-models-llm/zhipu.md)
- [glm-4.5-air](/api-references/text-models-llm/zhipu/glm-4.5-air.md)
- [glm-4.5](/api-references/text-models-llm/zhipu/glm-4.5.md)
- [glm-4.6](/api-references/text-models-llm/zhipu/glm-4.6.md)
- [glm-4.7](/api-references/text-models-llm/zhipu/glm-4.7.md)
- [glm-5](/api-references/text-models-llm/zhipu/glm-5.md)
- [glm-5.1](/api-references/text-models-llm/zhipu/glm-5.1.md)
- [Image Models](/api-references/image-models.md): A description of the image generation process using AIML API image models.
- [Alibaba Cloud](/api-references/image-models/alibaba-cloud.md)
- [qwen-image](/api-references/image-models/alibaba-cloud/qwen-image.md)
- [qwen-image-edit](/api-references/image-models/alibaba-cloud/qwen-image-edit.md)
- [z-image-turbo](/api-references/image-models/alibaba-cloud/z-image-turbo.md)
- [z-image-turbo-lora](/api-references/image-models/alibaba-cloud/z-image-turbo-lora.md)
- [wan2.2-t2i-plus](/api-references/image-models/alibaba-cloud/wan2.2-t2i-plus.md)
- [wan2.2-t2i-flash](/api-references/image-models/alibaba-cloud/wan2.2-t2i-flash.md)
- [wan2.5-t2i-preview](/api-references/image-models/alibaba-cloud/wan2.5-t2i-preview.md)
- [wan2.6-image](/api-references/image-models/alibaba-cloud/wan-2-6-image.md)
- [wan2.7-image](/api-references/image-models/alibaba-cloud/wan2.7-image.md)
- [wan2.7-image-pro](/api-references/image-models/alibaba-cloud/wan2.7-image-pro.md)
- [ByteDance](/api-references/image-models/bytedance.md)
- [Seedream 3.0](/api-references/image-models/bytedance/seedream-3.0.md)
- [Seedream 4.0 (Text-to-Image)](/api-references/image-models/bytedance/seedream-v4-text-to-image.md)
- [Seedream 4.0 Edit (Image-to-image)](/api-references/image-models/bytedance/seedream-v4-edit-image-to-image.md)
- [USO (Image-to-Image)](/api-references/image-models/bytedance/uso.md)
- [Seedream 4.5](/api-references/image-models/bytedance/seedream-4-5.md)
- [Seedream 5.0 Lite Preview](/api-references/image-models/bytedance/seedream-5.0-lite-preview.md)
- [Flux](/api-references/image-models/flux.md)
- [flux-pro](/api-references/image-models/flux/flux-pro.md)
- [flux-pro/v1.1](/api-references/image-models/flux/flux-pro-v1.1.md)
- [flux-pro/v1.1-ultra](/api-references/image-models/flux/flux-pro-v1.1-ultra.md)
- [flux-realism](/api-references/image-models/flux/flux-realism.md)
- [flux/dev](/api-references/image-models/flux/flux-dev.md)
- [flux/dev/image-to-image](/api-references/image-models/flux/flux-dev-image-to-image.md)
- [flux/schnell](/api-references/image-models/flux/flux-schnell.md)
- [flux/kontext-max/text-to-image](/api-references/image-models/flux/flux-kontext-max-text-to-image.md)
- [flux/kontext-max/image-to-image](/api-references/image-models/flux/flux-kontext-max-image-to-image.md)
- [flux/kontext-pro/text-to-image](/api-references/image-models/flux/flux-kontext-pro-text-to-image.md)
- [flux/kontext-pro/image-to-image](/api-references/image-models/flux/flux-kontext-pro-image-to-image.md)
- [flux/srpo/text-to-image](/api-references/image-models/flux/flux-srpo-text-to-image.md)
- [flux/srpo/image-to-image](/api-references/image-models/flux/flux-srpo-image-to-image.md)
- [flux-2](/api-references/image-models/flux/flux-2.md)
- [flux-2-edit](/api-references/image-models/flux/flux-2-edit.md)
- [flux-2-lora](/api-references/image-models/flux/flux-2-lora.md)
- [flux-2-lora-edit](/api-references/image-models/flux/flux-2-lora-edit.md)
- [flux-2-pro](/api-references/image-models/flux/flux-2-pro.md)
- [flux-2-pro-edit](/api-references/image-models/flux/flux-2-pro-edit.md)
- [Google](/api-references/image-models/google.md)
- [Imagen 3](/api-references/image-models/google/imagen-3.0.md)
- [Imagen 4 Preview](/api-references/image-models/google/imagen-4-preview.md)
- [Imagen 4 Generate](/api-references/image-models/google/imagen-4-generate.md)
- [Imagen 4 Fast Generate](/api-references/image-models/google/imagen-4-fast-generate.md)
- [Imagen 4 Ultra Generate](/api-references/image-models/google/imagen-4-ultra-generate.md)
- [Gemini 2.5 Flash Image (Nano Banana)](/api-references/image-models/google/gemini-2.5-flash-image.md)
- [Gemini 2.5 Flash Image Edit (Nano Banana)](/api-references/image-models/google/gemini-2.5-flash-image-edit.md)
- [Nano Banana Pro (Gemini 3 Pro Image)](/api-references/image-models/google/gemini-3-pro-image-preview.md)
- [Nano Banana Pro Edit (Gemini 3 Pro Image Edit)](/api-references/image-models/google/gemini-3-pro-image-preview-edit.md)
- [Nano Banana 2 (Gemini 3.1 Flash Image)](/api-references/image-models/google/gemini-3.1-flash-image.md)
- [Kling AI](/api-references/image-models/kling-ai.md)
- [image-o1](/api-references/image-models/kling-ai/image-o1.md)
- [OpenAI](/api-references/image-models/openai.md)
- [DALL·E 2](/api-references/image-models/openai/dall-e-2.md)
- [DALL·E 3](/api-references/image-models/openai/dall-e-3.md)
- [gpt-image-1](/api-references/image-models/openai/gpt-image-1.md)
- [gpt-image-1-mini](/api-references/image-models/openai/gpt-image-1-mini.md)
- [gpt-image-1-5](/api-references/image-models/openai/gpt-image-1-5.md)
- [RecraftAI](/api-references/image-models/recraftai.md)
- [Recraft v3](/api-references/image-models/recraftai/recraft-v3.md)
- [Reve](/api-references/image-models/reve.md)
- [reve/create-image](/api-references/image-models/reve/reve-create-image.md)
- [reve/edit-image](/api-references/image-models/reve/reve-edit-image.md)
- [reve/remix-edit-image](/api-references/image-models/reve/reve-remix-edit-image.md)
- [Stability AI](/api-references/image-models/stability-ai.md)
- [Stable Diffusion v3 Medium](/api-references/image-models/stability-ai/stable-diffusion-v3-medium.md)
- [Stable Diffusion v3.5 Large](/api-references/image-models/stability-ai/stable-diffusion-v35-large.md)
- [Tencent](/api-references/image-models/tencent.md)
- [Hunyuan Image v3](/api-references/image-models/tencent/hunyuan-image-v3-text-to-image.md)
- [Topaz Labs](/api-references/image-models/topaz-labs.md)
- [Sharpen](/api-references/image-models/topaz-labs/sharpen.md)
- [Sharpen Generative](/api-references/image-models/topaz-labs/sharpen-generative.md)
- [xAI](/api-references/image-models/xai.md)
- [Grok Imagine Image](/api-references/image-models/xai/grok-imagine-image.md)
- [Grok Imagine Image Pro](/api-references/image-models/xai/grok-imagine-image-pro.md)
- [Video Models](/api-references/video-models.md): Short overview of the available video model providers.
- [Alibaba Cloud](/api-references/video-models/alibaba-cloud.md)
- [Wan 2.1 Plus (Text-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.1-plus-text-to-video.md)
- [Wan 2.1 Turbo (Text-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.1-turbo-text-to-video.md)
- [Wan 2.2 Plus (Text-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.2-plus-text-to-video.md)
- [Wan 2.2 Animate Replace (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.2-14b-animate-replace-image-to-video.md)
- [Wan 2.2 Animate Move (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.2-14b-animate-move-image-to-video.md)
- [Wan 2.2 VACE Fun Reframe (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-reframe-image-to-video.md)
- [Wan 2.2 VACE Fun Outpainting (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-outpainting-image-to-video.md)
- [Wan 2.2 VACE Fun Inpainting (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-inpainting-image-to-video.md)
- [Wan 2.2 VACE Fun Pose (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-pose-image-to-video.md)
- [Wan 2.2 VACE Fun Depth (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-depth-image-to-video.md)
- [Wan 2.5 Preview (Text-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.5-preview-text-to-video.md)
- [Wan 2.5 Preview (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.5-preview-image-to-video.md)
- [Wan 2.6 (Text-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.6-text-to-video.md)
- [Wan 2.6 (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.6-image-to-video.md)
- [Wan 2.6 (Reference-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.6-reference-to-video.md)
- [Wan 2.6 Flash (Image-to-Video)](/api-references/video-models/alibaba-cloud/wan-2.6-flash-image-to-video.md)
- [ByteDance](/api-references/video-models/bytedance.md)
- [Seedance 1.0 Lite (Text-to-Video)](/api-references/video-models/bytedance/seedance-1.0-lite-text-to-video.md)
- [Seedance 1.0 Lite (Image-to-Video)](/api-references/video-models/bytedance/seedance-1.0-lite-image-to-video.md)
- [Seedance 1.0 Pro (Text-to-Video)](/api-references/video-models/bytedance/seedance-1.0-pro-text-to-video.md)
- [Seedance 1.0 Pro (Image-to-Video)](/api-references/video-models/bytedance/seedance-1.0-pro-image-to-video.md): Coming Soon
- [Seedance 1.0 Pro Fast](/api-references/video-models/bytedance/seedance-1.0-pro-fast.md)
- [OmniHuman](/api-references/video-models/bytedance/omnihuman.md)
- [OmniHuman 1.5](/api-references/video-models/bytedance/omnihuman-1.5.md)
- [Seedance 1.5 Pro](/api-references/video-models/bytedance/seedance-1.5-pro.md)
- [Seedance 2.0](/api-references/video-models/bytedance/seedance-2.0.md)
- [Seedance 2.0 Fast](/api-references/video-models/bytedance/seedance-2.0-fast.md)
- [Google](/api-references/video-models/google.md)
- [Veo 2 (Text-to-Video)](/api-references/video-models/google/veo2-text-to-video.md)
- [Veo 2 (Image-to-Video)](/api-references/video-models/google/veo2-image-to-video.md)
- [Veo 3 (Text-to-Video)](/api-references/video-models/google/veo3-text-to-video.md)
- [Veo 3 (Image-to-Video)](/api-references/video-models/google/veo-3-image-to-video.md)
- [Veo 3 Fast (Text-to-Video)](/api-references/video-models/google/veo-3-fast-text-to-video.md)
- [Veo 3 Fast (Image-to-Video)](/api-references/video-models/google/veo-3-fast-image-to-video.md)
- [Veo 3.1 (Text-to-Video)](/api-references/video-models/google/veo-3-1-text-to-video.md)
- [Veo 3.1 (Image-to-Video)](/api-references/video-models/google/veo-3-1-image-to-video.md)
- [Veo 3.1 (First-Last-Image-to-Video)](/api-references/video-models/google/veo-3-1-first-last-image-to-video.md)
- [Veo 3.1 (Reference-to-Video)](/api-references/video-models/google/veo-3-1-reference-to-video.md)
- [Veo 3.1 Fast (Text-to-Video)](/api-references/video-models/google/veo-3-1-text-to-video-fast.md)
- [Veo 3.1 Fast (Image-to-Video)](/api-references/video-models/google/veo-3-1-image-to-video-fast.md)
- [Veo 3.1 Fast (First-Last-Image-to-Video)](/api-references/video-models/google/veo-3-1-first-last-image-to-video-fast.md)
- [Veo 3.1 Extend Video](/api-references/video-models/google/veo-3.1-extend-video.md)
- [Veo 3.1 Fast Extend Video](/api-references/video-models/google/veo-3.1-fast-extend-video.md)
- [Veo 3.1 Lite Generate Preview](/api-references/video-models/google/veo-3.1-lite-generate-preview.md)
- [Kling AI](/api-references/video-models/kling-ai.md)
- [v1-standard/text-to-video](/api-references/video-models/kling-ai/v1-standard-text-to-video.md)
- [v1-standard/image-to-video](/api-references/video-models/kling-ai/v1-standard-image-to-video.md)
- [v1-pro/text-to-video](/api-references/video-models/kling-ai/v1-pro-text-to-video.md)
- [v1-pro/image-to-video](/api-references/video-models/kling-ai/v1-pro-image-to-video.md)
- [v1.6-standard/text-to-video](/api-references/video-models/kling-ai/v1.6-standard-text-to-video.md)
- [v1.6-standard/image-to-video](/api-references/video-models/kling-ai/v1.6-standart-image-to-video.md)
- [v1.6-standard/multi-image-to-video](/api-references/video-models/kling-ai/v1.6-standard-multi-image-to-video.md)
- [v1.6-pro/text-to-video](/api-references/video-models/kling-ai/v1.6-pro-text-to-video.md)
- [v1.6-pro/image-to-video](/api-references/video-models/kling-ai/v1.6-pro-image-to-video.md)
- [v1.6-standard/effects](/api-references/video-models/kling-ai/v1.6-standard-effects.md)
- [v1.6-pro/effects](/api-references/video-models/kling-ai/v1.6-pro-effects.md)
- [v2-master/text-to-video](/api-references/video-models/kling-ai/v2-master-text-to-video.md)
- [v2-master/image-to-video](/api-references/video-models/kling-ai/v2-master-image-to-video.md)
- [v2.1-standard/image-to-video](/api-references/video-models/kling-ai/v2.1-standard-image-to-video.md)
- [v2.1-pro/image-to-video](/api-references/video-models/kling-ai/v2.1-pro-image-to-video.md)
- [v2.1-master/text-to-video](/api-references/video-models/kling-ai/v2.1-master-text-to-video.md)
- [v2.1-master/image-to-video](/api-references/video-models/kling-ai/v2.1-master-image-to-video.md)
- [v2.5-turbo/pro/text-to-video](/api-references/video-models/kling-ai/v2.5-turbo-pro-text-to-video.md)
- [v2.5-turbo/pro/image-to-video](/api-references/video-models/kling-ai/v2.5-turbo-pro-image-to-video.md)
- [avatar-standard](/api-references/video-models/kling-ai/avatar-standard.md)
- [avatar-pro](/api-references/video-models/kling-ai/avatar-pro.md)
- [v2.6-pro/text-to-video](/api-references/video-models/kling-ai/video-v2-6-pro-text-to-video.md)
- [v2.6-pro/image-to-video](/api-references/video-models/kling-ai/video-v2.6-pro-image-to-video.md)
- [o1/image-to-video](/api-references/video-models/kling-ai/video-o1-image-to-video.md)
- [o1/reference-to-video](/api-references/video-models/kling-ai/video-o1-reference-to-video.md)
- [o1/video-to-video/edit](/api-references/video-models/kling-ai/video-o1-video-to-video-edit.md)
- [o1/video-to-video-reference](/api-references/video-models/kling-ai/video-o1-video-to-video-reference.md)
- [v2.6-pro/motion-control](/api-references/video-models/kling-ai/video-v2.6-pro-motion-control.md)
- [v3-standard/text-to-video](/api-references/video-models/kling-ai/v3-standard-text-to-video.md)
- [v3-standard/image-to-video](/api-references/video-models/kling-ai/v3-standard-image-to-video.md)
- [v3-pro/text-to-video](/api-references/video-models/kling-ai/v3-pro-text-to-video.md)
- [v3-pro/image-to-video](/api-references/video-models/kling-ai/v3-pro-image-to-video.md)
- [Krea](/api-references/video-models/krea.md)
- [krea-wan-14b/text-to-video](/api-references/video-models/krea/krea-wan-14b-text-to-video.md)
- [krea-wan-14b/video-to-video](/api-references/video-models/krea/krea-wan-14b-video-to-video.md)
- [LTXV](/api-references/video-models/ltxv.md)
- [ltxv-2](/api-references/video-models/ltxv/ltxv-2.md)
- [ltxv-2-fast](/api-references/video-models/ltxv/ltxv-2-fast.md)
- [Luma AI](/api-references/video-models/luma-ai.md)
- [Luma Ray 2](/api-references/video-models/luma-ai/luma-ray-2.md)
- [Luma Ray Flash 2](/api-references/video-models/luma-ai/luma-ray-flash-2.md)
- [Magic](/api-references/video-models/magic.md)
- [magic/text-to-video](/api-references/video-models/magic/text-to-video.md)
- [magic/image-to-video](/api-references/video-models/magic/image-to-video.md)
- [magic/video-to-video](/api-references/video-models/magic/video-to-video.md)
- [MiniMax](/api-references/video-models/minimax.md)
- [video-01](/api-references/video-models/minimax/video-01.md)
- [video-01-live2d](/api-references/video-models/minimax/video-01-live2d.md)
- [hailuo-02](/api-references/video-models/minimax/hailuo-02.md)
- [hailuo-2.3](/api-references/video-models/minimax/hailuo-2.3.md)
- [hailuo-2.3-fast](/api-references/video-models/minimax/hailuo-2.3-fast.md)
- [OpenAI](/api-references/video-models/openai.md)
- [sora-2-t2v](/api-references/video-models/openai/sora-2-t2v.md)
- [sora-2-i2v](/api-references/video-models/openai/sora-2-i2v.md)
- [sora-2-pro-t2v](/api-references/video-models/openai/sora-2-pro-t2v.md)
- [sora-2-pro-i2v](/api-references/video-models/openai/sora-2-pro-i2v.md)
- [PixVerse](/api-references/video-models/pixverse.md)
- [v5/text-to-video](/api-references/video-models/pixverse/v5-text-to-video.md)
- [v5/image-to-video](/api-references/video-models/pixverse/v5-image-to-video.md)
- [v5/transition](/api-references/video-models/pixverse/v5-transition.md)
- [v5.5/text-to-video](/api-references/video-models/pixverse/v5-5-text-to-video.md)
- [v5.5/image-to-video](/api-references/video-models/pixverse/v5-5-image-to-video.md)
- [lip-sync](/api-references/video-models/pixverse/lip-sync.md)
- [Runway](/api-references/video-models/runway.md)
- [gen3a\_turbo](/api-references/video-models/runway/gen3a_turbo.md): Description of the gen3a\_turbo model, covering pricing, API reference, and examples.
- [gen4\_turbo](/api-references/video-models/runway/gen4_turbo.md)
- [gen4\_aleph](/api-references/video-models/runway/gen4_aleph.md)
- [act\_two](/api-references/video-models/runway/act_two.md)
- [Sber AI](/api-references/video-models/sber-ai.md)
- [Kandinsky 5 (Text-to-Video)](/api-references/video-models/sber-ai/kandinsky5-text-to-video.md)
- [Kandinsky 5 Distill (Text-to-Video)](/api-references/video-models/sber-ai/kandinsky5-distill-text-to-video.md)
- [Tencent](/api-references/video-models/tencent.md)
- [hunyuan-video-foley](/api-references/video-models/tencent/hunyuan-video-foley.md)
- [VEED](/api-references/video-models/veed.md)
- [fabric-1.0](/api-references/video-models/veed/fabric-1.0.md)
- [fabric-1.0-fast](/api-references/video-models/veed/fabric-1.0-fast.md)
- [Music Models](/api-references/music-models.md): Overview of the capabilities of AIML API audio/music models.
- [ElevenLabs](/api-references/music-models/elevenlabs.md)
- [eleven\_music](/api-references/music-models/elevenlabs/eleven_music.md)
- [Google](/api-references/music-models/google.md)
- [Lyria 2](/api-references/music-models/google/lyria-2.md)
- [MiniMax](/api-references/music-models/minimax.md)
- [minimax-music \[legacy\]](/api-references/music-models/minimax/minimax-music-legacy.md)
- [music-01](/api-references/music-models/minimax/music-01.md)
- [music-1.5](/api-references/music-models/minimax/music-1.5.md)
- [music-2.0](/api-references/music-models/minimax/music-2.0.md)
- [music-2.6](/api-references/music-models/minimax/music-2-6.md)
- [Stability AI](/api-references/music-models/stability-ai.md)
- [stable-audio](/api-references/music-models/stability-ai/stable-audio.md)
- [Voice/Speech Models](/api-references/speech-models.md): Overview of the available speech model providers.
- [Speech-to-Text](/api-references/speech-models/speech-to-text.md)
- [stt \[legacy\]](/api-references/speech-models/speech-to-text/stt-legacy.md)
- [Assembly AI](/api-references/speech-models/speech-to-text/assembly-ai.md)
- [slam-1](/api-references/speech-models/speech-to-text/assembly-ai/slam-1.md)
- [universal](/api-references/speech-models/speech-to-text/assembly-ai/universal.md)
- [Deepgram](/api-references/speech-models/speech-to-text/deepgram.md)
- [nova-2](/api-references/speech-models/speech-to-text/deepgram/nova-2.md)
- [OpenAI](/api-references/speech-models/speech-to-text/openai.md)
- [whisper-base](/api-references/speech-models/speech-to-text/openai/whisper-base.md)
- [whisper-large](/api-references/speech-models/speech-to-text/openai/whisper-large.md)
- [whisper-medium](/api-references/speech-models/speech-to-text/openai/whisper-medium.md)
- [whisper-small](/api-references/speech-models/speech-to-text/openai/whisper-small.md)
- [whisper-tiny](/api-references/speech-models/speech-to-text/openai/whisper-tiny.md)
- [gpt-4o-transcribe](/api-references/speech-models/speech-to-text/openai/gpt-4o-transcribe.md)
- [gpt-4o-mini-transcribe](/api-references/speech-models/speech-to-text/openai/gpt-4o-mini-transcribe.md)
- [Text-to-Speech](/api-references/speech-models/text-to-speech.md): Overview of the capabilities of AIML API Text-to-Speech (TTS) models.
- [Alibaba Cloud](/api-references/speech-models/text-to-speech/alibaba-cloud.md)
- [qwen3-tts-flash](/api-references/speech-models/text-to-speech/alibaba-cloud/qwen3-tts-flash.md)
- [Deepgram](/api-references/speech-models/text-to-speech/deepgram.md)
- [aura](/api-references/speech-models/text-to-speech/deepgram/aura.md)
- [aura 2](/api-references/speech-models/text-to-speech/deepgram/aura-2.md)
- [ElevenLabs](/api-references/speech-models/text-to-speech/elevenlabs.md)
- [eleven\_multilingual\_v2](/api-references/speech-models/text-to-speech/elevenlabs/eleven_multilingual_v2.md)
- [eleven\_turbo\_v2\_5](/api-references/speech-models/text-to-speech/elevenlabs/eleven_turbo_v2_5.md)
- [Hume AI](/api-references/speech-models/text-to-speech/hume-ai.md)
- [octave-2](/api-references/speech-models/text-to-speech/hume-ai/octave-2.md)
- [Inworld](/api-references/speech-models/text-to-speech/inworld.md)
- [inworld/tts-1](/api-references/speech-models/text-to-speech/inworld/tts-1.md)
- [inworld/tts-1-max](/api-references/speech-models/text-to-speech/inworld/tts-1-max.md)
- [inworld/tts-1-5-mini](/api-references/speech-models/text-to-speech/inworld/tts-1-5-mini.md)
- [inworld/tts-1-5-max](/api-references/speech-models/text-to-speech/inworld/tts-1-5-max.md)
- [Microsoft](/api-references/speech-models/text-to-speech/microsoft.md)
- [vibevoice-1.5b](/api-references/speech-models/text-to-speech/microsoft/vibevoice-1.5b.md)
- [vibevoice-7b](/api-references/speech-models/text-to-speech/microsoft/vibevoice-7b.md)
- [OpenAI](/api-references/speech-models/text-to-speech/openai.md)
- [TTS-1](/api-references/speech-models/text-to-speech/openai/tts-1.md)
- [TTS-1 HD](/api-references/speech-models/text-to-speech/openai/tts-1-hd.md)
- [gpt-4o-mini-tts](/api-references/speech-models/text-to-speech/openai/gpt-4o-mini-tts.md)
- [Voice Chat](/api-references/speech-models/voice-chat.md)
- [ElevenLabs](/api-references/speech-models/voice-chat/elevenlabs.md)
- [v3\_alpha](/api-references/speech-models/voice-chat/elevenlabs/v3_alpha.md)
- [MiniMax](/api-references/speech-models/voice-chat/minimax.md)
- [Speech 2.5 Turbo Preview](/api-references/speech-models/voice-chat/minimax/speech-2.5-turbo-preview.md)
- [Speech 2.5 HD Preview](/api-references/speech-models/voice-chat/minimax/speech-2.5-hd-preview.md)
- [Speech 2.6 Turbo](/api-references/speech-models/voice-chat/minimax/speech-2.6-turbo.md)
- [Speech 2.6 HD](/api-references/speech-models/voice-chat/minimax/speech-2.6-hd.md)
- [Speech 2.8 Turbo](/api-references/speech-models/voice-chat/minimax/speech-2.8-turbo.md)
- [Speech 2.8 HD](/api-references/speech-models/voice-chat/minimax/speech-2.8-hd.md)
- [3D-Generating Models](/api-references/3d-generating-models.md): Overview of the capabilities of AIML API 3D-generating models.
- [Stability AI](/api-references/3d-generating-models/stability-ai.md)
- [triposr](/api-references/3d-generating-models/stability-ai/triposr.md)
- [Tencent](/api-references/3d-generating-models/tencent.md)
- [Hunyuan Part](/api-references/3d-generating-models/tencent/hunyuan-part.md)
- [Vision Models](/api-references/vision-models.md): Overview of the capabilities of AIML API vision models.
- [Image Analysis](/api-references/vision-models/image-analysis.md)
- [OCR: Optical Character Recognition](/api-references/vision-models/ocr-optical-character-recognition.md)
- [Google](/api-references/vision-models/ocr-optical-character-recognition/google.md)
- [Google OCR](/api-references/vision-models/ocr-optical-character-recognition/google/google-ocr.md)
- [Mistral AI](/api-references/vision-models/ocr-optical-character-recognition/mistral-ai.md)
- [mistral-ocr-latest](/api-references/vision-models/ocr-optical-character-recognition/mistral-ai/mistral-ocr-latest.md)
- [Zhipu](/api-references/vision-models/ocr-optical-character-recognition/zhipu.md)
- [glm-ocr](/api-references/vision-models/ocr-optical-character-recognition/zhipu/glm-ocr.md)
- [OFR: Optical Feature Recognition](/api-references/vision-models/ofr-optical-feature-recognition.md)
- [Embedding Models](/api-references/embedding-models.md)
- [Alibaba Cloud](/api-references/embedding-models/alibaba-cloud.md)
- [qwen-text-embedding-v3](/api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v3.md)
- [qwen-text-embedding-v4](/api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v4.md)
- [Anthropic](/api-references/embedding-models/anthropic.md)
- [voyage-2](/api-references/embedding-models/anthropic/voyage-2.md)
- [voyage-code-2](/api-references/embedding-models/anthropic/voyage-code-2.md)
- [voyage-finance-2](/api-references/embedding-models/anthropic/voyage-finance-2.md)
- [voyage-large-2](/api-references/embedding-models/anthropic/voyage-large-2.md)
- [voyage-large-2-instruct](/api-references/embedding-models/anthropic/voyage-large-2-instruct.md)
- [voyage-law-2](/api-references/embedding-models/anthropic/voyage-law-2.md)
- [voyage-multilingual-2](/api-references/embedding-models/anthropic/voyage-multilingual-2.md)
- [Google](/api-references/embedding-models/google.md)
- [text-multilingual-embedding-002](/api-references/embedding-models/google/text-multilingual-embedding-002.md)
- [OpenAI](/api-references/embedding-models/openai.md)
- [text-embedding-3-small](/api-references/embedding-models/openai/text-embedding-3-small.md)
- [text-embedding-3-large](/api-references/embedding-models/openai/text-embedding-3-large.md)
- [text-embedding-ada-002](/api-references/embedding-models/openai/text-embedding-ada-002.md)
