# Vision in Text Models

This article describes a specific capability of text models: vision, which enables image-to-text and video-to-text understanding. With vision support, models can interpret visual content and return structured or natural-language responses based on what they see.

Common use cases include describing images, analyzing screenshots, extracting text, understanding charts and documents, identifying objects, summarizing scenes, and processing video frames or clips.

The sections below explain how to work with image and video inputs, along with request examples and supported models.

## :island: Image analysis

<details>

<summary>Supported Model List</summary>

* [alibaba/qwen3-vl-32b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-instruct)
* [alibaba/qwen3-vl-32b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking)
* [alibaba/qwen3.5-plus-20260218](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3.5-plus)
* [alibaba/qwen3.5-omni-plus](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3.5-omni-plus)
* [alibaba/qwen3.5-omni-flash](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3.5-omni-flash)
* [alibaba/qwen3.6-27b](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3.6-27b)
* [claude-sonnet-4-20250514](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet)
* [claude-opus-4-20250514](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus)
* [anthropic/claude-opus-4.1](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-opus-4.1)
* [anthropic/claude-sonnet-4.5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-5-sonnet)
* [anthropic/claude-opus-4-5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-opus)
* [anthropic/claude-opus-4-6](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.6-opus)
* [anthropic/claude-sonnet-4.6](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.6-sonnet)
* [anthropic/claude-opus-4-7](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.7-opus)
* [baidu/ernie-4.5-vl-28b-a3b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-28b-a3b)
* [baidu/ernie-4.5-vl-424b-a47b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-424b-a47b)
* [baidu/ernie-4-5-turbo-vl-32k](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-turbo-vl-32k)
* [google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash)
* [google/gemini-2.5-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash)
* [google/gemini-2.5-pro](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro)
* [google/gemma-3-4b-it](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3)
* [google/gemma-3-27b-it](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3)
* [google/gemini-3-1-pro-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-1-pro-preview)
* [google/gemini-3-1-flash-lite-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-1-flash-lite-preview)
* [google/gemma-4-31b-it](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-4-31b-it)
* [MiniMax-Text-01](https://docs.aimlapi.com/api-references/text-models-llm/minimax/text-01)
* [minimax/m2-her](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-her)
* [minimax/m2-1](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-1)
* [minimax/m2-1-highspeed](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2.1-highspeed)
* [minimax/m2-5-20260218](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-5)
* [minimax/m2-5-highspeed-20260218](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-5-highspeed)
* [minimax/m2-7-20260402](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-7)
* [minimax/m2-7-highspeed](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2.7-highspeed)
* [moonshot/kimi-k2-5](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-5)
* [moonshot/kimi-k2-6](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-6)
* [chatgpt-4o-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)
* [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo)
* [gpt-4-turbo-2024-04-09](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo)
* [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)
* [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)
* [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)
* [gpt-4o-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini)
* [gpt-4o-mini-2024-07-18](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini)
* [openai/gpt-4.1-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1)
* [openai/gpt-4.1-mini-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini)
* [openai/gpt-4.1-nano-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano)
* [openai/o4-mini-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini)
* [openai/o3-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3)
* [o1](https://docs.aimlapi.com/api-references/text-models-llm/openai/o1)
* [openai/gpt-5-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5)
* [openai/gpt-5-mini-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini)
* [openai/gpt-5-nano-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-nano)
* [openai/gpt-5-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-chat)
* [openai/gpt-5-1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1)
* [openai/gpt-5-1-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest)
* [openai/gpt-5-1-codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex)
* [openai/gpt-5-1-codex-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex-mini)
* [openai/gpt-5-2](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2)
* [openai/gpt-5-2-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-chat-latest)
* [openai/gpt-5-2-codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-codex)
* [openai/gpt-5-4](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-4)
* [openai/gpt-5-4-pro](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-4-pro)
* [perplexity/sonar](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar)
* [perplexity/sonar-pro](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar-pro)
* [x-ai/grok-4-fast-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-non-reasoning)
* [x-ai/grok-4-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-reasoning)
* [x-ai/grok-4-1-fast-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-non-reasoning)
* [x-ai/grok-4-1-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-reasoning)
* [x-ai/grok-4-20-0309-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-20-non-reasoning)
* [x-ai/grok-4-20-0309-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-20-reasoning)

</details>

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}

```python
import requests
import json   # for pretty-printing the JSON response

response = requests.post(
    url="https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your AIML API key:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "alibaba/qwen3.5-omni-flash",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Describe the content of this image."
                    },
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/handwriting.jpg"
                        }
                    }
                ]
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```

{% endcode %}
{% endtab %}
{% endtabs %}
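Many OpenAI-compatible chat APIs also accept images inline as base64 data URLs instead of a hosted link. The sketch below assumes this endpoint follows that convention; the data-URL form and the local `handwriting.jpg` path are assumptions rather than something this page confirms, so check the model's API reference before relying on it.

```python
import base64
import requests

# Assumed local file path; replace with your own image.
with open("handwriting.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    url="https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "alibaba/qwen3.5-omni-flash",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe the content of this image."},
                    {
                        "type": "image_url",
                        # Data-URL form, assumed to work as in other
                        # OpenAI-compatible APIs:
                        "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}
                    }
                ]
            }
        ]
    }
)
print(response.json())
```

The response below corresponds to the hosted-URL request above.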

<details>

<summary>Response</summary>

{% code overflow="wrap" %}

```json
{
  "choices": [
    {
      "message": {
        "content": "This image shows a piece of lined notebook paper with handwritten text in black ink. The handwriting is neat, flowing, and appears to be written with a fountain pen — as noted in the text itself.\n\nThe content reads:\n\n> This is a handwriting test to see how it looks on lined paper. For the past two weeks I have been trying to improve my writing along with learning how to write with fountain pens. If you have any suggestions, tips or free resources I would love to check it out. Hope everyone is having a good day. :)\n\nAt the end, there’s a simple smiley face drawn with two dots for eyes and a curved line for a mouth.\n\nThe writer is sharing their progress in improving their handwriting while also learning to use fountain pens, and they’re politely asking for advice or recommendations from others. The tone is friendly and open-ended, inviting interaction.\n\nOverall, it’s a casual, personal note — likely shared online (perhaps on social media or a forum) as part of a “handwriting challenge” or community engagement around calligraphy or penmanship.",
        "reasoning_content": "",
        "role": "assistant"
      },
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null
    }
  ],
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 1253,
    "completion_tokens": 222,
    "total_tokens": 1475,
    "prompt_tokens_details": {
      "image_tokens": 1234,
      "text_tokens": 19
    },
    "completion_tokens_details": {
      "text_tokens": 222
    }
  },
  "created": 1777062253,
  "system_fingerprint": null,
  "model": "qwen3.5-omni-flash",
  "id": "chatcmpl-f7bde975-e2d2-9609-b890-bb1eb983f853",
  "meta": {
    "usage": {
      "credits_used": 2574,
      "usd_spent": 0.001287
    }
  }
}
```

{% endcode %}

</details>
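In practice you often need just the assistant's reply and the request cost rather than the full JSON. Given the response shape shown above, both can be extracted in a few lines; the `.get()` fallbacks are a defensive addition, not part of a documented schema.

```python
# `data` is the parsed JSON response from the request example above.
answer = data["choices"][0]["message"]["content"]

# Billing details appear under meta.usage in the sample response:
usd_spent = data.get("meta", {}).get("usage", {}).get("usd_spent")

print(answer)
print(f"Cost of this request: ${usd_spent}")
```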

***

## :cinema: Video analysis

<details>

<summary>Supported Model List</summary>

* [alibaba/qwen3.5-omni-plus](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3.5-omni-plus)
* [alibaba/qwen3.5-omni-flash](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3.5-omni-flash)

</details>

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}

```python
import requests
import json   # for pretty-printing the JSON response

response = requests.post(
    url="https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your AIML API key:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "alibaba/qwen3.5-omni-flash",
        "messages": [
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": "Describe this scene:"
                    },
                    {
                        "type": "video_url",
                        "video_url": {
                            "url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/aimlapi.mp4"
                        }
                    }
                ]
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```

{% endcode %}
{% endtab %}
{% endtabs %}
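Video files are far larger than images, so these requests typically run longer and are more likely to time out. Below is a more defensive variant of the call above; the 300-second timeout is an illustrative value rather than a documented limit, and the response follows the same chat-completion shape shown in the image analysis section.

```python
import requests

payload = {
    "model": "alibaba/qwen3.5-omni-flash",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this scene:"},
                {
                    "type": "video_url",
                    "video_url": {
                        "url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/aimlapi.mp4"
                    }
                }
            ]
        }
    ]
}

try:
    response = requests.post(
        url="https://api.aimlapi.com/v1/chat/completions",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
            "Content-Type": "application/json"
        },
        json=payload,
        timeout=300  # illustrative value; tune for your video sizes
    )
    response.raise_for_status()  # surface HTTP errors (4xx/5xx) early
except requests.RequestException as err:
    print(f"Request failed: {err}")
else:
    data = response.json()
    print(data["choices"][0]["message"]["content"])
```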


