# Cline

## About

Cline is an open-source AI coding assistant with two working modes (Plan/Act), terminal command execution, and support for the Model Context Protocol (MCP) in VS Code.

You can find the Cline repository and community on [GitHub](https://github.com/cline).

## Installing Cline in VS Code

1. Open the **Extensions** tab in the VS Code sidebar.

<figure><img src="/files/P9JCpYBmOCH0vnPsmnIM" alt=""><figcaption></figcaption></figure>

2. In the search bar, type **Cline**.
3. Find the extension and click **Install**.

<figure><img src="/files/P42L60QyEqZ8CZpZNBm2" alt=""><figcaption></figcaption></figure>

4. After installation, a separate **Cline** tab will appear in the sidebar.

<figure><img src="/files/abP7Xww1Rlcco0lkI0zr" alt=""><figcaption></figcaption></figure>

## Configuring Cline

1. Go to the **Cline** tab in the sidebar.
2. Click the gear icon in the top-right corner.

<figure><img src="/files/6K2X9vVOLGKtQg9iSPJU" alt=""><figcaption></figcaption></figure>

In the settings:

* Set **API Provider** to **OpenAI Compatible**.
* In **Base URL**, enter one of our available endpoints.
* In **API Key**, enter your [AI/ML API key](https://aimlapi.com/app/keys).
* In **Model ID**, specify the model name. You can find some model selection tips in our [description of code generation as a capability](/capabilities/code-generation.md).
* Click **Save**.

All done — start coding with Cline!
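Under the hood, Cline sends standard OpenAI-style chat completion requests to whatever **Base URL** you configure. As a rough sanity check of your settings, the sketch below builds such a request in Python. The base URL, key placeholder, and model ID are examples only; substitute the values from your own configuration.

```python
import json

# Example values — replace with your own endpoint, key, and model ID.
BASE_URL = "https://api.aimlapi.com/v1"
API_KEY = "<YOUR_AIMLAPI_KEY>"

# An OpenAI-compatible chat completion request body, as Cline would send it.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Say hello"}],
}
headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
print(json.dumps(payload))
```

To send the request yourself, POST this payload with these headers to `<Base URL>/chat/completions` using any HTTP client.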

## Usage Example

Here’s the request we made:

```
Create a Python file named test and add code to print Hello, world
```

<figure><img src="/files/5EAeDo6OdsPZfdRC7kun" alt=""><figcaption></figcaption></figure>

If you expand the **API Request** section, you can view the data — including your prompt.

Since we asked Cline to create a file, it generated one. You can see a preview of its contents, but the file hasn’t been saved yet.

To save the file, Cline asks for confirmation.

<figure><img src="/files/WUUKdy7G19eTtnesffEn" alt=""><figcaption></figcaption></figure>

Once the file is saved, a second API request appears with metadata, along with a notification that the task was successfully completed.
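The steps above can be sketched as follows. This is an illustration of the outcome, not Cline’s actual implementation: the file contents are prepared first, and only written to disk once you confirm.

```python
from pathlib import Path

# What Cline might produce for the request above (illustrative).
code = 'print("Hello, world")\n'

# The write happens only after the user clicks the confirmation button.
Path("test.py").write_text(code)
```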

## Supported Models

These models have been tested by our team and confirmed to work with the Cline integration.

<details>

<summary>Supported Model List</summary>

* [gpt-3.5-turbo](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
* [gpt-3.5-turbo-0125](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
* [gpt-3.5-turbo-1106](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
* [gpt-4o](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4o-2024-05-13](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4o-2024-08-06](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4o-mini](/api-references/text-models-llm/openai/gpt-4o-mini.md)
* [gpt-4o-mini-2024-07-18](/api-references/text-models-llm/openai/gpt-4o-mini.md)
* [chatgpt-4o-latest](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4-turbo](/api-references/text-models-llm/openai/gpt-4-turbo.md)
* [gpt-4-turbo-2024-04-09](/api-references/text-models-llm/openai/gpt-4-turbo.md)
* [gpt-4-0125-preview](/api-references/text-models-llm/openai/gpt-4-preview.md)
* [gpt-4-1106-preview](/api-references/text-models-llm/openai/gpt-4-preview.md)
* [o3-mini](/api-references/text-models-llm/openai/o3-mini.md)
* [openai/gpt-4.1-2025-04-14](/api-references/text-models-llm/openai/gpt-4.1.md)
* [openai/gpt-4.1-mini-2025-04-14](/api-references/text-models-llm/openai/gpt-4.1-mini.md)
* [openai/gpt-4.1-nano-2025-04-14](/api-references/text-models-llm/openai/gpt-4.1-nano.md)
* [openai/o4-mini-2025-04-16](/api-references/text-models-llm/openai/o4-mini.md)
* [deepseek/deepseek-chat](/api-references/text-models-llm/deepseek/deepseek-chat.md)
* [deepseek/deepseek-r1](/api-references/text-models-llm/deepseek/deepseek-r1.md)
* [meta-llama/Llama-3.3-70B-Instruct-Turbo](/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo.md)
* [Qwen/Qwen2.5-7B-Instruct-Turbo](/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo.md)
* [qwen-max](/api-references/text-models-llm/alibaba-cloud/qwen-max.md)
* [qwen-max-2025-01-25](/api-references/text-models-llm/alibaba-cloud/qwen-max.md)
* [qwen-plus](/api-references/text-models-llm/alibaba-cloud/qwen-plus.md)
* [qwen-turbo](/api-references/text-models-llm/alibaba-cloud/qwen-turbo.md)
* [anthracite-org/magnum-v4-72b](/api-references/text-models-llm/anthracite/magnum-v4.md)
* [google/gemini-2.0-flash](/api-references/text-models-llm/google/gemini-2.0-flash.md)
* [mistralai/mistral-nemo](/api-references/text-models-llm/mistral-ai/mistral-nemo.md)
* [MiniMax-Text-01](/api-references/text-models-llm/minimax/text-01.md)
* [x-ai/grok-3-beta](/api-references/text-models-llm/xai/grok-3-beta.md)
* [x-ai/grok-3-mini-beta](/api-references/text-models-llm/xai/grok-3-mini-beta.md)

</details>

## Troubleshooting

Possible Issues:

* **403 status code (no body)** — the most common error. Possible causes:
  * You may need to use a different endpoint. Check the documentation page for the specific model you selected from our catalog.
  * Your account may have run out of tokens, or may not have enough for the request. Check your balance in your account dashboard.
* **400 status code (no body)** — this error occurs when the requested model is not compatible with the integration. See the [Supported Models](#supported-models) section above.
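If you are scripting against the API directly, a small helper like the hypothetical one below can map these status codes to their likely causes. The messages are paraphrased from the list above, not returned by the API itself.

```python
# Hypothetical helper: maps the status codes described above to likely causes.
CAUSES = {
    403: "Wrong endpoint for this model, or insufficient token balance.",
    400: "Model ID is not supported by the Cline integration.",
}

def diagnose(status: int) -> str:
    """Return a likely cause for a failed request's HTTP status code."""
    return CAUSES.get(status, "See the AI/ML API documentation.")
```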


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.aimlapi.com/integrations/cline.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
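For example, such a request URL can be built in Python with the standard library (the question text below is only an illustration):

```python
from urllib.parse import urlencode

# Build a documentation query URL with a URL-encoded `ask` parameter.
base = "https://docs.aimlapi.com/integrations/cline.md"
question = "Which Base URL should I use with Cline?"
url = f"{base}?{urlencode({'ask': question})}"
print(url)
```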
