# Roo Code

## About

Roo Code is an autonomous AI coding agent that works directly inside VS Code. It helps you code faster and smarter — whether you're starting a new project, maintaining existing code, or exploring new technologies.

You can find the Roo Code repository and community on [GitHub](https://github.com/RooCodeInc/Roo-Code).

## Installing Roo Code in VS Code

1. Open the **Extensions** tab in the VS Code sidebar.

<figure><img src="/files/66UUCPrWu5gAs6w3Iez1" alt=""><figcaption></figcaption></figure>

2. In the search bar, type **Roo Code**.
3. Find the extension and click **Install**.

<figure><img src="/files/jLEXWljwlms4jxQblJnm" alt=""><figcaption></figcaption></figure>

4. After installation, a separate **Roo Code** tab will appear in the sidebar.

<figure><img src="/files/O2RKTUAAaLkGJIanljL0" alt=""><figcaption></figcaption></figure>

## Configuring Roo Code

1. Go to the **Roo Code** tab in the sidebar.
2. Click the gear icon in the top-right corner.

<figure><img src="/files/yUwKAJEX20tOxJ2pHtI1" alt=""><figcaption></figcaption></figure>

In the settings:

* Set **API Provider** to **OpenAI Compatible**.
* In **Base URL**, enter one of our available endpoints. The correct endpoint can depend on the model you choose, so check the documentation page for that model.
* In **API Key**, enter your [AI/ML API key](https://aimlapi.com/app/keys).
* In **Model ID**, specify the model name. You can find some model selection tips in our [description of code generation as a capability](/capabilities/code-generation.md).
* Click **Save** and **Done**.

<figure><img src="/files/UvOzyJJHYc8Ar4w2VAXh" alt=""><figcaption></figcaption></figure>

All done — start coding with Roo Code!
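The settings above correspond to a standard OpenAI-compatible chat-completion request, which can be useful to know when verifying your key and Model ID outside the editor. The sketch below assembles such a request; the base URL shown is a placeholder — substitute the endpoint, API key, and Model ID you entered in the Roo Code settings:

```python
import json

def build_chat_request(base_url, api_key, model, prompt):
    """Assemble the URL, headers, and JSON body of an
    OpenAI-compatible /chat/completions request — the same
    shape of request Roo Code sends using the settings above."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(payload)

# Placeholder values — replace with your real Base URL,
# AI/ML API key, and Model ID:
url, headers, body = build_chat_request(
    "https://api.example.com/v1",
    "<YOUR_AIML_API_KEY>",
    "gpt-4o",
    "Say hello",
)
print(url)  # https://api.example.com/v1/chat/completions
```

Sending this request with any HTTP client and getting a successful response is a quick way to confirm that the key and model name you gave Roo Code are valid.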

{% hint style="info" %}
Roo Code offers a wide range of configurable parameters, and most of them include a short description of their purpose directly below the setting.

<img src="/files/KlZeGe3BVA602WbgWeDb" alt="" data-size="original">
{% endhint %}

## Supported Models

The following models have been tested by our team for compatibility with Roo Code.

<details>

<summary>Supported Model List</summary>

* [gpt-3.5-turbo](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
* [gpt-3.5-turbo-0125](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
* [gpt-3.5-turbo-1106](/api-references/text-models-llm/openai/gpt-3.5-turbo.md)
* [gpt-4o](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4o-2024-05-13](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4o-2024-08-06](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4o-mini](/api-references/text-models-llm/openai/gpt-4o-mini.md)
* [gpt-4o-mini-2024-07-18](/api-references/text-models-llm/openai/gpt-4o-mini.md)
* [chatgpt-4o-latest](/api-references/text-models-llm/openai/gpt-4o.md)
* [gpt-4-turbo](/api-references/text-models-llm/openai/gpt-4-turbo.md)
* [gpt-4-turbo-2024-04-09](/api-references/text-models-llm/openai/gpt-4-turbo.md)
* [gpt-4-0125-preview](/api-references/text-models-llm/openai/gpt-4-preview.md)
* [gpt-4-1106-preview](/api-references/text-models-llm/openai/gpt-4-preview.md)
* [o3-mini](/api-references/text-models-llm/openai/o3-mini.md)
* [openai/gpt-4.1-2025-04-14](/api-references/text-models-llm/openai/gpt-4.1.md)
* [openai/gpt-4.1-mini-2025-04-14](/api-references/text-models-llm/openai/gpt-4.1-mini.md)
* [openai/gpt-4.1-nano-2025-04-14](/api-references/text-models-llm/openai/gpt-4.1-nano.md)
* [openai/o4-mini-2025-04-16](/api-references/text-models-llm/openai/o4-mini.md)
* [deepseek/deepseek-chat](/api-references/text-models-llm/deepseek/deepseek-chat.md)
* [deepseek/deepseek-r1](/api-references/text-models-llm/deepseek/deepseek-r1.md)
* [meta-llama/Llama-3.3-70B-Instruct-Turbo](/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo.md)
* [Qwen/Qwen2.5-7B-Instruct-Turbo](/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo.md)
* [qwen-max](/api-references/text-models-llm/alibaba-cloud/qwen-max.md)
* [qwen-max-2025-01-25](/api-references/text-models-llm/alibaba-cloud/qwen-max.md)
* [qwen-plus](/api-references/text-models-llm/alibaba-cloud/qwen-plus.md)
* [qwen-turbo](/api-references/text-models-llm/alibaba-cloud/qwen-turbo.md)
* [anthracite-org/magnum-v4-72b](/api-references/text-models-llm/anthracite/magnum-v4.md)
* [google/gemini-2.0-flash](/api-references/text-models-llm/google/gemini-2.0-flash.md)
* [mistralai/mistral-nemo](/api-references/text-models-llm/mistral-ai/mistral-nemo.md)
* [MiniMax-Text-01](/api-references/text-models-llm/minimax/text-01.md)
* [x-ai/grok-3-beta](/api-references/text-models-llm/xai/grok-3-beta.md)
* [x-ai/grok-3-mini-beta](/api-references/text-models-llm/xai/grok-3-mini-beta.md)

</details>

## Troubleshooting

Possible issues:

* **403 status code (no body)** — the most common error. Possible causes:
  * You might need to use a different endpoint. Be sure to refer to the documentation for the specific model you've selected from our catalog.
  * Your account may have run out of tokens, or the balance may be too low for the request. Check your balance in your account dashboard.
* **400 status code (no body)** — this error occurs when using a model that is not compatible with the integration. See the [Supported Models](#supported-models) section above.
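As a quick reference, the mapping above can be captured in a small helper. This is a hypothetical utility for your own scripts, not part of Roo Code or our API:

```python
def diagnose_status(code):
    """Hypothetical helper mapping the HTTP status codes described
    above to their likely causes in a Roo Code integration."""
    hints = {
        403: ("Check that the Base URL matches the endpoint documented "
              "for your model, and that your account has enough tokens."),
        400: ("The selected Model ID is likely not compatible with the "
              "integration; pick one from the Supported Models list."),
    }
    return hints.get(code, "See the model's documentation page for details.")

print(diagnose_status(403))
```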


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.aimlapi.com/integrations/roo-code.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
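The GET request above can be assembled with Python's standard library; the main detail to get right is URL-encoding the question before placing it in the `ask` parameter:

```python
from urllib.parse import urlencode

DOC_URL = "https://docs.aimlapi.com/integrations/roo-code.md"

def build_ask_url(question):
    """Build the documentation-query URL with the question
    URL-encoded in the `ask` query parameter."""
    return DOC_URL + "?" + urlencode({"ask": question})

print(build_ask_url("Which endpoint should I use for gpt-4o?"))
# → https://docs.aimlapi.com/integrations/roo-code.md?ask=Which+endpoint+should+I+use+for+gpt-4o%3F
```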
