Features of Anthropic Models

Features

  • Text Completions: Build advanced chatbots or text processors.

  • Function Calling: Use tools for specific tasks and API calls.

  • Vision Tasks: Process and analyze images.

Example Requests

POST /messages

Authorizations

Body

model (string · enum, Required)
max_tokens (number, Optional)

The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via the API.

Default: 512
stop_sequences (string[], Optional)

Custom text sequences that will cause the model to stop generating.

stream (boolean, Optional)

If set to true, the model response data will be streamed to the client as it is generated, using server-sent events.

Default: false
system (string, Optional)

A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role.

temperature (number · max: 1, Optional)

Amount of randomness injected into the response. Ranges from 0.0 to 1.0. Use a temperature closer to 0.0 for analytical or multiple-choice tasks, and closer to 1.0 for creative and generative tasks. Note that even with a temperature of 0.0, the results will not be fully deterministic.

Default: 1
tool_choice (any of, Optional)

Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool. none is the default when no tools are present. auto is the default if tools are present.
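For illustration, the accepted shapes described above can be written out as Python dictionaries (the tool name `my_function` is a hypothetical example taken from the text, not a real tool):

```python
# Let the model decide whether to call a tool (the default when tools are present):
tool_choice_auto = {"type": "auto"}

# Force the model to call one specific tool; the shape follows the docs text
# above, and "my_function" is a hypothetical tool name:
tool_choice_forced = {"type": "function", "function": {"name": "my_function"}}
```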

top_k (number, Optional)

Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Recommended for advanced use cases only; you usually only need to use temperature.

top_p (number, Optional)

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.

Responses

201: Success
POST /messages HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 443

{
  "model": "claude-3-opus-20240229",
  "messages": [
    {
      "role": "user",
      "content": "text"
    }
  ],
  "max_tokens": 512,
  "metadata": {
    "ANY_ADDITIONAL_PROPERTY": "text"
  },
  "stop_sequences": [
    "text"
  ],
  "stream": false,
  "system": "text",
  "temperature": 1,
  "tool_choice": {
    "type": "auto"
  },
  "tools": [
    {
      "name": "text",
      "description": "text",
      "input_schema": {
        "type": "object",
        "properties": null,
        "ANY_ADDITIONAL_PROPERTY": null
      }
    }
  ],
  "top_k": 1,
  "top_p": 1,
  "thinking": {
    "budget_tokens": 1,
    "type": "enabled"
  }
}
201: Success

No content

Vision

Note: The API only supports Base64-encoded strings as image input.

Possible Media Types

  • image/jpeg

  • image/png

  • image/gif

  • image/webp
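The example below fetches an image over HTTP before encoding it. If your image is a local file instead, the same Base64 encoding can be sketched as follows (the helper name `encode_image` is ours, not part of the API; `mimetypes` guesses the media type from the file extension):

```python
import base64
import mimetypes

def encode_image(path: str):
    """Return (media_type, base64_data) for a local image file.

    media_type may be None if the extension is unrecognized.
    """
    media_type, _ = mimetypes.guess_type(path)  # e.g. "image/png"
    with open(path, "rb") as f:
        data = base64.standard_b64encode(f.read()).decode("utf-8")
    return media_type, data
```

The returned pair maps directly onto the `media_type` and `data` fields of the `source` object in the request below.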

import httpx
import base64
from openai import OpenAI

client = OpenAI(
    base_url='https://api.aimlapi.com',
    api_key='<YOUR_AIMLAPI_KEY>'    
)  

image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_media_type = "image/jpeg"
image_data = base64.standard_b64encode(httpx.get(image_url).content).decode("utf-8")

response = client.chat.completions.create(
    model="claude-3-5-sonnet-latest",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": image_media_type,
                        "data": image_data,
                    },
                },
                {
                    "type": "text",
                    "text": "Describe this image."
                }
            ],
        }
    ],
)
print(response)

Function Calling

To process text and use function calling, follow the examples below:

Example Request 1: Get Weather Information

import requests

url = "https://api.aimlapi.com/messages"
headers = {
    "Authorization": "Bearer YOUR_AIMLAPI_KEY",
    "Content-Type": "application/json"
}
payload = {
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        }
      }
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like in San Francisco?"
    }
  ],
  "stream": false
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
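When the model decides to use a tool, the response contains tool-call content rather than plain text. The exact response shape is not documented above, so as a sketch, assuming Anthropic-style content blocks with a `tool_use` type:

```python
def find_tool_calls(message: dict) -> list:
    """Collect tool_use blocks from an Anthropic-style message.

    Assumes message["content"] is a list of content blocks, each carrying a
    "type" field; this shape is an assumption, not confirmed by the docs above.
    """
    content = message.get("content", [])
    if isinstance(content, str):
        # Plain-text response: no tool calls present.
        return []
    return [block for block in content if block.get("type") == "tool_use"]
```

Each returned block would carry the tool name and its input arguments, which your application can execute before sending the result back in a follow-up request.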

Example Request 2: Simple Text Response

import requests

url = "https://api.aimlapi.com/messages"
headers = {
    "Authorization": "Bearer YOUR_AIMLAPI_KEY",
    "Content-Type": "application/json"
}
payload = {
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "How are you?"
    }
  ],
  "stream": false
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())

Pro tip: you can assign a system role to the Claude models by using the "system" parameter outside of the messages array.

{
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 2048,

    # role prompt:
    "system": "You are a seasoned data scientist at a Fortune 500 company.",
    "messages": [
        {"role": "user", "content": "Analyze this dataset for anomalies: <dataset>{{DATASET}}</dataset>"}
    ]
}

Example Response

Responses from the AI/ML API for Anthropic models typically include the generated text or the results of a tool call. Here is an example response for a weather query:

{
  "id": "msg-12345",
  "object": "message",
  "created": 1627684940,
  "model": "claude-3-5-sonnet-20240620",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The weather in San Francisco is currently sunny with a temperature of 68°F."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 15,
    "total_tokens": 25
  }
}
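Given a response in the shape shown above, the assistant's text can be pulled out with a small helper (a sketch; the field names follow the example response, and the helper name is ours):

```python
def extract_text(response: dict) -> str:
    """Return the assistant message content from the first choice."""
    return response["choices"][0]["message"]["content"]
```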

Streaming with Python SDK

To enable streaming of responses, set "stream": true in your request payload.

import requests

url = "https://api.aimlapi.com/messages"
headers = {
    "Authorization": "Bearer YOUR_AIMLAPI_KEY",
    "Content-Type": "application/json"
}
payload = {
  "model": "claude-3-5-sonnet-20240620",
  "max_tokens": 1024,
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          }
        }
      }
    }
  ],
  "messages": [
    {
      "role": "user",
      "content": "What is the weather like in San Francisco?"
    }
  ],
  "stream": true
}

response = requests.post(url, json=payload, headers=headers, stream=True)
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))
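Each streamed chunk arrives as a server-sent-events line of the form `data: {...}`. A minimal line parser might look like the sketch below (the `[DONE]` sentinel and the payload shape are assumptions, not confirmed by the docs above):

```python
import json

def parse_sse_line(line: str):
    """Parse one server-sent-events line.

    Returns the decoded JSON payload, the string "[DONE]" for the assumed
    end-of-stream sentinel, or None for blank/comment lines.
    """
    line = line.strip()
    if not line or line.startswith(":"):
        return None
    if line.startswith("data:"):
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return "[DONE]"
        return json.loads(payload)
    return None
```

You would call this on each decoded line from `response.iter_lines()` and accumulate the text deltas from the returned payloads.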
