Features of Anthropic Models
Overview
Models from Anthropic can be accessed not only via the standard /v1/chat/completions endpoint but also through dedicated endpoints: /messages, /v1/batches, and /v1/batches/cancel/{batch_id}.
The sections below describe their API schemas, usage specifics, and example requests.
Supported capabilities:
Text completions: Build advanced chatbots or text processors.
Function Calling: Utilize tools for specific tasks and API calling.
Stream mode: Get chat model responses as they are generated, rather than waiting for the entire response to complete.
Batch Processing: Send multiple independent requests in a single API call.
Vision Tasks: Process and analyze images.
Text Completions
Ask something and get an answer in a chat-like conversation format.
max_tokens (e.g. 1024): The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via the API.
stop_sequences: Custom text sequences that will cause the model to stop generating.
stream (default: false): If set to true, the model response data will be streamed to the client as it is generated, using server-sent events.
system: A system prompt is a way of providing context and instructions to Claude, such as specifying a particular goal or role.
temperature (default: 1.0): Amount of randomness injected into the response. Ranges from 0.0 to 1.0. Use a temperature closer to 0.0 for analytical / multiple-choice tasks, and closer to 1.0 for creative and generative tasks. Note that even with a temperature of 0.0, the results will not be fully deterministic.
tool_choice: Controls which (if any) tool is called by the model. auto means the model can pick between generating a message or calling one or more tools; it is the default when tools are present. any means the model must call one or more tools. Specifying a particular tool via {"type": "tool", "name": "my_function"} forces the model to call that tool. none means the model will not call any tool and instead generates a message; it is the default when no tools are present.
top_k: Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Recommended for advanced use cases only; you usually only need to use temperature.
top_p: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
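For example, forcing the model to call a specific tool uses Anthropic's native tool_choice format (my_function here is a hypothetical tool name):
"tool_choice": { "type": "tool", "name": "my_function" }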
POST /messages HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer YOUR_SECRET_TOKEN
Content-Type: application/json
Accept: */*
Content-Length: 444
{
"model": "claude-3-opus-20240229",
"messages": [
{
"role": "user",
"content": "text"
}
],
"max_tokens": 1024,
"metadata": {
"ANY_ADDITIONAL_PROPERTY": "text"
},
"stop_sequences": [
"text"
],
"stream": false,
"system": "text",
"temperature": 1,
"tool_choice": {
"type": "auto"
},
"tools": [
{
"name": "text",
"description": "text",
"input_schema": {
"type": "object",
"properties": null,
"ANY_ADDITIONAL_PROPERTY": null
}
}
],
"top_k": 1,
"top_p": 1,
"thinking": {
"budget_tokens": 1,
"type": "enabled"
}
}
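A minimal Python sketch of the request above (the system prompt, temperature, stop sequence, and user message values are illustrative):
import requests
response = requests.post(
    "https://api.aimlapi.com/messages",
    headers={
        "Authorization": "Bearer YOUR_AIMLAPI_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-opus-20240229",
        "max_tokens": 1024,
        "system": "You are a helpful assistant.",
        "temperature": 0.7,
        "stop_sequences": ["###"],
        "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
    },
)
print(response.json())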
Function Calling
To process text and use function calling, follow the examples below:
Example #1: Get Weather Information
import requests
url = "https://api.aimlapi.com/messages"
headers = {
"Authorization": "Bearer YOUR_AIMLAPI_KEY",
"Content-Type": "application/json"
}
payload = {
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"tools": [
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
}
}
}
],
"messages": [
{
"role": "user",
"content": "What is the weather like in San Francisco?"
}
],
"stream": false
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
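If the model decides to use the tool, the response will contain a tool_use content block instead of plain text. Below is a minimal sketch of the follow-up request that returns the tool's result to the model; the tool_use id ("toolu_123") and the weather value are illustrative placeholders:
import requests
url = "https://api.aimlapi.com/messages"
headers = {
    "Authorization": "Bearer YOUR_AIMLAPI_KEY",
    "Content-Type": "application/json"
}
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 1024,
    # pass the same tool definitions again on the follow-up call
    "tools": [
        {
            "name": "get_weather",
            "description": "Get the current weather in a given location",
            "input_schema": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    }
                }
            }
        }
    ],
    "messages": [
        {"role": "user", "content": "What is the weather like in San Francisco?"},
        # the assistant turn that requested the tool call, as returned by the API
        {"role": "assistant", "content": [
            {"type": "tool_use", "id": "toolu_123", "name": "get_weather",
             "input": {"location": "San Francisco, CA"}}
        ]},
        # your computed result, keyed to the tool_use id
        {"role": "user", "content": [
            {"type": "tool_result", "tool_use_id": "toolu_123", "content": "68°F, sunny"}
        ]}
    ]
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
Example #2: Simple Text Response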
import requests
url = "https://api.aimlapi.com/messages"
headers = {
"Authorization": "Bearer YOUR_AIMLAPI_KEY",
"Content-Type": "application/json"
}
payload = {
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"messages": [
{
"role": "user",
"content": "How are you?"
}
],
"stream": false
}
response = requests.post(url, json=payload, headers=headers)
print(response.json())
Pro tip: you can assign a system role to the Claude models by using the "system" parameter outside of the messages array.
payload = {
    "model": "claude-3-5-sonnet-20240620",
    "max_tokens": 2048,
    # role prompt:
    "system": "You are a seasoned data scientist at a Fortune 500 company.",
    "messages": [
        {"role": "user", "content": "Analyze this dataset for anomalies: <dataset>{{DATASET}}</dataset>"}
    ]
}
Streaming Mode
To enable streaming of responses, set "stream": True in your request payload.
import requests
url = "https://api.aimlapi.com/messages"
headers = {
"Authorization": "Bearer YOUR_AIMLAPI_KEY",
"Content-Type": "application/json"
}
payload = {
"model": "claude-3-5-sonnet-20240620",
"max_tokens": 1024,
"tools": [
{
"name": "get_weather",
"description": "Get the current weather in a given location",
"input_schema": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
}
}
}
}
],
"messages": [
{
"role": "user",
"content": "What is the weather like in San Francisco?"
}
]Batch Processing
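The stream arrives as server-sent events; each data: line carries a JSON event. A minimal sketch of printing the incremental text from content_block_delta events, assuming Anthropic's streaming event shape:
import json
for line in response.iter_lines():
    # skip keep-alive blanks and "event:" lines; keep only "data: {...}"
    if not line or not line.startswith(b"data: "):
        continue
    event = json.loads(line[len(b"data: "):])
    if event.get("type") == "content_block_delta":
        # text deltas arrive under delta.text
        print(event["delta"].get("text", ""), end="", flush=True)
Batch Processing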
Because of the complexity of its description, this capability is covered on a separate page.
Vision
Possible media types:
image/jpeg, image/png, image/gif, image/webp
import httpx
import base64
from openai import OpenAI
client = OpenAI(
base_url='https://api.aimlapi.com',
api_key='<YOUR_AIMLAPI_KEY>'
)
image_url = "https://upload.wikimedia.org/wikipedia/commons/a/a7/Camponotus_flavomarginatus_ant.jpg"
image_media_type = "image/jpeg"
image_data = base64.standard_b64encode(httpx.get(image_url).content).decode("utf-8")
response = client.chat.completions.create(
model="claude-3-5-sonnet-latest",
messages=[
{
"role": "user",
"content": [
{
"type": "image",
"source": {
"type": "base64",
"media_type": image_media_type,
"data": imag1_data,
},
},
{
"type": "text",
"text": "Describe this image."
}
],
}
],
)
print(response)
Response Format
The responses from the AI/ML API for Anthropic models typically include the generated text or the result of the tool call. Here is an example response for a weather query:
{
"id": "msg-12345",
"object": "message",
"created": 1627684940,
"model": "claude-3-5-sonnet-20240620",
"choices": [
{
"message": {
"role": "assistant",
"content": "The weather in San Francisco is currently sunny with a temperature of 68°F."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 10,
"completion_tokens": 15,
"total_tokens": 25
}
}
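Assuming the response shape above, a minimal sketch of extracting the generated text from the parsed JSON:
data = response.json()
# the assistant's reply lives under choices[0].message.content
print(data["choices"][0]["message"]["content"])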