Embedding Models
We support multiple embedding models. You can find the full list of supported models, along with API reference links, at the end of the page.
Embeddings from AI/ML API quantify the similarity between text strings. These embeddings are particularly useful for:
Search: Rank search results by their relevance to a query.
Clustering: Group similar text strings together.
Recommendations: Suggest items based on related text strings.
Anomaly Detection: Identify outliers that differ significantly from the norm.
Diversity Measurement: Analyze the spread of similarities within a dataset.
Classification: Categorize text strings by comparing them to labeled examples.
An embedding is a vector (list) of floating-point numbers, where the distance between vectors indicates their relatedness. Smaller distances indicate higher similarity, while larger distances suggest lower similarity.
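To make "distance between vectors" concrete, here is a minimal sketch of cosine similarity, one common relatedness measure for embeddings. It uses only the Python standard library, and the three-dimensional toy vectors are made up for illustration; real embeddings have hundreds or thousands of dimensions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: close to 1.0 for related vectors, lower for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for real embedding output
cat = [0.8, 0.1, 0.1]
kitten = [0.75, 0.15, 0.1]
car = [0.1, 0.1, 0.9]

print(cosine_similarity(cat, kitten))  # high: related concepts
print(cosine_similarity(cat, car))     # lower: unrelated concepts
```

Ranking search results "by relevance to a query" amounts to embedding the query and each document, then sorting documents by this score.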
For more information on embeddings pricing, visit our pricing page. Costs are calculated based on the number of tokens in the input.
The response will include the embedding vector and additional metadata.

By default, the length of the embedding vector is 1536 for text-embedding-3-small or 3072 for text-embedding-3-large. You can reduce the dimensions of the embedding using the dimensions parameter without losing its ability to represent concepts. More details on embedding dimensions can be found in the embedding use case section.
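Shortening via the dimensions parameter is roughly equivalent to truncating the full vector and re-normalizing it to unit length client-side, as described in OpenAI's documentation for the text-embedding-3 models. A sketch of that client-side operation, using a toy vector rather than real model output:

```python
import math

def shorten_embedding(vec, dims):
    """Truncate an embedding to its leading dimensions and re-normalize to unit length."""
    cut = vec[:dims]
    norm = math.sqrt(sum(x * x for x in cut))
    return [x / norm for x in cut]

full = [0.5, 0.5, 0.5, 0.5]        # toy 4-dim "embedding"
short = shorten_embedding(full, 2)  # keep the first 2 dims, unit length again
print(short)
```

Passing dimensions in the API request achieves the same result server-side and saves bandwidth and storage.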
Here's how to use the embeddings API in Python:
This Python example shows how to set up an API client, send text to the embeddings API, and handle the response to extract and print the embedding vector.
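The original code sample did not survive extraction, so the sketch below reconstructs a typical call. It assumes an OpenAI-compatible REST endpoint; the base URL, model name, and exact response shape are assumptions, not details confirmed by this page.

```python
import json
import urllib.request

API_URL = "https://api.aimlapi.com/v1/embeddings"  # assumed endpoint
API_KEY = "YOUR_API_KEY"

def build_request(text, model="text-embedding-3-small"):
    """Build the JSON body for an embeddings call."""
    return {"model": model, "input": text}

def parse_embedding(payload):
    """Extract the first embedding vector from a response body."""
    return payload["data"][0]["embedding"]

def embed(text, model="text-embedding-3-small"):
    """Send one string to the embeddings endpoint and return its vector."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(text, model)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return parse_embedding(json.load(resp))

# Live call (requires a valid key and network access):
# vector = embed("The food was delicious and the service excellent.")
# print(len(vector))  # 1536 for text-embedding-3-small
```

The response body also carries metadata such as the model name and token usage alongside the vector, which is why parse_embedding digs into the data field rather than using the payload directly.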
[Model comparison table not recoverable from extraction. Supported providers include OpenAI, Together AI, BAAI, and Anthropic, with per-model maximum input limits ranging from 2,000 to 32,000 tokens.]