gpt-4o-search-preview
Model Overview
A specialized model trained to understand and execute web search queries through the Chat Completions API.
How to Make a Call
API Schema
Creates a chat completion using a language model, allowing interactive conversation by predicting the next response based on the given chat history. This is useful for AI-driven dialogue systems and virtual assistants.
Authorizations
Authorization · string · Required
Bearer authentication header of the form Authorization: Bearer <YOUR_AIMLAPI_KEY>.
Body
model · enum · Required
Possible values: gpt-4o-search-preview
max_tokens · number · min: 1 · Optional · Default: 512
The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via the API.
stream · boolean · Optional · Default: false
If set to true, the model response data is streamed to the client as it is generated, using server-sent events (see the streaming sketch below).
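Because streaming delivers the response as server-sent events, the client has to read it incrementally. Below is a minimal Python sketch, assuming the endpoint follows the common OpenAI-style SSE framing (data: {...} chunks terminated by data: [DONE]); the choices[0].delta.content access path comes from that convention and may need adjusting for this endpoint.

import json
import requests

# Minimal streaming sketch: send the request with "stream": true and read
# the server-sent events line by line as they arrive.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
    },
    json={
        "model": "gpt-4o-search-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
)

for line in response.iter_lines():
    if not line:
        continue
    decoded = line.decode("utf-8")
    if not decoded.startswith("data: "):
        continue
    payload = decoded[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # Each SSE chunk is assumed to carry an incremental delta of the message.
    choices = chunk.get("choices") or []
    if choices:
        delta = choices[0].get("delta", {}).get("content")
        if delta:
            print(delta, end="", flush=True)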
Responses
201 – Success

Request sample:
POST /v1/chat/completions HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 336
{
"model": "gpt-4o-search-preview",
"messages": [
{
"role": "user",
"content": "text",
"name": "text"
}
],
"max_tokens": 512,
"stream": false,
"stream_options": {
"include_usage": true
},
"web_search_options": {
"search_context_size": "low",
"user_location": {
"approximate": {
"city": "text",
"country": "text",
"region": "text",
"timezone": "text"
},
"type": "approximate"
}
}
}
Response sample: 201 – Success (no content)
Code Example
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
    },
    json={
        "model": "gpt-4o-search-preview",
        "messages": [
            {
                "role": "user",
                # Insert your question for the model here, instead of Hello:
                "content": "Hello",
            }
        ],
    },
)

data = response.json()
print(data)
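The request schema above also accepts web_search_options (a search_context_size and an approximate user_location), which the basic example omits. The sketch below extends the same call with those fields; the question text, the location values, and the data["choices"][0]["message"]["content"] access path are illustrative assumptions based on the standard Chat Completions response shape, not requirements stated on this page.

import requests

# Sketch of the same call with web search options, using the field names
# shown in the request schema above. All concrete values are placeholders.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
    },
    json={
        "model": "gpt-4o-search-preview",
        "messages": [
            {"role": "user", "content": "What are today's top technology headlines?"}
        ],
        "max_tokens": 512,
        "web_search_options": {
            # "low" keeps the retrieved search context small; the schema
            # example above uses this value.
            "search_context_size": "low",
            "user_location": {
                "type": "approximate",
                "approximate": {
                    "city": "London",
                    "country": "GB",
                    "region": "England",
                    "timezone": "Europe/London",
                },
            },
        },
    },
)

data = response.json()
# Assuming the standard Chat Completions response shape:
print(data["choices"][0]["message"]["content"])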