AI Search Engine
The AI Web Search Engine is designed to retrieve real-time information from the internet. This solution processes user queries and returns relevant data from various online sources, making it useful for tasks that require up-to-date knowledge beyond static datasets. It supports two usage options:
Using six specialized API endpoints, each designed to search for only one specific type of information. These endpoints return structured responses, making them well suited for integration into specialized services (e.g., a weather widget). See the API references and examples for each supported information type on the subpages.
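As a rough illustration of this first option, the sketch below queries one of the specialized endpoints with the Python requests library. The host, endpoint path, HTTP method, and query parameter shown here are placeholders, not the actual API; the real paths, parameters, and response schemas are documented on the subpages.

import requests

API_KEY = "<YOUR_API_KEY>"
# Placeholder URL: substitute the real host and the specialized endpoint path
# from the relevant subpage (e.g. the weather search endpoint).
ENDPOINT = "https://<api-host>/v1/<specialized-search-endpoint>"

response = requests.get(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"query": "weather in Berlin today"},
)
response.raise_for_status()

# The structured JSON payload can be rendered directly by a specialized
# service such as a weather widget.
print(response.json())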
As a general chat completion solution (but with answers sourced from the internet): enter a query in the prompt and receive an internet-sourced answer, similar to asking a question in a search engine through a browser. See the API Schema below or check how this call is made in the Python example at the bottom of this page.
Creates a chat completion using a language model, allowing interactive conversation by predicting the next response based on the given chat history. This is useful for AI-driven dialogue systems and virtual assistants.
/v1/chat/completions
bagoodex/bagoodex-search-v1
If true, the new message will be prepended with the last message if they belong to the same role.
If true, the generation prompt will be added to the chat template. This is a parameter used by the chat template defined in the tokenizer config of the model.
If true, special tokens (e.g. BOS) will be added to the prompt on top of what is added by the chat template. For most models, the chat template takes care of adding the special tokens so this should be set to False (as is the default).
A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing "title" and "text" keys.
A Jinja template to use for this conversion. If this is not passed, the model's default chat template will be used instead.
Additional kwargs to pass to the template renderer. These will be accessible to the chat template.
Whether to include the stop string in the output. This is only applied when stop or stop_token_ids is set.
If specified, the output will follow the JSON schema.
If specified, the output will follow the regex pattern.
If specified, the output will be exactly one of the choices.
If specified, the output will follow the context free grammar.
If specified, will override the default guided decoding backend of the server for this specific request. If set, must be either 'outlines' or 'lm-format-enforcer'.
If specified, will override the default whitespace pattern for guided json decoding.
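Below is a minimal sketch of the chat-completion-style call described above. It assumes an OpenAI-compatible Python client pointed at the /v1/chat/completions route; the base URL and API key are placeholders to replace with your own values.

from openai import OpenAI

# Assumes an OpenAI-compatible endpoint; replace the placeholders with your
# provider's base URL and your API key.
client = OpenAI(
    base_url="https://<api-host>/v1",
    api_key="<YOUR_API_KEY>",
)

completion = client.chat.completions.create(
    model="bagoodex/bagoodex-search-v1",
    messages=[
        {"role": "user", "content": "What are the latest developments in quantum computing?"}
    ],
)

# The assistant message contains an answer assembled from live web sources.
print(completion.choices[0].message.content)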
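The optional request parameters listed above can be attached to the same call. As one illustration, the sketch below forwards a constrained-choice field through the client's extra_body escape hatch so the answer is exactly one of the listed strings. The field name guided_choice and the use of extra_body are assumptions for this sketch; check the API reference for the exact field names and submission mechanism your deployment expects.

from openai import OpenAI

client = OpenAI(base_url="https://<api-host>/v1", api_key="<YOUR_API_KEY>")

completion = client.chat.completions.create(
    model="bagoodex/bagoodex-search-v1",
    messages=[{"role": "user", "content": "Is it currently raining in London?"}],
    # Fields outside the standard OpenAI schema are forwarded via extra_body.
    # "guided_choice" (an assumed field name) constrains the output to be
    # exactly one of the listed strings, per the parameter descriptions above.
    extra_body={"guided_choice": ["yes", "no"]},
)

print(completion.choices[0].message.content)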