video-o1-reference-to-video
A variant of Kling’s O1 omni-model that takes several reference images along with an instructional prompt as input.
Set up your API Key
If you don’t have an API key for the AI/ML API yet, feel free to use our Quickstart guide.
API Schemas
Generating a video using this model involves sequentially calling two endpoints:
The first one creates a video generation task on the server and returns a generation ID.
The second one retrieves the generated video from the server using the generation ID returned by the first endpoint.
Below, you can find two corresponding API schemas and an example with both endpoint calls.
Create a video generation task and send it to the server
The text description of the scene, subject, or action to generate in the video.
Array of image URLs for multi-image-to-video generation.
The aspect ratio of the generated video. Possible values: 16:9.
The length of the output video in seconds. Possible values: 5.
Retrieve the generated video from the server
After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its generation_id, obtained from the endpoint described above.
If the video generation task status is complete, the response will include the final result — with the generated video URL and additional metadata.
Successfully generated video
Code Example
The code below creates a video generation task, then automatically polls the server every 15 seconds until it receives the video URL.
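As a minimal sketch of that flow, the helpers below build the task-creation payload and run the polling loop. The field names (`prompt`, `image_urls`, `aspect_ratio`, `duration`), the model identifier, and the response fields (`status`, `video.url`) are assumptions based on the parameter descriptions above; check the API schemas for the exact names. The status fetcher is passed in as a callable, so the loop itself does not depend on any particular HTTP client.

```python
import time

API_KEY = "<YOUR_AIMLAPI_KEY>"  # placeholder; use your real key


def build_payload(prompt, image_urls, aspect_ratio="16:9", duration=5):
    """Request body for the task-creation endpoint.

    Field names are assumed from the parameter list above,
    not confirmed against the live schema.
    """
    return {
        "model": "kling/video-o1-reference-to-video",  # hypothetical model id
        "prompt": prompt,
        "image_urls": image_urls,
        "aspect_ratio": aspect_ratio,
        "duration": duration,
    }


def poll_for_video(fetch_status, generation_id, interval=15, max_attempts=40):
    """Poll the status endpoint until the task completes.

    `fetch_status` is any callable taking a generation ID and returning
    the status response as a dict; injecting it keeps the loop testable
    and client-agnostic. The "completed"/"failed" status strings are
    assumptions.
    """
    for _ in range(max_attempts):
        data = fetch_status(generation_id)
        status = data.get("status")
        if status == "completed":
            return data["video"]["url"]
        if status in ("failed", "error"):
            raise RuntimeError(f"Generation failed: {data}")
        time.sleep(interval)  # wait 15 s between checks by default
    raise TimeoutError("Video generation did not finish in time")
```

In practice, `fetch_status` would wrap a GET request to the status endpoint with an `Authorization: Bearer <key>` header, and the task would first be created with a POST of `build_payload(...)` to the generation endpoint.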
Processing time: ~ 2 min 6 sec.
Generated video (1920x1080, without sound):