v1-standard/image-to-video
This model transforms a static image into a dynamic video clip.
Set Up Your API Key
If you don’t have an API key for the AI/ML API yet, feel free to use our Quickstart guide.
API Schemas
Generating a video with this model involves calling two endpoints in sequence:
The first creates a video generation task on the server and returns a generation ID.
The second retrieves the generated video from the server using the generation ID returned by the first endpoint.
Create a video generation task and send it to the server
The ratio and aspect_ratio parameters are deprecated. The aspect ratio of the generated video is solely determined by the aspect ratio of the input reference image.
Bearer key
A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video.
The text description of the scene, subject, or action to generate in the video.
A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video.
URL of the mask image for the static brush application area (a mask created by the user with the motion brush).
The description of elements to avoid in the generated video.
The length of the output video in seconds.
The CFG (Classifier-Free Guidance) scale controls how closely the model adheres to your prompt.
Default: 0.5
Customized Task ID
Successfully generated video
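As a sketch of the first step, the request below builds a minimal task body and submits it. The endpoint path, field names, and response key `id` are illustrative assumptions — confirm them against the API reference for your account before use.

```python
import json
import os
import urllib.request

# Assumption: endpoint path is illustrative; check the API reference.
API_URL = "https://api.aimlapi.com/v2/generate/video/kling/generation"
API_KEY = os.environ.get("AIMLAPI_KEY", "<YOUR_API_KEY>")

def build_payload(image_url: str, prompt: str) -> dict:
    """Assemble a minimal request body for a generation task."""
    return {
        "model": "v1-standard/image-to-video",
        "image_url": image_url,  # visual base / first frame of the video
        "prompt": prompt,        # scene, subject, or action to generate
        "duration": 5,           # output length in seconds
        "cfg_scale": 0.5,        # default prompt-adherence strength
    }

def create_task(image_url: str, prompt: str) -> str:
    """POST the task and return the generation ID from the response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(image_url, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["id"]  # assumed response key
```

Store the returned generation ID: the retrieval endpoint needs it in the next step.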
Retrieve the generated video from the server
After you send a video generation request, the task is added to a queue. Depending on the service load, generation can complete within seconds or take somewhat longer.
Bearer key
Successfully generated video
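Because the task sits in a queue, the client typically polls the retrieval endpoint until a terminal status appears. The sketch below assumes a `generation_id` query parameter and status names such as `completed` — these are illustrative, not taken from the schema.

```python
import json
import os
import time
import urllib.parse
import urllib.request

# Assumption: endpoint path and query parameter are illustrative.
FETCH_URL = "https://api.aimlapi.com/v2/generate/video/kling/generation"
API_KEY = os.environ.get("AIMLAPI_KEY", "<YOUR_API_KEY>")

def is_done(status: str) -> bool:
    """Terminal statuses; the exact names are assumptions."""
    return status in {"completed", "failed", "error"}

def poll_video(generation_id: str,
               interval_s: float = 10.0,
               timeout_s: float = 600.0) -> dict:
    """Poll until the task leaves the queue or the timeout expires."""
    query = urllib.parse.urlencode({"generation_id": generation_id})
    url = f"{FETCH_URL}?{query}"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {API_KEY}"}
        )
        with urllib.request.urlopen(req) as resp:
            data = json.load(resp)
        if is_done(data.get("status", "")):
            return data
        time.sleep(interval_s)  # generation time varies with service load
    raise TimeoutError(f"Generation {generation_id} did not finish in {timeout_s}s")
```

A ten-second polling interval is a reasonable default here: frequent enough to pick up a fast result, infrequent enough to avoid hammering the endpoint.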
Full Example: Generating and Retrieving the Video From the Server
We have a classic reproduction of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses.
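The end-to-end flow can be sketched as a single script: create the task from the reference image, then poll with the generation ID until the video is ready. The endpoint path, the image URL, the status value, and the response field names (`id`, `video.url`) are all placeholders and assumptions — verify them against the API reference.

```python
import json
import os
import time
import urllib.parse
import urllib.request

# Assumption: endpoint path is illustrative; check the API reference.
BASE_URL = "https://api.aimlapi.com/v2/generate/video/kling/generation"
API_KEY = os.environ.get("AIMLAPI_KEY", "<YOUR_API_KEY>")
HEADERS = {"Authorization": f"Bearer {API_KEY}",
           "Content-Type": "application/json"}

def post_json(url: str, payload: dict) -> dict:
    """Send a JSON POST request and decode the JSON response."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(), headers=HEADERS, method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def get_json(url: str) -> dict:
    """Send a GET request and decode the JSON response."""
    req = urllib.request.Request(url, headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_video_url(result: dict) -> str:
    """Pull the download link out of a completed response (field names assumed)."""
    return result["video"]["url"]

def main() -> None:
    # Step 1: create the generation task, using the painting as the first frame.
    task = post_json(BASE_URL, {
        "model": "v1-standard/image-to-video",
        "image_url": "https://example.com/mona-lisa.jpg",  # placeholder URL
        "prompt": "The Mona Lisa puts on glasses.",
    })
    generation_id = task["id"]  # assumed response key

    # Step 2: poll for the result using the generation ID.
    while True:
        query = urllib.parse.urlencode({"generation_id": generation_id})
        result = get_json(f"{BASE_URL}?{query}")
        if result.get("status") == "completed":  # assumed status name
            print("Video URL:", extract_video_url(result))
            break
        time.sleep(10)  # queue position depends on service load

# Running main() performs the two live API calls:
# main()
```

With a valid key and a real image URL, the script prints the download link for the finished clip.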
