v2-master/image-to-video
Model Overview
Compared to v1.6, this Kling model follows the prompt more closely and delivers more dynamic, visually appealing results.
Setup your API Key
If you don’t have an API key for the AI/ML API yet, feel free to use our Quickstart guide.
How to Make a Call
API Schemas
Create a video generation task and send it to the server
A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video.
URL of the image to be used as the last frame of the video.
URL of the image for Static Brush Application Area (Mask image created by users using the motion brush).
The description of elements to avoid in the generated video.
The length of the output video in seconds.
The CFG (Classifier-Free Guidance) scale controls how closely the model sticks to your prompt.
Default: 0.5
Customized Task ID
The text description of the scene, subject, or action to generate in the video.
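The parameters above can be sketched as a request body. The endpoint path, model id, and field names below are assumptions based on typical AI/ML API video endpoints, not taken from this page; verify them against the schema above before use.

```python
import os

import requests  # pip install requests

# NOTE: endpoint path, model id, and field names here are assumptions --
# check them against the API schema above.
API_URL = "https://api.aimlapi.com/v2/generate/video/kling/generation"

def build_payload(prompt: str, image_url: str, duration: int = 5) -> dict:
    """Assemble a request body for a Kling v2-master image-to-video task."""
    return {
        "model": "klingai/v2-master-image-to-video",  # assumed model id
        "prompt": prompt,          # scene / subject / action to generate
        "image_url": image_url,    # first frame: direct link or Base64 image
        "duration": duration,      # output length in seconds
        "cfg_scale": 0.5,          # prompt adherence (default 0.5)
    }

def create_task(payload: dict) -> dict:
    """POST the task to the server; returns the queued task's metadata."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['AIMLAPI_API_KEY']}"},
        json=payload,
    )
    resp.raise_for_status()
    return resp.json()
```

The response to `create_task` should contain the task's `generation_id`, which you pass to the retrieval endpoint described below.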
Retrieve the generated video from the server
After sending a request for video generation, the task is added to the queue. This endpoint lets you check the status of a video generation task using its generation_id, obtained from the endpoint described above.
If the video generation task status is complete, the response will include the final result, with the generated video URL and additional metadata.
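A minimal status check might look like the sketch below. The query-parameter name follows the `generation_id` mentioned above, but the endpoint path, status value, and response field names are assumptions; confirm them against the schema.

```python
import requests  # pip install requests

API_URL = "https://api.aimlapi.com/v2/generate/video/kling/generation"  # assumed path

def fetch_task(api_key: str, generation_id: str) -> dict:
    """Query the current state of a previously created generation task."""
    resp = requests.get(
        API_URL,
        params={"generation_id": generation_id},
        headers={"Authorization": f"Bearer {api_key}"},
    )
    resp.raise_for_status()
    return resp.json()

def extract_video_url(task: dict):
    """Return the video URL once the task reports a completed status,
    otherwise None. Both the status string and the result shape are assumed."""
    if task.get("status") == "completed":
        return (task.get("video") or {}).get("url")
    return None
```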
Code Example (Python)
The code below creates a video generation task, then automatically polls the server every 10 seconds until it finally receives the video URL.
This model produces highly detailed and natural-looking videos, so generation may take around 5–6 minutes for a 5-second video and 11–14 minutes for a 10-second video.
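The script itself did not survive extraction, so here is a hedged reconstruction of the create-then-poll flow it describes. The 10-second polling interval comes from the text above; the endpoint path, model id, status strings, and response field names are assumptions to verify against the schema.

```python
import os
import time

import requests  # pip install requests

BASE_URL = "https://api.aimlapi.com/v2/generate/video/kling/generation"  # assumed path
POLL_INTERVAL = 10  # seconds between status checks, per the description above

def generate_video(api_key: str, prompt: str, image_url: str,
                   timeout: float = 20 * 60) -> str:
    """Create a generation task, then poll every POLL_INTERVAL seconds
    until a video URL is available or the timeout expires."""
    headers = {"Authorization": f"Bearer {api_key}"}
    resp = requests.post(BASE_URL, headers=headers, json={
        "model": "klingai/v2-master-image-to-video",  # assumed model id
        "prompt": prompt,
        "image_url": image_url,  # first frame of the video
        "duration": 5,
    })
    resp.raise_for_status()
    generation_id = resp.json()["id"]  # assumed response field

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = requests.get(BASE_URL, headers=headers,
                            params={"generation_id": generation_id}).json()
        if task.get("status") == "completed":   # assumed status value
            return task["video"]["url"]         # assumed result shape
        if task.get("status") in ("failed", "error"):
            raise RuntimeError(f"Generation failed: {task}")
        time.sleep(POLL_INTERVAL)
    raise TimeoutError("Video generation did not finish in time")

if __name__ == "__main__":
    url = generate_video(os.environ["AIMLAPI_API_KEY"],
                         "A cat stretching in morning light",
                         "https://example.com/cat.jpg")
    print(url)
```

The generous default timeout reflects the generation times quoted above (up to roughly 14 minutes for a 10-second video).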