video-v2.6-pro/motion-control
Model Overview
A next-generation cinematic video generation model developed by KlingAI. It focuses on transferring motion from reference videos to arbitrary target characters, producing smooth, realistic movement, detailed visuals, and native audio when enabled.
How to Make a Call
API Schemas
Create a video generation task and send it to the server
Optional instructions that define background elements, including their appearance, timing in the frame, and behavior; they can also subtly adjust the character’s animation.
A direct link to an online image or a Base64-encoded local image that serves as the character reference for animation. The image must contain exactly one clearly visible character, who will be animated using the motion from the reference video provided in the video_url parameter. For optimal results, be sure the character’s proportions in the image match those in the video.
An HTTPS URL pointing to a video or a data URI containing a video. The character’s movements from this video will be applied to the character from the image provided in the image_url parameter. For best results, use a video with a single clearly visible character. If the video contains two or more characters, the motion of the character occupying the largest portion of the frame will be used for generation.
The orientation of the character in the generated video. It can be set to match either the image reference or the video reference:
- image: the character keeps the orientation of the person in the image; in this mode, the reference video duration must not exceed 10 seconds;
- video: the character follows the orientation of the character in the video; in this mode, the reference video duration must not exceed 30 seconds.
Default: image. Possible values: image, video.
Whether to keep the original audio from the reference video. Default: true.
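For illustration, a request body combining these parameters might look like the following sketch. The URLs are placeholders, and the model name is taken from this page’s title.

```python
# Sketch of a create-task request body; the URLs are placeholders.
payload = {
    "model": "video-v2.6-pro/motion-control",
    "image_url": "https://example.com/character.png",   # character to animate
    "video_url": "https://example.com/motion-ref.mp4",  # motion reference video
    "prompt": "At 00:03, a parrot briefly circles above the character.",  # optional
    "character_orientation": "image",  # reference video <= 10 s here; "video" allows <= 30 s
    "keep_audio": True,  # default; set to False to drop the reference audio
}
```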
Retrieve the generated video from the server
After you send a video generation request, the task is added to the queue. This endpoint lets you check the status of a video generation task using its id, obtained from the endpoint described above.
If the video generation task status is completed, the response includes the final result: the generated video URL and additional metadata.
Authorization: Bearer key
id: <REPLACE_WITH_YOUR_GENERATION_ID>
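As a sketch of a one-off status check (the endpoint URL and response field names are assumptions, not the documented schema):

```python
import requests

API_KEY = "<YOUR_API_KEY>"
GENERATION_ID = "<REPLACE_WITH_YOUR_GENERATION_ID>"
# Placeholder path: substitute the actual retrieval endpoint.
STATUS_URL = f"https://api.example.com/v2/generate/video/{GENERATION_ID}"

resp = requests.get(STATUS_URL, headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()
task = resp.json()

if task.get("status") == "completed":
    print("Video URL:", task["video"]["url"])  # result field name is an assumption
else:
    print("Current status:", task.get("status"))
```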
Code Example
1. Provide the URL of the image containing the character you want to animate.
2. Provide the URL of the video where another character performs the movements you want to transfer to the animated character.
3. If needed, describe minor background details or additional objects in the frame using the prompt parameter. Example: "At 00:03, a brightly colored parrot flies in from the left, briefly circles above the character once, and then hurries off to the right."
4. Set the character_orientation parameter to image or video, depending on whether you want to use the character’s orientation from the image reference or from the video reference.
5. By default, the model uses the audio track from the reference video. You can disable this behavior by setting the keep_audio parameter to false.
The code below creates a video generation task, then automatically polls the server every 15 seconds until it receives the video URL.
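A minimal Python sketch of this flow, using requests for HTTP; the base URL, endpoint paths, and response field names are placeholders to adapt to the API schemas above.

```python
# A minimal sketch of the create-then-poll flow. The base URL, endpoint
# paths, and response field names are placeholders, not the official API.
import time

import requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.example.com/v2/generate/video"  # placeholder
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: create the generation task.
payload = {
    "model": "video-v2.6-pro/motion-control",
    "image_url": "https://example.com/character.png",
    "video_url": "https://example.com/motion-ref.mp4",
    "prompt": (
        "At 00:03, a brightly colored parrot flies in from the left, "
        "briefly circles above the character once, and then hurries off "
        "to the right."
    ),
    "character_orientation": "image",
    "keep_audio": False,
}
create_resp = requests.post(BASE_URL, headers=HEADERS, json=payload)
create_resp.raise_for_status()
generation_id = create_resp.json()["id"]  # field name is an assumption
print("Task created, id:", generation_id)

# Step 2: poll every 15 seconds until the task completes.
while True:
    status_resp = requests.get(f"{BASE_URL}/{generation_id}", headers=HEADERS)
    status_resp.raise_for_status()
    task = status_resp.json()
    status = task.get("status")
    if status == "completed":
        print("Video URL:", task["video"]["url"])  # field name is an assumption
        break
    if status in ("failed", "error"):
        raise RuntimeError(f"Generation failed: {task}")
    time.sleep(15)
```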
Processing time: ~ 5 min 56 sec.
Generated video (1936x1072, without sound, the character’s orientation matches the orientation from the image reference):
"character_orientation": "image"
Generated video (1936x1072, without sound, the character’s orientation matches the orientation from the video reference):
"character_orientation": "video"