video-o1-video-to-video-reference

This documentation is valid for the following list of our models:

  • klingai/video-o1-video-to-video-reference

Model Overview

This model performs video-to-video editing by applying a reference style or identity to source footage, transferring appearance across clips while preserving the motion and structure of the original video. It is well suited for maintaining consistent characters, branding elements, or artistic style across multiple outputs derived from related source videos.

How to Make a Call

Step-by-Step Instructions

1. Setup You Can’t Skip

▪️ Create an Account: Visit the AI/ML API website and create an account (if you don’t have one yet).
▪️ Generate an API Key: After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled in the UI.
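If you prefer not to hard-code the key, a common pattern is to export it as an environment variable and read it at runtime. The variable name AIMLAPI_API_KEY below is only an illustrative choice, not something the API requires:

```python
import os

# Illustrative variable name; set it in your shell first, e.g.:
#   export AIMLAPI_API_KEY="<YOUR_AIMLAPI_KEY>"
api_key = os.environ["AIMLAPI_API_KEY"]
print("Key loaded:", bool(api_key))
```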

2. Copy the code example

At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3. Modify the code example

▪️ Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key.
▪️ Adjust the input fields used by this model (for this endpoint, the required prompt and video_url, plus any optional parameters) to match your request.

4. (Optional) Adjust other optional parameters if needed

Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding API schema, which lists all available parameters and usage notes.

5. Run your modified code

Run your modified code inside your development environment. The initial request returns quickly, but video generation itself runs as a queued task that can take several minutes; see the status-check endpoint and the example response below.

API Schema

POST /v2/video/generations

Body

model · string · enum · Required

prompt · string · max: 2500 · Required

The text description of the scene, subject, or action to generate in the video.

video_url · string · uri · Required

An HTTPS URL pointing to a video, or a data URI containing a video. This video will be used as a reference during generation.

image_list · string · uri[] · min: 1 · max: 4 · Optional

Array of image URLs for multi-image-to-video generation.

aspect_ratio · string · enum · Optional

The aspect ratio of the generated video. Default: 16:9

duration · integer · enum · Optional

The length of the output video in seconds. Default: 5

keep_audio · boolean · Optional

Whether to keep the original audio from the source video. Default: false

Responses

200 · Success · application/json
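To make the schema concrete, here is a minimal request sketch in Python using the requests library. It assumes the https://api.aimlapi.com base URL and a Bearer-token Authorization header; the prompt and source video URL are placeholders.

```python
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL; check your dashboard if yours differs

payload = {
    "model": "klingai/video-o1-video-to-video-reference",
    "prompt": "Restyle the clip as a hand-painted watercolor animation.",
    "video_url": "https://example.com/source-clip.mp4",  # placeholder source video
    # Optional parameters shown with their documented defaults:
    "aspect_ratio": "16:9",
    "duration": 5,
    "keep_audio": False,
}

response = requests.post(
    f"{BASE_URL}/v2/video/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
)
response.raise_for_status()
data = response.json()
print(data)  # the response contains the generation ID used to poll for the result
```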

Retrieve the generated video from the server

After you send a video generation request, the task is added to the queue. This endpoint lets you check the status of a video generation task using its generation_id, obtained from the endpoint described above. If the task is complete, the response includes the final result: the generated video URL and additional metadata.

GET /v2/video/generations

Query parameters

generation_id · string · Required

Responses

200 · Successfully generated video · application/json
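A minimal status-check sketch under the same assumptions (base URL and Bearer-token auth); replace the placeholder with the generation ID returned when the task was created:

```python
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL

response = requests.get(
    f"{BASE_URL}/v2/video/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"generation_id": "<GENERATION_ID>"},  # placeholder ID from the POST response
)
response.raise_for_status()
print(response.json())  # task status and, once complete, the generated video URL
```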

Code Example

The code below creates a video generation task, then automatically polls the server every 15 seconds until it receives the video URL.
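The original snippet is not reproduced on this page, so the following Python sketch illustrates the described flow under a few assumptions: the https://api.aimlapi.com base URL, Bearer-token authorization, and response fields named id and status (inspect the actual JSON if your responses use different keys).

```python
import time
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL


def create_task() -> str:
    """Submit a video-to-video generation task and return its generation ID."""
    response = requests.post(
        f"{BASE_URL}/v2/video/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "klingai/video-o1-video-to-video-reference",
            "prompt": "Apply the reference style to the source footage.",
            "video_url": "https://example.com/source-clip.mp4",  # placeholder
        },
    )
    response.raise_for_status()
    data = response.json()
    # Field name assumed; print data to see the exact shape of your response.
    return data["id"]


def poll_task(generation_id: str, interval: int = 15) -> dict:
    """Poll the status endpoint every `interval` seconds until the task finishes."""
    while True:
        response = requests.get(
            f"{BASE_URL}/v2/video/generations",
            headers={"Authorization": f"Bearer {API_KEY}"},
            params={"generation_id": generation_id},
        )
        response.raise_for_status()
        data = response.json()
        status = data.get("status")  # status values assumed; inspect the real payload
        print("Current status:", status)
        if status in ("completed", "failed", "error"):
            return data
        time.sleep(interval)


if __name__ == "__main__":
    gen_id = create_task()
    result = poll_task(gen_id)
    print(result)  # on success, contains the generated video URL and metadata
```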

Response

Processing time: ~ 3 min 23 sec.

Generated video (1920x1080, without sound):
