video-o1-video-to-video-edit

This documentation is valid for the following list of our models:

  • klingai/video-o1-video-to-video-edit

The model transforms an input video according to a natural-language text prompt, altering style, visual attributes, or the overall look of the scene while preserving the original motion and structural layout of the footage.

How to Make a Call

Step-by-Step Instructions

1️⃣ Setup You Can’t Skip

▪️ Create an Account: Visit the AI/ML API website and create an account (if you don’t have one yet).

▪️ Generate an API Key: After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled in the UI.

2️⃣ Copy the code example

At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3️⃣ Modify the code example

▪️ Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key.

▪️ Adjust the input fields used by this model (for this model, prompt and video_url; see the API schema below) to match your request.
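As a quick sketch of where the key goes, assuming the API expects it as a Bearer token in an Authorization header (the base URL shown is also an assumption; use the one from your dashboard):

```python
API_KEY = "<YOUR_AIMLAPI_KEY>"        # replace with your actual key
BASE_URL = "https://api.aimlapi.com"  # assumed base URL

# Assumed auth scheme: the key travels as a Bearer token.
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json",
}
```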

4️⃣ (Optional) Adjust other optional parameters if needed

Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding API schema, which lists all available parameters and usage notes.

5️⃣ Run your modified code

Run the code in your development environment. The task-creation call itself usually returns within a few seconds, but generating the video can take several minutes (see the processing time in the example below), which is why the code example polls for the result.

API Schemas

Generating a video using this model involves sequentially calling two endpoints:

  • The first one is for creating and sending a video generation task to the server (returns a generation ID).

  • The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and an example with both endpoint calls.

Create a video generation task and send it to the server

POST /v2/video/generations

Body (application/json):

  • model (string · enum, required). Possible values: klingai/video-o1-video-to-video-edit.

  • prompt (string · max: 2500, required). The text description of the scene, subject, or action to generate in the video.

  • video_url (string · uri, required). An HTTPS URL pointing to a video, or a data URI containing a video. This video will be used as a reference during generation.

  • image_list (string · uri[] · min: 1 · max: 7, optional). Array of image URLs for multi-image-to-video generation.

  • keep_audio (boolean, optional, default: false). Whether to keep the original audio from the video.

Responses:

  • 200 Success (application/json)
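For illustration, a minimal create-task payload built from the schema above might look like this (the prompt text and source video URL are hypothetical placeholders):

```python
# Required fields plus the optional keep_audio flag; values are placeholders.
payload = {
    "model": "klingai/video-o1-video-to-video-edit",
    "prompt": "Repaint the footage as a hand-drawn watercolor animation",
    "video_url": "https://example.com/source-clip.mp4",  # HTTPS URL or data URI
    "keep_audio": False,  # optional; defaults to false
}
```

Posting this body to /v2/video/generations returns a generation ID, which the second endpoint below uses to fetch the result.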

Retrieve the generated video from the server

After you send a video generation request, the task is added to the queue. This endpoint lets you check the status of a video generation task using its generation_id, obtained from the endpoint described above. If the task status is complete, the response includes the final result, with the generated video URL and additional metadata.

GET /v2/video/generations

Query parameters:

  • generation_id (string, required). The generation ID returned by the task-creation endpoint.

Responses:

  • 200 Success (application/json). Successfully generated video.

Code Example

The code below creates a video generation task, then automatically polls the server every 15 seconds until it finally receives the video URL.
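A minimal Python sketch of that flow, assuming the base URL https://api.aimlapi.com, Bearer-token authorization, and response field names (id, status) modeled on typical task-queue APIs; verify them against the actual response:

```python
import time

import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL


def create_task() -> str:
    """Create a video generation task and return its generation ID."""
    response = requests.post(
        f"{BASE_URL}/v2/video/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "klingai/video-o1-video-to-video-edit",
            "prompt": "Repaint the footage as a hand-drawn watercolor animation",
            "video_url": "https://example.com/source-clip.mp4",  # placeholder
        },
    )
    response.raise_for_status()
    # The "id" field name is an assumption; inspect the real response body.
    return response.json()["id"]


def wait_for_video(generation_id: str) -> dict:
    """Poll the retrieval endpoint every 15 seconds until the task finishes."""
    while True:
        response = requests.get(
            f"{BASE_URL}/v2/video/generations",
            headers={"Authorization": f"Bearer {API_KEY}"},
            params={"generation_id": generation_id},
        )
        response.raise_for_status()
        data = response.json()
        # Status names are assumptions modeled on typical generation queues.
        if data.get("status") in ("completed", "failed"):
            return data
        print(f"Still processing (status: {data.get('status')}); retrying in 15 s...")
        time.sleep(15)


if __name__ == "__main__":
    result = wait_for_video(create_task())
    print(result)  # on success, expected to include the generated video URL
```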

Response

Processing time: ~ 3 min 55 sec.

Generated video (1940x1068, without sound).
