act_two

This documentation applies to the following models:

  • runway/act_two

This video-to-video model lets you animate characters using reference performance videos. Simply provide a video of someone acting out a scene along with a character reference (image or video), and Act-Two will transfer the performance to your character — including natural motion, speech, and facial expressions.

How to Make a Call

Step-by-Step Instructions

Generating a video with this model involves calling two endpoints in sequence:

  • The first creates a video generation task on the server and returns a generation ID.

  • The second requests the generated video from the server using the generation ID returned by the first endpoint.

Below, you can find both corresponding API schemas.

API Schemas

Video Generation

You can generate a video using this API. In the basic setup, you only need an image or video URL for the character (character), and a video URL for body movements and/or facial expressions (reference).

post
Body
model · string · enum · Required. Possible values:
character · one of · Required

The character to control. You can either provide a video or an image. A visually recognizable face must be visible and stay within the frame.

or
frame_size · string · enum · Optional

The width and height of the video.

Default: 1280:720. Possible values:
body_control · boolean · Optional

A boolean indicating whether to enable body control. When enabled, non-facial movements and gestures will be applied to the character in addition to facial expressions.

expression_intensity · integer · min: 1 · max: 5 · Optional

An integer between 1 and 5 (inclusive). A larger value increases the intensity of the character's expression.

Default: 3
seed · integer · max: 4294967295 · Optional

Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen.

Responses
post
/v2/video/generations
200 · Success
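As a sketch, the generation request above can be sent from Python with the `requests` library. The base URL `https://api.aimlapi.com` and the plain-URL shape of the `character` and `reference` fields are assumptions for illustration; the model name, endpoint path, and parameter names come from the schema above.

```python
import requests  # third-party: pip install requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL

def build_payload(character_url: str, reference_url: str, **options) -> dict:
    """Build the request body from the schema above. Optional fields
    (frame_size, body_control, expression_intensity, seed) pass
    through as keyword arguments."""
    return {
        "model": "runway/act_two",
        "character": character_url,
        "reference": reference_url,
        **options,
    }

def create_generation(character_url: str, reference_url: str, **options) -> str:
    """POST the generation task; the response carries the task's id."""
    resp = requests.post(
        f"{BASE_URL}/v2/video/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=build_payload(character_url, reference_url, **options),
    )
    resp.raise_for_status()
    return resp.json()["id"]
```

Only `model`, `character`, and `reference` are required; everything else rides along as an optional field.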

Retrieve the generated video from the server

After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its id, obtained from the endpoint described above. If the video generation task status is completed, the response will include the final result — with the generated video URL and additional metadata.

get
Authorizations
Authorization · string · Required

Bearer key

Query parameters
generation_id · string · Required. Example: <REPLACE_WITH_YOUR_GENERATION_ID>
Responses
get
/v2/video/generations
200 · Success
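A matching Python sketch for the status check. The base URL is again an assumption; the `generation_id` query parameter and the `completed` status come from the schema and the description above.

```python
import requests  # third-party: pip install requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL

def get_generation(generation_id: str) -> dict:
    """GET the task status; once completed, the body includes the
    generated video URL and additional metadata."""
    resp = requests.get(
        f"{BASE_URL}/v2/video/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"generation_id": generation_id},
    )
    resp.raise_for_status()
    return resp.json()

def is_completed(task: dict) -> bool:
    """True once the task's status field reads 'completed'."""
    return task.get("status") == "completed"
```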

Full Example: Generating and Retrieving the Video From the Server

How it works

As the character reference, we will use a scan of a famous Leonardo da Vinci painting. For the motion reference, we will use a video of a cheerful woman dancing, generated with the kling-video/v1.6/pro/text-to-video model.

Character reference image
Motion reference video

We combine both POST and GET methods above in one program: first it sends a video generation request to the server, then it checks for results every 10 seconds.
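A sketch of that program, assuming the `requests` library, the base URL `https://api.aimlapi.com`, and plain-URL `character`/`reference` fields; the endpoint paths, parameter names, and the 10-second polling interval follow the description above.

```python
import time
import requests  # third-party: pip install requests

API_KEY = "<YOUR_API_KEY>"
BASE_URL = "https://api.aimlapi.com"  # assumed base URL
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def poll(fetch, interval: int = 10, sleep=time.sleep) -> dict:
    """Call fetch() until the task reports 'completed', pausing
    `interval` seconds between checks."""
    while True:
        task = fetch()
        if task.get("status") == "completed":
            return task
        sleep(interval)

def main():
    # Step 1: create the video generation task (POST).
    resp = requests.post(
        f"{BASE_URL}/v2/video/generations",
        headers=HEADERS,
        json={
            "model": "runway/act_two",
            "character": "<CHARACTER_IMAGE_OR_VIDEO_URL>",
            "reference": "<PERFORMANCE_VIDEO_URL>",
        },
    )
    resp.raise_for_status()
    gen_id = resp.json()["id"]

    # Step 2: check for results (GET) every 10 seconds.
    task = poll(lambda: requests.get(
        f"{BASE_URL}/v2/video/generations",
        headers=HEADERS,
        params={"generation_id": gen_id},
    ).json())
    print(task)  # includes the generated video URL and metadata

if __name__ == "__main__":
    main()
```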

Response

Processing time: ~45 sec.

Original: 784×1168

Low-res GIF preview:

