v1.6-pro/effects

This documentation is valid for the following list of our models:

  • klingai/kling-video-v1.6-pro-effects

Model Overview

A specialized video model that generates short clips from reference images of people, applying one of several preset scenarios: two people hugging, kissing, or making a heart shape with their hands (requires two reference images), or a single person being humorously squished like clay or inflated like a balloon (requires one reference image).

Generated Video Examples (low-res GIF previews)

In principle, you can get acceptable results for the squish and expansion effects even with photos of animals, inanimate objects, or landscapes. The output may look unusual, but the video will be generated.

However, if you try to use such photos with the hug, kiss, or heart_gesture effects, you’ll receive an error saying “Could not detect face in the image”.

Processing complete:
 {'id': '50f3e8ae-3d88-482f-95ea-7faa4799f60f:kling-video/v1.6/pro/effects', 'status': 'error', 'error': {'detail': [{'loc': ['body', 'image_url'], 'msg': 'Could not detect face in the image', 'type': 'face_detection_error', 'input': 'https://rgo.ru/upload/s34web.imageadapter/668e00e0ed33855a9c79de12d2f88206/2131465.jpg'}]}}
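
If your own code needs to handle this case rather than just print it, you can branch on the returned status field. A minimal sketch, assuming the error payload is shaped as in the example above (the helper name is ours, not part of the API):

# Hypothetical helper: report either the video URL or the error details,
# based on the dict returned when polling the generation endpoint.
def report_result(response_data):
    if response_data.get("status") == "error":
        for item in response_data.get("error", {}).get("detail", []):
            # e.g. "face_detection_error: Could not detect face in the image"
            print(f"{item.get('type')}: {item.get('msg')} (input: {item.get('input')})")
    else:
        print("Video URL:", response_data.get("video", {}).get("url"))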

Set Up Your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our Quickstart guide.
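
For quick tests you can paste the key straight into the code, as the example below does. For anything you share or commit, a common pattern is to read the key from an environment variable instead; a minimal sketch (the variable name AIMLAPI_API_KEY is our choice, not something the API requires):

import os

# Read the key from the environment; run export AIMLAPI_API_KEY=<YOUR_AIMLAPI_KEY>
# in your shell first. The variable name here is arbitrary.
api_key = os.environ["AIMLAPI_API_KEY"]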

How to Make a Call

Generating a video using this model involves making two sequential API calls:

  • The first call creates a video generation task and sends it to the server; it returns a generation ID.

  • The second call retrieves the generated video from the server, using the generation ID received from the first endpoint.

Below, you can find both corresponding API schemas.

API Schema

Create a video generation task and send it to the server

POST /v2/generate/video/kling/generation

Body parameters:

  • model (string, required). Possible value: klingai/kling-video-v1.6-pro-effects.

  • duration (integer, optional). The duration of the generated video in seconds. Possible values: 5, 10.

  • image_url (string · uri, or array of string · uri, required). For the hug, kiss, and heart_gesture effects, pass an array containing exactly two image URLs. For squish or expansion, only one image URL is required.

  • effect_scene (string, required). The effect scene to use for the video generation. Possible values: hug, kiss, heart_gesture, squish, expansion.

Response: 201 Success (no response schema documented).

Example request:

POST /v2/generate/video/kling/generation HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 116

{
  "model": "klingai/kling-video-v1.6-pro-effects",
  "duration": 5,
  "image_url": "https://example.com",
  "effect_scene": "hug"
}

Retrieve the generated video from the server

GET /v2/generate/video/kling/generation

Query parameters:

  • generation_id (string, required). The generation ID returned by the POST call above.

Response: 200 Success (no response schema documented).

Example request:

GET /v2/generate/video/kling/generation HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Accept: */*
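
Because the number of required reference images depends on effect_scene, a quick client-side check can catch a mismatch before the request is sent. A minimal sketch based on the parameter rules above (the helper is hypothetical, not part of the API):

# Hypothetical pre-flight check: verify the image count matches the chosen
# effect_scene, using the rules from the body-parameter docs above.
TWO_IMAGE_EFFECTS = {"hug", "kiss", "heart_gesture"}
ONE_IMAGE_EFFECTS = {"squish", "expansion"}

def check_payload(effect_scene, image_url):
    urls = image_url if isinstance(image_url, list) else [image_url]
    if effect_scene in TWO_IMAGE_EFFECTS and len(urls) != 2:
        raise ValueError(f"{effect_scene} requires exactly two image URLs")
    if effect_scene in ONE_IMAGE_EFFECTS and len(urls) != 1:
        raise ValueError(f"{effect_scene} requires exactly one image URL")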

After a video generation request is sent, the task is added to a queue. Depending on the service's load, generation may finish in seconds or take somewhat longer.

Code Example (Python)

In this example, we'll try to make two people from different photos (provided via URLs) form a romantic heart shape with their hands. No prompt is required — the effect is selected by setting the appropriate value for the effect_scene parameter.

For effects that involve two images, the first one will always appear on the left side of the video, and the second one on the right. Therefore, to achieve the most natural-looking result, you may sometimes need to swap the image order.

In videos featuring a single person (the squish and expansion effects), an audio track is also generated: a mix of music and material-interaction sounds such as rubber squeaks.

Input image preview

The code below creates a video generation task, then polls the server every 10 seconds until it receives the video URL or the 10-minute timeout is reached.

import requests
import time

base_url = "https://api.aimlapi.com/v2"
api_key = "<YOUR_AIMLAPI_KEY>"  # insert your AIML API key here

ref_img_url1 = "https://images.pexels.com/photos/733872/pexels-photo-733872.jpeg"
ref_img_url2 = "https://storage.googleapis.com/falserverless/juggernaut_examples/QEW5VrzccxGva7mPfEXjf.png"

# Creating and sending a video generation task to the server
def generate_video():
    url = f"{base_url}/generate/video/kling/generation"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "klingai/kling-video-v1.6-pro-effects",
        "image_url": [ref_img_url1, ref_img_url2],
        "duration": 5,
        "effect_scene": "heart_gesture",
    }

    response = requests.post(url, json=data, headers=headers)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
        return None
    return response.json()
    

# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/generate/video/kling/generation"
    params = {
        "generation_id": gen_id,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

    response = requests.get(url, params=params, headers=headers)
    return response.json()



def main():
    # Run video generation and get a task ID
    gen_response = generate_video()
    gen_id = gen_response.get("id") if gen_response else None
    print("Gen_ID:  ", gen_id)

    # Try to retrieve the video from the server every 10 seconds
    if gen_id:
        start_time = time.time()

        timeout = 600  # give up after 10 minutes
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            print("Status:", status)

            if status in ("waiting", "active", "queued", "generating"):
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
Response
Gen_ID:   91bc5296-f0f9-4336-ab6a-60f993cbb971:kling-video/v1.6/pro/effects
Status: generating
Still waiting... Checking again in 10 seconds.
Status: generating
Still waiting... Checking again in 10 seconds.
(these two lines repeat until the status changes)
Status: completed
Processing complete:
 {'id': '91bc5296-f0f9-4336-ab6a-60f993cbb971:kling-video/v1.6/pro/effects', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/koala/cSlcOq_yBcuGDvbf9BjGU_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 8554108}}
Generated Video (GIF Preview)
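
Once the status is completed, the response contains the clip's URL under video.url. A minimal sketch for saving it locally, assuming the response shape shown above (the helper name is ours):

import requests

# Hypothetical follow-up step: download the finished clip to disk.
# `response_data` is the dict returned by main() in the example above.
def download_video(response_data, filename="output.mp4"):
    video_url = response_data["video"]["url"]
    video = requests.get(video_url, timeout=60)
    video.raise_for_status()
    with open(filename, "wb") as f:
        f.write(video.content)
    print(f"Saved {filename} ({len(video.content)} bytes)")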
