OmniHuman 1.5
From a single image and a voice track, this model produces expressive character animations aligned with the speech’s rhythm, intonation, and meaning.
Set Up Your API Key
If you don’t have an API key for the AI/ML API yet, feel free to use our Quickstart guide.
How to Make a Call
Full Example: Generating and Retrieving the Video From the Server
The code below creates a video generation task, then polls the server every 10 seconds until the video URL is ready or a 10-minute timeout is reached.
Python

import requests
import time

# Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key
api_key = "<YOUR_AIMLAPI_KEY>"
base_url = "https://api.aimlapi.com/v2"

# Creating and sending a video generation task to the server
def generate_video():
    url = f"{base_url}/video/generations"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    data = {
        "model": "bytedance/omnihuman/v1.5",
        "image_url": "https://cdn.aimlapi.com/assets/content/office_man.png",
        "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3",
    }

    response = requests.post(url, json=data, headers=headers)
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
        return None
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/video/generations"
    params = {
        "generation_id": gen_id,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

    response = requests.get(url, params=params, headers=headers)
    return response.json()

def main():
    # Running video generation and getting a task id
    gen_response = generate_video()
    if not gen_response:
        print("Failed to start generation")
        return None
    gen_id = gen_response.get("id")
    print("Generation ID:", gen_id)

    # Trying to retrieve the video from the server every 10 seconds
    if gen_id:
        start_time = time.time()
        timeout = 600  # 10 minutes
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            print("Status:", status)

            if status in ("waiting", "active", "queued", "generating"):
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
        return None

if __name__ == "__main__":
    main()

JavaScript

const https = require("https");
const { URL } = require("url");

// Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key
const apiKey = "<YOUR_AIMLAPI_KEY>";
const baseUrl = "https://api.aimlapi.com/v2";

// Creating and sending a video generation task to the server
function generateVideo(callback) {
  const data = JSON.stringify({
    model: "bytedance/omnihuman/v1.5",
    image_url: "https://cdn.aimlapi.com/assets/content/office_man.png",
    audio_url: "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3",
  });

  const url = new URL(`${baseUrl}/video/generations`);
  const options = {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(data),
    },
  };

  const req = https.request(url, options, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      if (res.statusCode >= 400) {
        console.error(`Error: ${res.statusCode} - ${body}`);
        callback(null);
      } else {
        const parsed = JSON.parse(body);
        callback(parsed);
      }
    });
  });

  req.on("error", (err) => console.error("Request error:", err));
  req.write(data);
  req.end();
}

// Requesting the result of the task from the server using the generation_id
function getVideo(genId, callback) {
  const url = new URL(`${baseUrl}/video/generations`);
  url.searchParams.append("generation_id", genId);

  const options = {
    method: "GET",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
  };

  const req = https.request(url, options, (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      const parsed = JSON.parse(body);
      callback(parsed);
    });
  });

  req.on("error", (err) => console.error("Request error:", err));
  req.end();
}

// Initiates video generation and checks the status every 10 seconds until completion or timeout
function main() {
  generateVideo((genResponse) => {
    if (!genResponse || !genResponse.id) {
      console.error("Failed to start generation");
      return;
    }

    const genId = genResponse.id;
    console.log("Generation ID:", genId);

    const startTime = Date.now();
    const timeout = 600000; // 10 minutes

    const checkStatus = () => {
      if (Date.now() - startTime > timeout) {
        console.log("Timeout reached. Stopping.");
        return;
      }

      getVideo(genId, (responseData) => {
        if (!responseData) {
          console.error("Error: No response from API");
          return;
        }

        const status = responseData.status;
        console.log("Status:", status);

        if (["waiting", "active", "queued", "generating"].includes(status)) {
          console.log("Still waiting... Checking again in 10 seconds.");
          setTimeout(checkStatus, 10000);
        } else {
          console.log("Processing complete:\n", responseData);
        }
      });
    };

    checkStatus();
  });
}

main();

Original (1920x1088, with sound)
API Schemas
Create a video generation task and send it to the server
You can create a video with this API by providing a reference image of a character and an audio file. The character will deliver the audio with full lip-sync and natural gestures. This POST request creates and submits a video generation task to the server and returns a generation ID.
image_url: A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame of the video (a Base64 option is sketched after the request sample below).
audio_url: The URL of the audio file for lip-sync animation. The model detects the spoken parts and syncs the character's mouth to them. The audio must be under 30 seconds long.
POST /v2/video/generations HTTP/1.1
Host: api.aimlapi.com
Content-Type: application/json
Accept: */*
Content-Length: 104
{
"model": "bytedance/omnihuman/v1.5",
"image_url": "https://example.com",
"audio_url": "https://example.com"
}
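The request sample above uses placeholder URLs. To pass a local image instead of a hosted one, you can Base64-encode it, as mentioned in the image_url description. The sketch below shows one way to do this in Python; the data-URI form (data:image/png;base64,...) and the local file name are assumptions, so check the API reference for the exact encoding the endpoint accepts.

import base64
import requests

api_key = "<YOUR_AIMLAPI_KEY>"

# Read a local image (hypothetical file name) and Base64-encode it
with open("character.png", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "bytedance/omnihuman/v1.5",
    # Assumption: the endpoint accepts a Base64 data URI for image_url
    "image_url": f"data:image/png;base64,{encoded_image}",
    "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3",
}

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
)
print(response.json())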
Retrieve the generated video from the server
After you send a video generation request, the task is added to the queue. This endpoint lets you check the status of a video generation task using its generation_id, obtained from the endpoint described above. Once the task is complete, the response includes the final result: the generated video URL and additional metadata.
GET /v2/video/generations?generation_id=text HTTP/1.1
Host: api.aimlapi.com
Accept: */*
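Once the task reaches a final status, the response carries the generated video URL. The sketch below downloads that file with Python; the field path used for the URL (video.url) is an assumption, so inspect the completed response and adjust it if needed.

import requests

def download_video(response_data, out_path="omnihuman_result.mp4"):
    # Assumption: the completed response exposes the URL as response_data["video"]["url"];
    # adjust this path to match the payload you actually receive.
    video_url = (response_data.get("video") or {}).get("url")
    if not video_url:
        print("No video URL found in response:", response_data)
        return
    with requests.get(video_url, stream=True) as r:
        r.raise_for_status()
        with open(out_path, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    print("Saved video to", out_path)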