minimax-music [legacy]

This documentation is valid for the following list of our models:

  • minimax-music

Model Overview

An advanced AI model that generates diverse high-quality audio compositions by analyzing and reproducing musical patterns, rhythms, and vocal styles from the reference track. Refine the process using a text prompt.

How to Make a Call

Step-by-Step Instructions

1. Setup You Can’t Skip

▪️ Create an Account: Visit the AI/ML API website and create an account (if you don’t have one yet).
▪️ Generate an API Key: After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

2. Copy the code example

At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3. Modify the code example

▪️ Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key from your account.
▪️ Provide your lyrics via the prompt parameter. The model will use them to generate a song.

▪️ Via the reference_audio_url parameter, provide a URL to a reference track from which the model will extract the genre, style, tempo, vocal and instrument timbres, and the overall mood of the piece.

4. Adjust other parameters if needed (optional)

For this model, prompt and reference_audio_url are the required parameters (we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding API schema ("Generate a music sample"), which lists all available parameters along with notes on how to use them.

5. Run your modified code

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds 1 minute.

API Schemas

Generate a music sample

This endpoint creates and sends a music generation task to the server — and returns a generation ID and the task status.

POST /v2/generate/audio

Authorizations: Bearer <YOUR_AIMLAPI_KEY>

Body parameters:

▪️ model (enum, required). Possible value: minimax-music
▪️ prompt (string, required). The lyrics for the song.
▪️ reference_audio_url (string · uri, required). Reference song; should contain music and vocals. Must be a .wav or .mp3 file longer than 15 seconds.

Responses: default (application/json)
Example request:
POST /v2/generate/audio HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Content-Type: application/json
Accept: */*
Content-Length: 85

{
  "model": "minimax-music",
  "prompt": "text",
  "reference_audio_url": "https://example.com"
}
Example response:
{
  "id": "text",
  "status": "queued"
}
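For reference, the request above can also be assembled in plain Python before sending. The helper names below (build_generation_request, submit_generation) are illustrative, not part of the API, and <YOUR_AIMLAPI_KEY> is a placeholder:

```python
import requests

AIMLAPI_KEY = "<YOUR_AIMLAPI_KEY>"  # placeholder: insert your real key

def build_generation_request(prompt, reference_audio_url):
    # Assemble the POST body and headers exactly as the schema above describes.
    payload = {
        "model": "minimax-music",
        "prompt": prompt,
        "reference_audio_url": reference_audio_url,
    }
    headers = {
        "Authorization": f"Bearer {AIMLAPI_KEY}",
        "Content-Type": "application/json",
    }
    return payload, headers

def submit_generation(prompt, reference_audio_url):
    # Send the generation task; the server replies with {'id': ..., 'status': ...}.
    payload, headers = build_generation_request(prompt, reference_audio_url)
    response = requests.post(
        "https://api.aimlapi.com/v2/generate/audio",
        json=payload,
        headers=headers,
    )
    response.raise_for_status()
    return response.json()
```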

Retrieve the generated music sample from the server

After you send a music generation request, the task is added to a queue. Depending on the service load, generation usually completes in 50-60 seconds but can take somewhat longer.

GET /v2/generate/audio

Authorizations: Bearer <YOUR_AIMLAPI_KEY>

Query parameters:

▪️ generation_id (string, required)

Responses: default (application/json)
Example request:
GET /v2/generate/audio HTTP/1.1
Host: api.aimlapi.com
Authorization: Bearer <YOUR_AIMLAPI_KEY>
Accept: */*
Example response:
{
  "audio_file": {
    "url": "https://example.com"
  },
  "id": "text",
  "status": "queued",
  "error": null
}
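The status values seen in the responses above ('queued', then eventually 'completed') suggest a small polling helper. The function names below are illustrative; the 'waiting' and 'generating' statuses are taken from the full example later in this guide:

```python
import requests

AIMLAPI_KEY = "<YOUR_AIMLAPI_KEY>"  # placeholder: insert your real key

# Statuses that mean "not finished yet" (as used in the full example in this guide).
PENDING_STATUSES = ("queued", "waiting", "generating")

def is_pending(status):
    # True while the task is still in the queue or being generated.
    return status in PENDING_STATUSES

def fetch_result(gen_id):
    # One GET poll against /v2/generate/audio for the given generation_id.
    response = requests.get(
        "https://api.aimlapi.com/v2/generate/audio",
        params={"generation_id": gen_id},
        headers={"Authorization": f"Bearer {AIMLAPI_KEY}"},
    )
    response.raise_for_status()
    return response.json()
```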

Quick Code Example

Here is an example of generating an audio file based on a sample and a prompt, using the music model minimax-music.

Full example explanation

As an example, we will generate a song using the popular minimax-music model from the Chinese company MiniMax. As you can verify in its API Schemas above, this model accepts an audio sample as input—extracting information about its vocals and instruments for use in the generation process—along with a text prompt where we can provide lyrics for our song.

We used a publicly available sample from a royalty-free sample database and generated some lyrics in ChatGPT:

Side by side, through thick and thin, With a laugh, we always win. Storms may come, but we stay true, Friends forever—me and you!

To turn this into a model-friendly prompt (a single string), we wrapped the lyrics in hash symbols and separated the lines with line breaks:

'''
##Side by side, through thick and thin, \n\nWith a laugh, we always win. \n\nStorms may come, but we stay true, \n\nFriends forever—me and you!##
'''
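If you assemble the prompt in code, the same single string can be built from individual lyric lines. The '##' markers and blank-line separators simply mirror the formatting used in this guide; they are not a formally documented requirement of the API:

```python
# Lyric lines from the example above.
lyrics = [
    "Side by side, through thick and thin,",
    "With a laugh, we always win.",
    "Storms may come, but we stay true,",
    "Friends forever—me and you!",
]

# Join the lines with blank lines and wrap the whole text in '##' markers,
# mirroring the prompt format shown in this guide.
prompt = "##" + "\n\n".join(lyrics) + "##"
```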

A notable feature of our audio and video models is that uploading the prompt or sample, generating the content, and retrieving the final file from the server are handled through separate API calls. (AIML API tokens are only consumed during the first step—i.e., the actual content generation.)

We’ve written a complete code example that sequentially calls both endpoints — you can view and copy it below. Don’t forget to replace <YOUR_AIMLAPI_KEY> with your actual AIML API Key from your account!

The structure of the code is simple: there are two separate functions for calling each endpoint, and a main function that orchestrates everything.

Execution starts automatically from main(). It first runs the function that creates and sends a music generation task to the server — this is where you pass your prompt describing the desired musical fragment. This function returns a generation ID and the initial task status:

Generation: {'id': '906aec79-b0af-40c4-adae-15e6c4410e29:minimax-music', 'status': 'queued'}

This indicates that the file upload and our generation have been queued on the server (this took about 7 seconds in our case).

Next, main() launches the second function — the one that checks the task status and, once ready, retrieves the download URL from the server. This second function is called in a loop every 10 seconds.

During execution, you’ll see messages in the output:

  • If the file is not yet ready:

Still waiting... Checking again in 10 seconds.

  • Once the file is ready, a completion message appears with the download info. In our case, after five reruns of the second function (waiting a total of about 50-60 seconds), we saw the following output:

Generation: {'id': '906aec79-b0af-40c4-adae-15e6c4410e29:minimax-music', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/squirrel/files/koala/Oa2XHFE1hEsUn1qbcAL2s_output.mp3', 'content_type': 'audio/mpeg', 'file_name': 'output.mp3', 'file_size': 1014804}}

As you can see, the 'status' is now 'completed', and further in the output line, we have a URL where the generated audio file can be downloaded.


Listen to the track we generated below the code and response blocks.

import time
import requests

# Insert your AI/ML API key instead of <YOUR_AIMLAPI_KEY>:
aimlapi_key = '<YOUR_AIMLAPI_KEY>'

# Creating and sending an audio generation task to the server (returns a generation ID)
def generate_audio():
    url = "https://api.aimlapi.com/v2/generate/audio"
    payload = {
        "model": "minimax-music",
        "reference_audio_url": 'https://tand-dev.github.io/audio-hosting/spinning-head-271171.mp3',
        "prompt": '''
##Side by side, through thick and thin, \n\nWith a laugh, we always win. \n\n Storms may come, but we stay true, \n\nFriends forever—me and you!##
''', 
    }
    headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print("Generation:", response_data)
        return response_data


# Requesting the result of the generation task from the server using the generation_id:
def retrieve_audio(gen_id):
    url = "https://api.aimlapi.com/v2/generate/audio"
    params = {
        "generation_id": gen_id,
    }
    headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"}

    response = requests.get(url, params=params, headers=headers)
    return response.json()
    
    
# This is the main function of the program. From here, we sequentially call the audio generation and then repeatedly request the result from the server every 10 seconds:
def main():
    generation_response = generate_audio()
    if not generation_response:
        return None
    gen_id = generation_response.get("id")

    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = retrieve_audio(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status in ("generating", "queued", "waiting"):
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Generation complete:\n", response_data)
                return response_data
   
        print("Timeout reached. Stopping.")
        return None    


if __name__ == "__main__":
    main()
Response
Generation: {'id': '906aec79-b0af-40c4-adae-15e6c4410e29:minimax-music', 'status': 'queued'}
Still waiting... Checking again in 10 seconds.
Still waiting... Checking again in 10 seconds.
Still waiting... Checking again in 10 seconds.
Still waiting... Checking again in 10 seconds.
Still waiting... Checking again in 10 seconds.
Generation: {'id': '906aec79-b0af-40c4-adae-15e6c4410e29:minimax-music', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/squirrel/files/koala/Oa2XHFE1hEsUn1qbcAL2s_output.mp3', 'content_type': 'audio/mpeg', 'file_name': 'output.mp3', 'file_size': 1014804}}

Listen to the track we generated:
