
How to Batch Process Videos Using an API — Automate at Scale

Mar 9, 2026

Learn how to batch process videos programmatically using the VideoComposer API. Code examples in curl, Node.js, and Python for automated bulk video processing.

How Do You Batch Process Videos Using an API?

You can batch process videos programmatically using the VideoComposer API's /api/v1/compose endpoint with the async=true parameter. Submit multiple jobs concurrently, receive a job_id for each, then poll for completion or use webhooks. This guide covers the full pipeline in curl, Node.js, and Python. Full API documentation is at /developers.


Why Does Batch Video Processing Matter?

Processing videos one at a time is not sustainable at scale. Whether you run an e-commerce store updating hundreds of product videos every season, a social media agency producing content for dozens of clients, or a content platform converting uploads to multiple formats — doing it manually wastes time and introduces inconsistency.

Key reasons teams adopt batch video processing APIs:

  • Time savings — processing 100 videos manually takes hours or days. An API pipeline handles the same workload in minutes while your team does other work
  • Consistency — every video gets the exact same watermark placement, subtitle style, and output format. No variation from person to person or run to run
  • Scalability — the same code that processes 10 videos handles 10,000. You do not need to add headcount as volume grows
  • Automation — trigger processing on upload events, schedule overnight runs, or integrate into your existing CI/CD pipeline

For one-off edits, the browser-based compose tool is faster. For recurring workflows at volume, the API is the only practical approach.


What is Batch Video Processing?

Batch video processing is the automated handling of multiple video files in a single pipeline run, without manual intervention for each file. Instead of opening a video editor, applying an operation, and exporting — you write code that submits all files to an API, waits for results, and downloads the processed outputs.

Common batch processing use cases include:

  • Product videos — apply a consistent watermark and format conversion to every product clip in a catalog
  • Social media content — add captions, trim intros, and convert to platform-specific aspect ratios for an entire content library
  • Format conversion at scale — convert a library of MP4 files to WebM for web delivery or vice versa for broadcast
  • Brand refreshes — update the watermark or intro/outro on hundreds of existing videos when brand guidelines change
  • Accessibility compliance — add subtitles to a backlog of uncaptioned videos across a platform

For detailed documentation on the API endpoints used in these workflows, see the video processing API developer guide.


Setting Up Your Batch Processing Pipeline

Authentication

All API requests require an Authorization header with your API key as a Bearer token. Store your key in an environment variable — never hardcode it in source files.

export VIDEOCOMPOSER_API_KEY="your_api_key_here"

Generate your API key from the developers page. Your key is the only credential you need — no SDK installation required.
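Because every request depends on this one environment variable, it is worth failing fast before a batch run starts. The helper below is a small illustrative sketch, not part of any official client:

```python
def require_api_key(env: dict) -> str:
    """Return the API key from an environment mapping, or raise with a clear message.

    Pass os.environ in real code; taking a mapping keeps the check easy to test.
    """
    key = env.get("VIDEOCOMPOSER_API_KEY")
    if not key:
        raise RuntimeError(
            "VIDEOCOMPOSER_API_KEY is not set; export it before running the pipeline."
        )
    return key
```

Call `require_api_key(os.environ)` once at startup so a misconfigured worker stops immediately instead of producing a batch of 401 responses.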

Submitting Batch Jobs With curl

Use async=true to submit jobs without waiting for processing to complete. The API returns a job_id immediately so you can submit the next file right away.

# Submit a batch job asynchronously
curl -X POST https://videocomposer.io/api/v1/compose \
  -H "Authorization: Bearer $VIDEOCOMPOSER_API_KEY" \
  -F "file=@product-video-001.mp4" \
  -F "operation=watermark" \
  -F "watermark_url=https://example.com/logo.png" \
  -F "output_format=mp4" \
  -F "async=true"

# Response: {"job_id": "job_abc123", "status": "queued"}

# Poll for job status
curl -X GET https://videocomposer.io/api/v1/jobs/job_abc123 \
  -H "Authorization: Bearer $VIDEOCOMPOSER_API_KEY"

# Response: {"job_id": "job_abc123", "status": "complete", "download_url": "..."}

Node.js Batch Processing Implementation

This implementation submits all jobs concurrently and polls for completion in parallel. It handles errors per-job so a single failure does not stop the entire batch.

import { readFileSync, readdirSync } from 'fs';
import path from 'path';

const API_KEY = process.env.VIDEOCOMPOSER_API_KEY;
const BASE_URL = 'https://videocomposer.io/api/v1';

async function submitBatchJob(videoPath, operation, options = {}) {
  // Node 18+ ships fetch, FormData, and Blob as globals — no extra packages needed
  const form = new FormData();
  form.append('file', new Blob([readFileSync(videoPath)]), path.basename(videoPath));
  form.append('operation', operation);
  form.append('output_format', 'mp4');
  form.append('async', 'true');

  // Attach any operation-specific options (watermark_url, subtitle file, etc.)
  for (const [key, value] of Object.entries(options)) {
    form.append(key, value);
  }

  const response = await fetch(`${BASE_URL}/compose`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${API_KEY}` },
    body: form, // fetch sets the multipart boundary header automatically
  });

  if (!response.ok) {
    const error = await response.json();
    throw new Error(`Submit failed for ${path.basename(videoPath)}: ${error.message}`);
  }

  const result = await response.json();
  return result.job_id;
}

async function pollJobStatus(jobId, intervalMs = 5000) {
  const url = `${BASE_URL}/jobs/${jobId}`;

  while (true) {
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${API_KEY}` },
    });
    const job = await response.json();

    if (job.status === 'complete') return job.download_url;
    if (job.status === 'failed') throw new Error(`Job ${jobId} failed: ${job.error}`);

    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

async function batchProcessVideos(videoDir, operation, options = {}) {
  const videoFiles = readdirSync(videoDir)
    .filter((f) => f.endsWith('.mp4'))
    .map((f) => path.join(videoDir, f));

  console.log(`Submitting ${videoFiles.length} jobs for operation: ${operation}`);

  // Submit all jobs concurrently; allSettled keeps one failed submission
  // from aborting the rest of the batch
  const submitResults = await Promise.allSettled(
    videoFiles.map((filePath) => submitBatchJob(filePath, operation, options))
  );
  const submitFailed = submitResults.filter((r) => r.status === 'rejected');
  const jobIds = submitResults
    .filter((r) => r.status === 'fulfilled')
    .map((r) => r.value);

  console.log(`${jobIds.length} jobs queued. Polling for completion...`);

  // Poll all jobs in parallel
  const pollResults = await Promise.allSettled(jobIds.map(pollJobStatus));

  const successful = pollResults.filter((r) => r.status === 'fulfilled');
  const failed = [...submitFailed, ...pollResults.filter((r) => r.status === 'rejected')];

  console.log(`Complete: ${successful.length} succeeded, ${failed.length} failed`);
  return { successful: successful.map((r) => r.value), failed };
}

// Example: watermark all MP4 files in a directory
batchProcessVideos('./videos', 'watermark', {
  watermark_url: 'https://example.com/brand-logo.png',
  watermark_position: 'bottom-right',
});

Python Batch Processing Implementation

This Python implementation uses concurrent.futures for parallel job submission and polling.

import os
import time
import requests
from pathlib import Path
from concurrent.futures import ThreadPoolExecutor, as_completed

API_KEY = os.environ.get("VIDEOCOMPOSER_API_KEY")
BASE_URL = "https://videocomposer.io/api/v1"

def submit_job(video_path: str, operation: str, options: dict | None = None) -> str:
    options = options or {}  # avoid a shared mutable default argument
    headers = {"Authorization": f"Bearer {API_KEY}"}

    with open(video_path, "rb") as f:
        files = {"file": (Path(video_path).name, f, "video/mp4")}
        data = {"operation": operation, "output_format": "mp4", "async": "true", **options}
        response = requests.post(f"{BASE_URL}/compose", headers=headers, files=files, data=data)

    response.raise_for_status()
    return response.json()["job_id"]

def poll_job(job_id: str, interval_sec: int = 5) -> str:
    url = f"{BASE_URL}/jobs/{job_id}"
    headers = {"Authorization": f"Bearer {API_KEY}"}

    while True:
        response = requests.get(url, headers=headers)
        response.raise_for_status()
        job = response.json()

        if job["status"] == "complete":
            return job["download_url"]
        if job["status"] == "failed":
            raise RuntimeError(f"Job {job_id} failed: {job.get('error')}")

        time.sleep(interval_sec)

def batch_process_videos(video_dir: str, operation: str, options: dict | None = None) -> dict:
    options = options or {}  # avoid a shared mutable default argument
    video_files = list(Path(video_dir).glob("*.mp4"))
    print(f"Submitting {len(video_files)} jobs for operation: {operation}")

    results = {"successful": [], "failed": []}

    # Submit all jobs with up to 10 concurrent threads
    with ThreadPoolExecutor(max_workers=10) as executor:
        submit_futures = {
            executor.submit(submit_job, str(f), operation, options): str(f)
            for f in video_files
        }

        job_ids = {}
        for future in as_completed(submit_futures):
            file_path = submit_futures[future]
            try:
                job_id = future.result()
                job_ids[job_id] = file_path
            except Exception as e:
                results["failed"].append({"file": file_path, "error": str(e)})

    # Poll all submitted jobs concurrently
    with ThreadPoolExecutor(max_workers=10) as executor:
        poll_futures = {executor.submit(poll_job, job_id): job_id for job_id in job_ids}

        for future in as_completed(poll_futures):
            job_id = poll_futures[future]
            try:
                download_url = future.result()
                results["successful"].append({"job_id": job_id, "url": download_url})
            except Exception as e:
                results["failed"].append({"job_id": job_id, "error": str(e)})

    print(f"Complete: {len(results['successful'])} succeeded, {len(results['failed'])} failed")
    return results


# Example: add watermark to all videos in a directory
batch_process_videos(
    "./product-videos",
    "watermark",
    {"watermark_url": "https://example.com/logo.png", "watermark_position": "bottom-right"},
)

Which Operations Support Batch Processing?

All major VideoComposer API operations support asynchronous batch submission via async=true. The table below shows which operations are available and typical batch use cases.

| Operation | Batch Support | Typical Use Case |
| --- | --- | --- |
| Merge | Yes | Combine product clips with intro/outro at scale |
| Trim | Yes | Cut intros and outros from a video library |
| Convert | Yes | Format conversion (MP4 to WebM) for an entire catalog |
| Watermark | Yes | Brand all videos at once with a logo overlay |
| Subtitles | Yes | Add captions to a video library for accessibility |

For individual operation references, see the dedicated guides: add watermark to video via API, add subtitles to video via API. You can also use the trim tool and convert tool directly in your browser for one-off operations.


Performance Tips for Batch Processing

Concurrency Limits

The API enforces concurrent job limits per account plan. Check your plan limits on the pricing page before designing your pipeline. If you exceed the limit, requests return a 429 Too Many Requests response.

A safe default is to cap concurrent submissions at 10 and increase based on observed throughput. The Python example above uses ThreadPoolExecutor(max_workers=10) as a starting point; apply an equivalent cap in Node.js rather than submitting an unbounded batch in one burst.

Webhook Callbacks vs Polling

Polling is simple but generates unnecessary requests when processing times are long. For batches over 50 videos, use webhook callbacks instead:

  1. Set the webhook_url parameter on each job submission to your endpoint
  2. The API sends a POST request to your URL when the job completes or fails
  3. Your server processes completions asynchronously without a polling loop

Polling is appropriate for development and small batches. Webhooks are the right choice for production pipelines with high volume.
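A webhook receiver boils down to parsing the delivery and dispatching on status. The sketch below assumes the webhook body mirrors the job-status responses shown earlier (job_id, status, download_url, error); confirm the exact payload shape in the API documentation before relying on it:

```python
import json

def handle_webhook(body: str) -> tuple[str, str]:
    """Dispatch one webhook delivery; returns (job_id, outcome).

    Field names follow the polling responses above and are an assumption
    about the webhook payload, not a documented contract.
    """
    payload = json.loads(body)
    job_id = payload["job_id"]
    if payload["status"] == "complete":
        # e.g. enqueue a download of payload["download_url"]
        return job_id, "complete"
    if payload["status"] == "failed":
        # e.g. record payload.get("error") so the file can be resubmitted
        return job_id, "failed"
    return job_id, "ignored"  # intermediate states need no action
```

Keep the handler fast and queue any heavy work (downloads, transcoding follow-ups) elsewhere, since webhook senders typically time out slow endpoints.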

Retry Strategies

Use exponential backoff when polling or retrying failed submissions. Start with a 5-second interval and double on each retry up to a 60-second maximum. Set a maximum retry count (5 is a reasonable default) to prevent infinite loops on permanently failed jobs.
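That schedule (start at 5 seconds, double each time, cap at 60 seconds, give up after a bounded number of retries) can be sketched as a small polling helper. This is illustrative glue code, not part of the API client:

```python
import time

def poll_with_backoff(check, base_sec: float = 5, max_sec: float = 60, max_retries: int = 5):
    """Call check() until it returns a result, doubling the wait each round.

    check() should return a truthy result when the job is done and None while
    it is still running; after max_retries waits, raise TimeoutError so a
    permanently failed job cannot spin forever.
    """
    interval = base_sec
    for _ in range(max_retries + 1):
        result = check()
        if result is not None:
            return result
        time.sleep(interval)
        interval = min(interval * 2, max_sec)  # exponential backoff with a cap
    raise TimeoutError(f"job still incomplete after {max_retries} retries")
```

In the Python pipeline above, `check` would wrap a single GET to `/jobs/{job_id}` and return the download URL once the status is `complete`.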

File Size Optimization

Compress input videos before submission to reduce upload time and processing duration. For batch jobs, a 30–50% reduction in input file size often cuts total pipeline time by a similar margin. The API supports all standard MP4 variants — use H.264 encoding for widest compatibility.
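One way to pre-compress inputs is to shell out to ffmpeg with H.264 and a moderate CRF before upload. The command builder below is a generic ffmpeg invocation, not a VideoComposer requirement; tune the CRF value to your quality needs:

```python
def build_compress_cmd(src: str, dst: str, crf: int = 28) -> list[str]:
    """Build an ffmpeg command that re-encodes a video to H.264 at a given CRF.

    Higher CRF means smaller files at lower quality; 23 is ffmpeg's default
    and 28 is a common choice for upload-size reduction.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",   # H.264 for widest compatibility, as noted above
        "-crf", str(crf),    # constant-quality target
        "-c:a", "copy",      # leave the audio stream untouched
        dst,
    ]

# Run with: subprocess.run(build_compress_cmd("in.mp4", "out.mp4"), check=True)
```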


Batch API vs Manual Editing: Cost Comparison

The cost advantage of API-based batch processing compounds with volume. The estimates below use typical market rates for contract video editors and VideoComposer API pricing.

| Method | 100 Videos | 1,000 Videos | Estimated Time |
| --- | --- | --- | --- |
| Manual editing | $500+ | $5,000+ | Weeks |
| VideoComposer API | ~$10 | ~$50 | Hours |
| Other batch APIs | $50–200 | $500–2,000 | Hours |

Manual editing costs reflect contractor or agency rates at $5–$10 per video for simple operations. VideoComposer API pricing is based on per-operation charges — see the pricing page for current rates. Other batch APIs include cloud video processing services with higher per-operation costs.

At 1,000 videos, the API approach is 100x cheaper than manual editing and costs significantly less than most competing batch video services.


Error Handling and Retry Logic

Common Errors

| Error Code | Meaning | Recommended Action |
| --- | --- | --- |
| 400 Bad Request | Invalid parameters or unsupported file format | Check parameter names and video format compatibility |
| 401 Unauthorized | Missing or invalid API key | Verify the VIDEOCOMPOSER_API_KEY environment variable is set correctly |
| 429 Too Many Requests | Concurrent job limit exceeded | Reduce concurrency; retry with exponential backoff |
| 500 Internal Server Error | Server-side processing failure | Retry with exponential backoff; check the job's error field |

Idempotency Keys

For batch operations where retries could cause duplicate processing, submit an idempotency_key parameter with each request. Use a deterministic key based on the input file hash and operation name — this ensures retrying a failed request does not create duplicate jobs.

import { createHash } from 'crypto';
import { readFileSync } from 'fs';

function getIdempotencyKey(filePath, operation) {
  const fileContent = readFileSync(filePath);
  const hash = createHash('sha256').update(fileContent).digest('hex').slice(0, 16);
  return `${operation}-${hash}`;
}

// Pass as a parameter when submitting the job
form.append('idempotency_key', getIdempotencyKey(videoPath, 'watermark'));

Per-Job Error Isolation

Always use Promise.allSettled (Node.js) or equivalent per-job error handling rather than Promise.all. A single failed job should not stop the rest of the batch. Log failures with their job IDs and file names so you can resubmit only the failed subset.


Frequently Asked Questions

What is batch video processing?

Batch video processing is automated handling of multiple video files in a single pipeline run. You submit all files to an API, specify the operation (trim, watermark, convert, etc.), and the API processes each file and returns download URLs. No manual intervention is required for each video.

How many videos can I process at once?

The number of concurrent jobs depends on your plan. You can submit as many jobs as your concurrent limit allows — additional submissions are queued and processed as capacity becomes available. Check the pricing page for concurrency limits on each plan.

How long does batch processing take?

Processing time depends on video duration, file size, and the operation type. Simple operations like watermarking or format conversion typically complete in under a minute per video. Complex operations like subtitle burning may take longer. With concurrent submission, a batch of 100 short videos often completes in under 10 minutes.

Can I mix different operations in one batch?

Yes. You can submit a watermark job for some files and a subtitle job for others in the same pipeline run. Each job submission is independent — specify the operation parameter per request. There is no requirement that all jobs in a batch use the same operation.
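Because each submission is independent, a mixed batch is just a list of per-file job specifications expanded into request fields. In this sketch, `operation` and `async` come from the API described above, while the file names and option keys other than `watermark_url` are hypothetical placeholders:

```python
def plan_mixed_batch(specs: list[tuple]) -> list[dict]:
    """Expand (file, operation, options) tuples into per-request form fields.

    Each entry becomes one independent /api/v1/compose submission; nothing
    ties the operations within a batch together.
    """
    return [
        {"file": path, "operation": op, "async": "true", **(options or {})}
        for path, op, options in specs
    ]

jobs = plan_mixed_batch([
    ("promo.mp4", "watermark", {"watermark_url": "https://example.com/logo.png"}),
    ("tutorial.mp4", "subtitles", {"subtitle_url": "https://example.com/tutorial.srt"}),
    ("raw.mp4", "convert", None),
])
```

Feeding each entry to the `submit_job` function from the Python implementation above yields one job_id per file, regardless of which operation it requested.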

What happens if one video in a batch fails?

A failed job does not affect other jobs in the batch. Each job has its own job_id and status. Use Promise.allSettled in Node.js or per-future error handling in Python to capture failures independently. The failed job returns a status: failed response with an error field describing the cause.

Is there a free tier for batch processing?

The browser-based compose tool is available free without an API key — it is the right choice for one-off operations. Programmatic API access, including batch processing, is available on paid plans. See /pricing for plan details, concurrent job limits, and pricing per operation.


Get Started With Batch Video Processing

The VideoComposer API makes batch video processing straightforward: submit jobs asynchronously with async=true, poll for completion or use webhooks, and download results. The same code handles 10 videos or 10,000.

  • API documentation — get your API key and full endpoint reference at /developers
  • Pricing and rate limits — concurrent job limits and per-operation pricing at /pricing
  • No-code option — process individual videos in the browser at /tools/compose
  • Trim videos — the trim tool and trim API handle cutting operations in batch
  • Convert formats — the convert tool and convert API handle format conversion at scale
  • Related guides — for operation-specific details, see the video processing API developer guide, add watermark via API, and add subtitles via API