Tellers for Developers

Video creation
as code.

What Tellers includes for developers:

  • API for programmatic video and AI workflows
  • In-house player for instant timeline rendering
  • Open-source timeline format and CLI
  • AI analysis, indexing, and search on your content
  • Agent for video editing
  • Unified abstraction over leading generative video models
  • No video infrastructure to run

READ THE DOCS
BOOK A DEMO

Building video features from scratch is a long detour

Video is one of the most infrastructure-heavy features to build. Tellers exists so you don't have to.

🏗

Building video infrastructure takes months

FFmpeg pipelines, transcoding workers, storage, CDN, thumbnail generation, indexing, AI search. Building this from scratch before shipping any feature is a significant detour.

🤖

LLM pipelines need a reliable rendering layer

LLMs can generate video timelines, but they still need a deterministic renderer and a centralized layer to search, store, and manipulate media across mixed sources and codecs. Tellers ships open-source tools (CLI, timeline library) for LLMs to drive timelines, plus an in-house player for instant previews — so your LLM pipeline has something reliable to plug into.

📦

GenAI models fragment your integration surface

Each new generative video model ships its own API, auth flow, output format, and billing scheme to reconcile. Maintaining six separate integrations is overhead that grows with the model landscape.

Build & Integrate

The developer surface area.

An API, a format, and a CLI. That's the entire integration surface — opinionated enough to be productive, open enough to fit any stack.

REST API

Send your media, prompts, scripts, or timelines. Tellers stores, indexes, searches, edits, and renders your videos. Asynchronous by design with webhook callbacks and streamed progress events.

SaaS integrations, content pipelines, LLM-driven video generation
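As a sketch of what an asynchronous edit request might look like, the payload below assembles a prompt, source assets, and a webhook callback URL. The endpoint path, field names, and asset ID are illustrative assumptions, not the documented schema — see the API reference for the real one.

```python
import json

def build_render_request(asset_ids, prompt, webhook_url):
    """Assemble an illustrative edit/render payload.

    Field names here are assumptions for the sketch; consult the
    Tellers API reference for the documented request schema.
    """
    return {
        "assets": asset_ids,
        "prompt": prompt,
        "callback": {"webhook_url": webhook_url},
    }

payload = build_render_request(
    ["asset_123"],  # hypothetical asset id returned from upload
    "Cut a 30-second teaser with captions",
    "https://example.com/hooks/tellers",
)
print(json.dumps(payload, indent=2))
```

The webhook URL is where Tellers would deliver the finished render; progress in between arrives as streamed events.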

Open-Source Timeline Format

tellers-timeline is an OTIO-based JSON format for describing video compositions, backed by an open-source Rust library. Predictable, version-controllable, and straightforward for LLMs to generate. The same format used internally by the Tellers app.

Programmatic video scripting, LLM-generated timelines, reproducible builds
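To give a feel for the format, here is a minimal sketch of what a tellers-timeline document could look like when built programmatically. The track and clip field names are assumptions for illustration; the authoritative OTIO-based schema lives in the open-source spec.

```python
import json

# Hypothetical shape of a tellers-timeline document. Only the
# OTIO_SCHEMA convention is borrowed from OTIO; the rest of the
# field names are illustrative, not the published schema.
timeline = {
    "OTIO_SCHEMA": "Timeline.1",
    "name": "holiday-summary",
    "tracks": [
        {
            "kind": "video",
            "clips": [
                {"asset_id": "asset_123", "start": 0.0, "duration": 4.5},
                {"asset_id": "asset_456", "start": 4.5, "duration": 3.0},
            ],
        }
    ],
}

# Serialise to JSON -- the same document an LLM could emit directly.
doc = json.dumps(timeline, indent=2)
```

Because the document is plain JSON, it diffs cleanly in version control and can be generated by an LLM, a script, or by hand.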

CLI

The Tellers CLI makes it trivial to upload, analyse, and index your local videos. An even simpler command triggers the video editing agent: `tellers "Create a summary of my last holiday footage"`.

Bulk uploads, local indexing, CI/CD pipelines, one-shot agent commands

Platform Capabilities

Infrastructure you don't have to build.

Beyond rendering, the Tellers platform handles video intelligence, playback, and model orchestration — all available through the same API.

Video Intelligence

Tellers automatically analyses, stores, and indexes every video asset — extracting scenes, transcripts, objects, entities, and semantics. Visual search ("find the scene with the whiteboard"), transcript search ("find the part about pricing"), entity search ("find every clip of the CEO"), and semantic search all work out of the box. No separate search infrastructure to build.

Content discovery, clip retrieval, transcript search, entity search, semantic timeline search
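A search request against the intelligence layer might be shaped like the sketch below. The query modes mirror the four search types above; the payload structure itself is an assumption, not the documented API.

```python
def build_search_query(text, mode="semantic"):
    """Illustrative search payload.

    The four modes mirror the search types Tellers exposes; the
    field names are assumptions for this sketch, not the real schema.
    """
    if mode not in {"visual", "transcript", "entity", "semantic"}:
        raise ValueError(f"unknown search mode: {mode}")
    return {"query": text, "mode": mode}

q = build_search_query("find the scene with the whiteboard", mode="visual")
```

The same query shape would serve transcript search ("find the part about pricing") by swapping the mode.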

Tellers Player

A proprietary player built for live timeline previews. Streams clips from multiple servers simultaneously, handles mixed codecs and resolutions in a single playback session, and renders HTML and image overlays directly on top of the timeline — including Picture-in-Picture. No pre-transcoding needed before preview.

Live timeline preview, multi-source playback, HTML overlay rendering, PiP

GenAI Model Aggregator

The Tellers API is a zero-overhead abstraction layer over the leading generative video models. Switch models or blend outputs with a single config change — no vendor lock-in, no separate API keys, no per-model integration work.

Veo · Wan · Seedance 2 · Nanobana · P-Video · Mirelo

How the API works

Illustrative — see API reference for the full schema.
POST /users/assets/upload_urls
import os
import requests

BASE = os.environ["TELLERS_API_BASE"]     # your API base URL
API_KEY = os.environ["TELLERS_API_KEY"]   # your API key
path = "interview.mp4"                    # local file to upload

# Request a presigned upload URL
resp = requests.post(
    f"{BASE}/users/assets/upload_urls",
    headers={"x-api-key": API_KEY},
    json=[{
        "file_type": "video",
        "content_length": os.path.getsize(path),
        "source_file": {"title": "interview.mp4"},
    }],
)
upload = resp.json()[0]

# PUT the file directly to the presigned URL
with open(path, "rb") as f:
    requests.put(upload["presigned_put_url"], data=f)

asset_id = upload["asset_id"]
Model Aggregator

All the major GenAI video models. One API.

Tellers acts as a zero-overhead abstraction layer. Specify the model per render, blend outputs, or let Tellers route based on your requirements. New models are added to the aggregator as they reach production quality — your integration code doesn't change.

  • No separate API keys per model
  • Consistent request/response schema across all models
  • Switch or mix models with a single prompt change
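The bullets above can be sketched as a single request shape where only the model field varies. The field names and model identifier strings are illustrative assumptions, not the documented schema.

```python
def build_generation_request(prompt, model):
    """One request shape across all aggregated models.

    Only the model field changes between providers; the field names
    and model identifiers here are assumptions for the sketch.
    """
    return {"prompt": prompt, "model": model}

req_a = build_generation_request("a drone shot over a fjord", model="veo")
# Switching providers is a one-field change on the same payload:
req_b = {**req_a, "model": "wan"}
```

Auth, output format, and billing stay identical across both requests; only the routing differs.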

What teams build with Tellers

SaaS video features

Add AI video generation to your product without building or maintaining video infrastructure. Your users get video; you make one API call.

LLM-driven content pipelines

Use an LLM to generate a tellers-timeline JSON from a blog post, transcript, or data feed. POST it to the API. Get a finished video. End-to-end automation.
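The pipeline above can be sketched in a few lines, with the LLM call stubbed out. The timeline shape and the render step are illustrative assumptions; in a real pipeline the stub would be an actual LLM call and the body would be POSTed to the Tellers API.

```python
import json

def llm_generate_timeline(source_text):
    """Stub standing in for an LLM call that emits a tellers-timeline
    document from source content. The schema shown is illustrative."""
    return {
        "OTIO_SCHEMA": "Timeline.1",
        "name": "blog-post-video",
        "tracks": [{"kind": "video", "clips": []}],
    }

timeline = llm_generate_timeline("...blog post text...")
body = json.dumps(timeline)
# In a real pipeline: POST `body` to the Tellers render endpoint,
# then await the finished video via the webhook callback.
```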

Media company workflows

Flexible integration into existing production pipelines. Automate upload, synchronisation, and analysis of media so every asset is preprocessed and indexed — ready the moment you ask the agent for an edit.

Frequently asked questions

Common questions about the Tellers API and developer platform.

What is the Tellers API?
The Tellers API is a REST API for programmatic video workflows. You upload media, send prompts, scripts, or timelines, and Tellers stores, indexes, searches, edits, and renders your videos. It handles AI asset generation, captioning, audio sync, transcoding, and delivery. Progress is streamed via server-sent events, and final deliveries are sent through webhook callbacks.
How do I add AI video generation to my SaaS product?
Integrate the Tellers REST API. Upload your users' media (or skip upload entirely and generate from stock footage via a prompt), then request an edit with a prompt or a tellers-timeline. Tellers streams progress, gives you an instant preview link, and renders the final video asynchronously. There is no video infrastructure to provision or manage on your side.
What is the tellers-timeline format?
tellers-timeline is an open-source, OTIO-based JSON format for describing video compositions, backed by an open-source Rust library. It defines scenes, assets, captions, timing, and output parameters in a structured, predictable schema. It is designed to be generated by LLMs, written by hand, or produced programmatically, and it is the same format used internally by the Tellers app.
What is the Tellers video intelligence layer?
Every video processed by Tellers is automatically analysed, stored, and indexed — extracting scenes, transcripts, objects, entities, and semantic metadata. Out of the box you get visual search ("find the scene with the whiteboard"), transcript search ("find the part about pricing"), entity search ("find every clip of the CEO"), and semantic search, without building a separate search or tagging infrastructure.
What makes the Tellers Player different from a standard video player?
The Tellers Player is designed specifically for timeline previews. It streams clips from multiple servers simultaneously, handles mixed codecs and resolutions in a single playback session, and renders HTML elements and image overlays directly on the video — including Picture-in-Picture mode. This enables instant preview of a timeline without pre-transcoding or waiting for a full render.
Which AI video generation models does Tellers support?
Tellers currently supports Veo, Wan, Seedance 2, Nanobana, P-Video (Pruna), and Mirelo through a unified API. You can switch or mix models with a single prompt change — no extra API keys, no per-model integration work, and no custom billing to reconcile. New models are added to the aggregator as they become production-ready.
How does Tellers handle billing across multiple GenAI models?
Tellers absorbs each model's custom billing and pricing model behind a single, unified credit system. You buy credits once and use them across every model the aggregator supports, instead of reconciling a separate invoice, auth flow, and pricing scheme per provider.
How do I upload and index my local media?
Use the Tellers CLI or the /users/assets/upload_urls endpoint. The CLI command `tellers upload /path/to/media_folder` walks a directory, uploads via presigned URLs (multipart above 10 MiB), and triggers automatic analysis and indexing so every asset is immediately searchable and editable by the agent.
Can I trigger the video editing agent from the command line?
Yes. Once your media is uploaded and indexed, a single command triggers the editing agent: `tellers "Create a summary of my last holiday footage"`. The agent plans the edit, assembles a timeline from your indexed assets, and renders the result.
Can I integrate Tellers into a media company production workflow?
Yes. Many teams wire Tellers into existing production pipelines: automated upload and synchronisation of incoming media, preprocessing and indexing in the background, then agent-driven or timeline-driven edits on demand. Because indexing runs continuously, assets are ready the moment an editor or LLM asks for a cut.
Does Tellers support batch video generation?
Yes. The API is stateless per request — you can parallelize as many requests as your plan allows. Batch video generation for content pipelines, daily automation, and multi-asset workflows is a primary use case.
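Because each request is independent, fan-out is ordinary client-side concurrency. The sketch below parallelises five renders with a thread pool; the render call is stubbed, since a real version would POST to the API and return a job ID.

```python
from concurrent.futures import ThreadPoolExecutor

def render_one(prompt):
    """Stand-in for a Tellers render call. A real implementation
    would POST to the API and return the job id from the response."""
    return {"prompt": prompt, "status": "queued"}

prompts = [f"Daily recap video #{i}" for i in range(5)]
with ThreadPoolExecutor(max_workers=5) as pool:
    jobs = list(pool.map(render_one, prompts))
```

Completion then arrives per job via webhook, so the client never blocks on rendering.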
Can I use Tellers with an LLM to automate video creation end-to-end?
Yes. The tellers-timeline format and Rust library make it straightforward for LLMs to generate structured timelines from source content — blog posts, transcripts, data feeds, or prompts. Combined with the CLI and the in-house player, LLMs get open-source tooling to drive timelines and a deterministic renderer to turn them into finished videos, while Tellers handles the centralized layer to search, store, and manipulate the underlying media.

Ready to create your story? Start your adventure today.