Video creation
as code.
What Tellers includes for developers:
- API for programmatic video and AI workflows
- In-house player for instant timeline rendering
- Open-source timeline format and CLI
- AI analysis, indexing, and search on your content
- Agent for video editing
- Unified abstraction over leading generative video models
- No video infrastructure to run
Building video features from scratch is a long detour
Video is one of the most infrastructure-heavy features to build. Tellers exists so you don't have to.
Building video infrastructure takes months
FFmpeg pipelines, transcoding workers, storage, CDN, thumbnail generation, indexing, AI search. Building this from scratch before shipping any feature is a significant detour.
LLM pipelines need a reliable rendering layer
LLMs can generate video timelines, but they still need a deterministic renderer and a centralized layer to search, store, and manipulate media across mixed sources and codecs. Tellers ships open-source tools (CLI, timeline library) for LLMs to drive timelines, plus an in-house player for instant previews — so your LLM pipeline has something reliable to plug into.
GenAI models fragment your integration surface
Each new generative video model ships with its own API, auth scheme, output format, and billing model to reconcile. Maintaining half a dozen separate integrations is overhead that only grows with the model landscape.
The developer surface area.
An API, a format, and a CLI. That's the entire integration surface — opinionated enough to be productive, open enough to fit any stack.
REST API
Send your media, prompts, scripts, or timelines. Tellers stores, indexes, searches, edits, and renders your videos. Asynchronous by design with webhook callbacks and streamed progress events.
SaaS integrations, content pipelines, LLM-driven video generation
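From the receiving side, the asynchronous flow can be sketched as a small webhook handler. The payload fields used here (`status`, `result_url`, `message`) are assumptions for illustration, not the documented callback schema:

```python
import json

# Minimal handler for an async job callback. Field names are
# illustrative, not taken from the official webhook reference.
def handle_webhook(body: bytes) -> str:
    event = json.loads(body)
    if event.get("status") == "done":
        return event["result_url"]       # job finished, fetch the result
    if event.get("status") == "failed":
        raise RuntimeError(event.get("message", "render failed"))
    return ""                            # progress event, keep waiting

url = handle_webhook(b'{"status": "done", "result_url": "https://cdn.example/out.mp4"}')
print(url)
```

Wire this into whatever HTTP framework your stack already uses; the handler only needs the raw request body.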
Open-Source Timeline Format
tellers-timeline is an OTIO-based JSON format for describing video compositions, backed by an open-source Rust library. Predictable, version-controllable, and straightforward for LLMs to generate. The same format used internally by the Tellers app.
Programmatic video scripting, LLM-generated timelines, reproducible builds
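A rough sketch of what an OTIO-based timeline document can look like, built in Python. Field names follow OpenTimelineIO's JSON schema conventions; the exact tellers-timeline schema may differ, so treat every key here as illustrative:

```python
import json

# Build a minimal OTIO-style timeline: one video track, two clips.
def clip(name, asset_id, start, duration, rate=25.0):
    return {
        "OTIO_SCHEMA": "Clip.2",
        "name": name,
        "media_reference": {
            "OTIO_SCHEMA": "ExternalReference.1",
            "target_url": f"tellers://assets/{asset_id}",  # placeholder URL scheme
        },
        "source_range": {
            "OTIO_SCHEMA": "TimeRange.1",
            "start_time": {"OTIO_SCHEMA": "RationalTime.1", "rate": rate, "value": start},
            "duration": {"OTIO_SCHEMA": "RationalTime.1", "rate": rate, "value": duration},
        },
    }

timeline = {
    "OTIO_SCHEMA": "Timeline.1",
    "name": "holiday-summary",
    "tracks": {
        "OTIO_SCHEMA": "Stack.1",
        "children": [{
            "OTIO_SCHEMA": "Track.1",
            "kind": "Video",
            "children": [
                clip("beach", "asset_123", 0, 120),
                clip("sunset", "asset_456", 300, 90),
            ],
        }],
    },
}

print(json.dumps(timeline, indent=2))
```

Because the document is plain JSON, it diffs cleanly in version control and is easy for an LLM to emit directly.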
CLI
The Tellers CLI makes it trivial to upload, analyse, and index your local videos. An even simpler command triggers the video editing agent: `tellers "Create a summary of my last holiday footage"`.
Bulk uploads, local indexing, CI/CD pipelines, one-shot agent commands
Infrastructure you don't have to build.
Beyond rendering, the Tellers platform handles video intelligence, playback, and model orchestration — all available through the same API.
Video Intelligence
Tellers automatically analyses, stores, and indexes every video asset — extracting scenes, transcripts, objects, entities, and semantics. Visual search ("find the scene with the whiteboard"), transcript search ("find the part about pricing"), entity search ("find every clip of the CEO"), and semantic search all work out of the box. No separate search infrastructure to build.
Content discovery, clip retrieval, transcript search, entity search, semantic timeline search
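One way the four search modes could map onto a single query endpoint. The `/search` path, parameter names, and response shape are assumptions for illustration, not the documented API:

```python
# Build the request for each search mode; sending it is then one
# requests.get(**search_request(...)) call. Endpoint and parameter
# names here are assumptions, not the official API reference.
BASE = "https://api.tellers.example"  # placeholder base URL
API_KEY = "your-api-key"

def search_request(query: str, mode: str = "semantic") -> dict:
    """Return kwargs for requests.get(). mode: visual|transcript|entity|semantic."""
    if mode not in {"visual", "transcript", "entity", "semantic"}:
        raise ValueError(f"unknown search mode: {mode}")
    return {
        "url": f"{BASE}/search",
        "headers": {"x-api-key": API_KEY},
        "params": {"q": query, "mode": mode},
    }

# e.g. requests.get(**search_request("the scene with the whiteboard", "visual"))
req = search_request("find the part about pricing", mode="transcript")
print(req["params"])
```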
Tellers Player
A proprietary player built for live timeline previews. Streams clips from multiple servers simultaneously, handles mixed codecs and resolutions in a single playback session, and renders HTML and image overlays directly on top of the timeline — including Picture-in-Picture. No pre-transcoding needed before preview.
Live timeline preview, multi-source playback, HTML overlay rendering, PiP
GenAI Model Aggregator
The Tellers API is a zero-overhead abstraction layer over the leading generative video models. Switch models or blend outputs with a single config change — no vendor lock-in, no separate API keys, no per-model integration work.
Veo · Wan · Seedance 2 · Nanobana · P-Video · Mirelo
How the API works
```python
import json
import os
import webbrowser

import requests
from sseclient import SSEClient

# Request a presigned upload URL
resp = requests.post(
    f"{BASE}/users/assets/upload_urls",
    headers={"x-api-key": API_KEY},
    json=[{
        "file_type": "video",
        "content_length": os.path.getsize(path),
        "source_file": {"title": "interview.mp4"},
    }],
)
upload = resp.json()[0]

# PUT the file directly to the presigned URL
with open(path, "rb") as f:
    requests.put(upload["presigned_put_url"], data=f)
asset_id = upload["asset_id"]

# Start an agent edit — returns an SSE stream
prompt = (
    f"Using asset {asset_id}: cut the best 60s "
    "and add a title card at the start."
)
response = requests.get(
    f"{BASE}/create",
    params={"prompt": prompt},
    headers={"x-api-key": API_KEY},
    stream=True,
)

# Consume the SSE stream: status, preview link, final result
for event in SSEClient(response).events():
    if event.event != "tellers.json_result":
        continue
    data = json.loads(event.data)
    print(data["status"], data.get("message"))
    if data["status"] == "done":
        preview_url = data["preview_url"]
        timeline_id = data["timeline_id"]
        break

# Open the instant preview in the Tellers Player
webbrowser.open(preview_url)

# Trigger the final render when you're happy with the edit
requests.post(
    f"{BASE}/renders",
    headers={"x-api-key": API_KEY},
    json={"timeline_id": timeline_id, "quality": "high"},
)
```

All the major GenAI video models. One API.
Tellers acts as a zero-overhead abstraction layer. Specify the model per render, blend outputs, or let Tellers route based on your requirements. New models are added to the aggregator as they reach production quality — your integration code doesn't change.
- No separate API keys per model
- Consistent request/response schema across all models
- Switch or mix models with a single prompt change
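The one-field switch can be sketched as follows. The `model` key and its values are assumptions based on the model list above, not a documented schema; `timeline_id` and `quality` mirror the render call shown earlier:

```python
# One render payload works for every model behind the aggregator;
# switching providers is a one-field change. The "model" key and the
# "auto" routing value are illustrative, not the documented schema.
def render_payload(timeline_id: str, model: str = "auto", quality: str = "high") -> dict:
    return {
        "timeline_id": timeline_id,
        "quality": quality,
        "model": model,  # "auto" lets Tellers route; otherwise pin a model
    }

veo_job = render_payload("tl_123", model="veo")
wan_job = render_payload("tl_123", model="wan")
# POST each to {BASE}/renders with requests.post(..., json=payload)
print(veo_job)
```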
What teams build with Tellers
Add AI video generation to your product without building or maintaining video infrastructure. Your users get video; you make one API call.
Use an LLM to generate a tellers-timeline JSON from a blog post, transcript, or data feed. POST it to the API. Get a finished video. End-to-end automation.
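A minimal sketch of that loop, with the LLM call stubbed out. The schema check, endpoint, and payload shape are placeholders for your own stack:

```python
import json

# Sketch of the LLM -> timeline -> render loop. The LLM call is stubbed;
# in production you would send the source text plus the tellers-timeline
# schema to your model and ask for JSON only.
def timeline_from_llm(source_text: str) -> dict:
    raw = '{"OTIO_SCHEMA": "Timeline.1", "name": "from-blog-post", "tracks": {}}'
    timeline = json.loads(raw)  # fail fast if the model returned invalid JSON
    if "OTIO_SCHEMA" not in timeline:
        raise ValueError("LLM output is not a timeline document")
    return timeline

timeline = timeline_from_llm("blog post text ...")
# Then: requests.post(f"{BASE}/renders", headers=..., json={"timeline": timeline})
print(timeline["name"])
```

Validating the JSON before POSTing keeps malformed LLM output out of your render queue.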
Integrate directly into production pipelines. Automate upload, synchronisation, and analysis of media so every asset is preprocessed and indexed, ready the moment you ask the agent for an edit.
Frequently asked questions
Common questions about the Tellers API and developer platform.