AI Video Creation via MCP: Tellers' Early Bet on the Ecosystem

The Model Context Protocol crossed 97 million installs on March 25, 2026 — the fastest adoption curve of any AI infrastructure standard in history. Every major AI provider now ships MCP support. Tellers is taking its first steps into that ecosystem with an early beta MCP server.

The Tellers MCP server is a JavaScript package you can run locally today. It is not yet integrated into the Tellers CLI — this is a deliberate early release to test our MCP interface, gather developer feedback, and iterate before we ship a fully managed version. If you try it and have thoughts, we would genuinely like to hear them on the GitHub repo.

What MCP Is and Why It Became the Standard

The Model Context Protocol (MCP) is an open protocol, originally released by Anthropic in late 2024 and now managed by the Agentic AI Foundation under the Linux Foundation. It defines how AI agents connect to external tools and services — a universal handshake that lets an AI assistant discover capabilities and call them.

Before MCP, every integration required custom code per tool. Now a single MCP server exposes capabilities that any compatible AI assistant can use: Claude, Cursor, VS Code, Windsurf, and others.
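Under the hood, that handshake is JSON-RPC. A sketch of the discovery step: the first object is the assistant's request asking a server what it can do, and the second is an abridged response with typed tool definitions (the tool name and schema shown are illustrative, not the actual Tellers tools):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "create_video",
        "description": "Create a video from a script",
        "inputSchema": {
          "type": "object",
          "properties": { "script": { "type": "string" } }
        }
      }
    ]
  }
}
```

Because every server answers the same `tools/list` and `tools/call` methods, any MCP-compatible assistant can discover and invoke a new server without custom integration code.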

The numbers reflect how quickly it became foundational infrastructure: 97 million installs, 5,800+ community servers in the registry, and MCP integrations shipped by OpenAI, Google DeepMind, Cohere, and Mistral within a 90-day window in early 2026.

Three Ways to Call a Platform: API, Local MCP, and Hosted MCP

One thing that trips up developers encountering MCP for the first time is the difference between the integration modes available. They are not interchangeable — each has different setup requirements, tradeoffs, and use cases.

Direct API

You make HTTP requests to the platform’s REST API. You handle authentication, construct payloads, and parse responses yourself. This is the most flexible option and the one Tellers has always supported. It is the right choice when you are building custom automation, scripts, or backends that need fine-grained control.
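A minimal sketch of what the direct-API mode means in practice: you build the request yourself. The endpoint path, payload fields, and auth scheme below are assumptions for illustration, not the documented Tellers API.

```typescript
// Hypothetical Tellers REST call: endpoint and fields are assumptions,
// not the documented API. Check the official API reference for the
// real contract.
const TELLERS_API = "https://api.tellers.ai/v1";

interface CreateVideoRequest {
  script: string;
  aspectRatio?: "16:9" | "9:16";
}

// Build the raw HTTP request: you own auth, payload, and parsing.
function buildRequest(apiKey: string, payload: CreateVideoRequest) {
  return {
    url: `${TELLERS_API}/videos`,
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(payload),
  };
}

// Usage:
// const req = buildRequest(process.env.TELLERS_API_KEY!, { script: "..." });
// const res = await fetch(req.url, req);
```

Everything here is your responsibility, which is exactly what makes this mode flexible: nothing is hidden behind a protocol layer.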

Local MCP server

You run an MCP server process on your own machine. Your AI assistant (Claude Desktop, Cursor, etc.) is configured to connect to it over a local socket or stdio transport. The MCP server translates the assistant’s tool calls into API calls on your behalf. Setup requires running a process locally and editing your assistant’s config file — but once it is running, your AI assistant can invoke the platform’s capabilities directly in conversation.

This is where Tellers is today. The beta MCP server is a JavaScript package you clone and run locally. It is not a one-click install — you need Node.js and a few configuration steps to wire it into your AI assistant.
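Wiring the local server into an assistant typically means adding an entry to the assistant's MCP config file. A hypothetical Claude Desktop entry might look like this (the command, path, and environment variable name are assumptions; the repo's README is the source of truth):

```json
{
  "mcpServers": {
    "tellers": {
      "command": "node",
      "args": ["/path/to/tellers-mcp-server/index.js"],
      "env": {
        "TELLERS_API_KEY": "your-api-key"
      }
    }
  }
}
```

Once the assistant restarts, it launches the server as a child process over stdio and the Tellers tools appear alongside its built-in capabilities.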

Hosted MCP server

The platform runs the MCP server for you. It shows up in LLM app registries and MCP directories — you enable it with a click, no local process required. This is the smoothest developer experience and the direction the ecosystem is moving toward.

Tellers does not have a hosted MCP server yet, but it is coming. The beta release is our way of stress-testing the MCP interface before we invest in the hosted infrastructure.

What the Tellers MCP Server Does Today

The beta MCP server exposes Tellers’ AI video creation and editing capabilities as tools your AI assistant can call. The practical effect:

  • Conversational video creation: Ask Claude or Cursor to take a script and produce a video — without switching context to the Tellers app
  • Agent-driven production: Wire Tellers into an agentic pipeline where an agent decides what to create, generates it, and handles delivery
  • Integrated editing flows: Orchestrate Tellers alongside version control, CMS updates, and distribution APIs in a single assistant-driven workflow

The full list of available tools is documented in the GitHub repo. Since this is a beta, the interface may change — we are actively optimising the tool structure based on how people use it.

What’s Coming Next

We are working on a hosted MCP server that will be listed on the major LLM app registries and MCP directories. That version will not require any local setup — it will work like any other managed integration.

The beta is an early step in that direction. We wanted to validate the MCP interface and understand real usage patterns before building out the hosted infrastructure. If you use the beta and run into anything — missing tools, awkward interfaces, unclear errors — open an issue or leave feedback on the repo. It directly shapes what gets built next.

Frequently Asked Questions

What AI assistants support MCP? ChatGPT, Codex, Claude, Claude Code, Cursor, VS Code with the MCP extension, Windsurf, and a growing list of others. Any MCP-compatible host can use MCP servers.

Is the Tellers MCP server available in the CLI? Not yet. The current beta is a standalone JavaScript package you run locally — it is not integrated into the Tellers CLI. We’re considering a CLI integration.

What is the difference between the API and the MCP server? The MCP server is a layer on top of the Tellers REST API that speaks the MCP protocol, so your AI assistant can invoke Tellers as a tool without you writing the API calls yourself. Under the hood, the MCP server is still hitting the same API.
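That layering can be sketched as a translation function: an MCP tool call comes in, and the server maps it to an ordinary REST request. The tool name and route below are hypothetical; the real tool list lives in the GitHub repo.

```typescript
// Sketch of the MCP-to-REST translation layer. Tool names and routes
// are assumptions for illustration, not the actual Tellers interface.
interface ToolCall {
  name: string;
  arguments: Record<string, unknown>;
}

// Map an incoming MCP tool call onto the underlying REST API.
function toRestRequest(call: ToolCall) {
  switch (call.name) {
    case "create_video":
      return { method: "POST", path: "/v1/videos", body: call.arguments };
    default:
      throw new Error(`Unknown tool: ${call.name}`);
  }
}
```

The point is that no new capability exists at the MCP layer; it only re-expresses the API in a form assistants can discover and call.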

What is the difference between a local MCP server and a hosted MCP server? A local MCP server runs as a process on your machine. Your AI assistant connects to it locally. A hosted MCP server is managed by the platform and listed in app registries — you enable it without any local setup. Tellers currently has a local beta; a hosted version is in progress.

What can the Tellers MCP server do? The server exposes Tellers’ AI video creation and editing capabilities as MCP tools. The specific tools available are documented in the GitHub repo.

Is MCP open source? Yes. The MCP specification and SDKs are open source and governed by the Agentic AI Foundation under the Linux Foundation.

How can I give feedback on the Tellers MCP server? Open an issue or leave a comment on the tellers-ai/mcp-server GitHub repo. We are actively looking for feedback on what tools are missing, what the interface should look like, and what workflows people want to build.


Try the beta MCP server or start building with the Tellers API — and let us know what you think.