# CLI

`tellers-ai/tellers-cli` is an open-source CLI for interacting with Tellers from the terminal: search, edit, and generate videos. It is built with Rust, and the API client is generated from our OpenAPI spec.
## Installation

Install with Homebrew:

```shell
brew tap tellers-ai/tellers
brew install tellers
```

### Build from source
Clone the repo and build with Cargo:

```shell
# Generate the client crate from OpenAPI
scripts/generate_api.sh

# Build the CLI
cargo build --release
```

Requires Rust and, for client generation, openapi-generator (`brew install openapi-generator`).
## Authentication

Create an API key in the app: go to app.tellers.ai → user menu → API keys → Create new. Then set it:

```shell
export TELLERS_API_KEY=sk_...

# Optional: override the API base URL
export TELLERS_API_BASE=https://api.tellers.ai
```

All CLI actions consume credits from your account. Make sure you have credits on app.tellers.ai before running commands; new users who sign in with Google SSO get a few free credits to start.
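The environment-variable resolution above can be sketched as follows. This is an illustrative Python sketch, not the actual Rust implementation; treating `https://api.tellers.ai` as the built-in default base URL is an assumption inferred from the override example.

```python
import os

# Assumed default, matching the override example above
DEFAULT_API_BASE = "https://api.tellers.ai"

def resolve_config(env=os.environ):
    """Read the API key (required) and base URL (optional override)."""
    key = env.get("TELLERS_API_KEY")
    if not key:
        raise RuntimeError("TELLERS_API_KEY is not set; create one at app.tellers.ai")
    base = env.get("TELLERS_API_BASE", DEFAULT_API_BASE)
    return key, base
```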
## Chat / Prompt

Run a prompt against the Tellers agent:

```shell
# Streamed chat with REPL (Ctrl-C to exit)
tellers "Generate a video with cats"

# Single response, no follow-up
tellers --no-interaction "Generate a video with stock footage of cats"

# JSON endpoint: prints only the last tellers.json_result event
tellers --json-response "Generate a video with stock footage of cats"

# Specify model and tools explicitly
tellers --llm-model gpt-5.4-2026-03-05 --tool tool_a --tool tool_b "Your prompt"

# Interactive: configure JSON, no-interaction, tools, and model via TUI
tellers -i "Generate a video with stock footage of cats"

# Background mode: single request, prints response text
tellers --background "Generate a video with cats"

# Full-auto background mode
tellers --full-auto --background "Generate a video with cats"
```

### Flags
- `--no-interaction`: Single response only, no REPL.
- `--json-response`: Use the JSON endpoint; output is the last `tellers.json_result` event (implies no interaction).
- `--background`: Single request, no REPL; prints the response text (or the last JSON result when combined with `--json-response`).
- `--full-auto`: Full-auto behavior; typically combined with `--background`.
- `--tool <TOOL_ID>`: Enable a specific tool (repeat for multiple). Omit to use the default tools from your settings.
- `--llm-model <MODEL>`: LLM model to use (e.g. `gpt-5.4-2026-03-05`).
- `-i`, `--interactive`: Guided TUI to configure JSON response, no-interaction, tool selection, and model before sending.
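How these flags combine can be sketched in Python (illustrative only; the real CLI is Rust, and the function name and return shape here are invented for the sketch). It captures the two documented interactions: `--json-response` implies no interaction, and `--background` sends a single request rather than starting the REPL.

```python
def resolve_mode(json_response=False, no_interaction=False, background=False):
    """Return (starts_repl, output_kind) for a given flag combination.

    Illustrative sketch of the documented flag semantics:
    - the REPL runs only when no flag suppresses it,
    - --json-response switches output to the last json_result event.
    """
    starts_repl = not (no_interaction or json_response or background)
    output_kind = "json_result" if json_response else "text"
    return starts_repl, output_kind
```

For example, `tellers --json-response --background "..."` would print the last JSON result without opening a REPL.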
### Interactive tool selection

When using `-i`, a TUI checkbox list lets you pick which tools to enable. Use ↑/↓ to navigate, Space to toggle a tool, `a` to toggle all, and Enter to confirm. Each tool's checkbox is pre-set from its `enabled` field in your settings (missing = enabled).
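The pre-set rule ("missing = enabled") can be sketched as below. This is an assumption-laden Python illustration (the settings schema with `id`/`enabled` keys is invented for the sketch), not the CLI's actual Rust code.

```python
def initial_selection(tools):
    """Pre-set each checkbox: checked unless settings say enabled=False.

    `tools` is a hypothetical list of settings entries like
    {"id": "tool_a", "enabled": False}; a missing "enabled" counts as enabled.
    """
    return {tool["id"]: tool.get("enabled", True) for tool in tools}
```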
## Upload

Upload media files to Tellers:

```shell
tellers upload /path/to/media_folder
```

### Flags
- `--disable-description-generation`: Disable automatic time-based media descriptions (enabled by default).
- `--dry-run`: Analyze files without uploading.
- `--force-upload`: Upload even if already uploaded.
- `--local-encoding`: Enable local encoding before upload.
- `--parallel-uploads <N>`: Number of parallel uploads (default: 4).
- `--ext <EXT>`: Filter by extension (e.g. `--ext mp4 --ext mov`).
- `--in-app-path <PATH>`: In-app path for uploaded files.
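A plausible reading of the `--ext` filter, sketched in Python (the matching rule — case-insensitive, by file extension, empty filter meaning "accept all" — is an assumption, not confirmed behavior):

```python
def matches_ext(path, exts):
    """Return True if `path` passes a repeated --ext filter.

    Assumed semantics: no --ext flags means every file is accepted;
    otherwise the file's extension must match one of them, case-insensitively.
    """
    if not exts:
        return True
    if "." not in path:
        return False
    allowed = {e.lower().lstrip(".") for e in exts}
    return path.lower().rsplit(".", 1)[-1] in allowed
```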
Files ≥ 10 MiB use multipart S3 upload (presigned part URLs, then complete); smaller files use a single presigned PUT.
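The size-based strategy choice can be sketched as follows. Only the 10 MiB threshold comes from the text above; the 8 MiB part size (and the function itself) are assumptions for illustration — S3 only requires parts of at least 5 MiB.

```python
MULTIPART_THRESHOLD = 10 * 1024 * 1024  # 10 MiB, per the docs

def upload_strategy(size_bytes, part_size=8 * 1024 * 1024):
    """Return (strategy, part_count) for a file of `size_bytes`.

    Files below the threshold use a single presigned PUT; larger files
    are split into parts for multipart S3 upload (part size assumed).
    """
    if size_bytes < MULTIPART_THRESHOLD:
        return ("single_put", 1)
    parts = -(-size_bytes // part_size)  # ceiling division
    return ("multipart", parts)
```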
## API reference
For full endpoint details, see the API reference.