On April 7, 2026, an unknown model called HappyHorse-1.0 appeared on the Artificial Analysis video leaderboard. Within days it had climbed to #1 in both text-to-video and image-to-video generation, ranked by blind human evaluations. On April 10, Alibaba revealed they built it.
It is one of the more interesting stories in AI video this year — and it tells us something real about where the field is heading.
What Is HappyHorse-1.0?
HappyHorse-1.0 is a video generation model developed by Alibaba’s Taotian Future Life Lab, a team led by Zhang Di, formerly VP of Kuaishou and head of Keling Technology. The team joined Alibaba at the end of 2025. HappyHorse is their first public model.
Key capabilities:
- Text-to-video and image-to-video generation
- Integrated audio synthesis — video and audio are generated in a single pass, not two separate models
- Six-language support: Chinese, English, Japanese, Korean, German, and French
- #1 Elo score in text-to-video (1,333) and image-to-video (1,392) on the Artificial Analysis blind leaderboard as of April 2026
- #2 in both categories when audio is included in the ranking
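Scores like these are typically derived from pairwise blind votes with an Elo-style rating system: each comparison nudges the winner's rating up and the loser's down. The leaderboard's exact update rule is not public, so the sketch below assumes the standard Elo formula with a conventional K-factor:

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, a_won: float, k: float = 32.0):
    """Update both ratings after one blind pairwise vote.

    a_won is 1.0 if voters preferred model A's clip, 0.0 otherwise.
    k controls how fast ratings move (32 is a common default; the real
    leaderboard's K-factor is an assumption here).
    """
    e_a = expected_score(r_a, r_b)
    r_a_new = r_a + k * (a_won - e_a)
    r_b_new = r_b + k * ((1.0 - a_won) - (1.0 - e_a))
    return r_a_new, r_b_new

# Example: a 1,392-rated model wins one comparison against a 1,333-rated one.
new_a, new_b = elo_update(1392, 1333, a_won=1.0)
```

Because the higher-rated model was already expected to win, its rating moves only modestly; an upset by the lower-rated model would move both ratings much further. This is why a sustained #1 position over many blind votes is a stronger signal than any single matchup.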
Entering the leaderboard anonymously was a calculated strategy: without branding, the model was judged purely on output quality in blind human tests. That it reached #1 on that basis is a meaningful signal.
Why the Anonymous Launch Is Worth Paying Attention To
Most model releases follow a predictable arc: announcement, curated demos, benchmark claims. HappyHorse reversed this. Blind-test leaderboards are harder to game — real users compare real outputs without knowing which model produced which clip.
When an unbranded model rises to #1 on merit, then turns out to come from a lab with serious engineering resources, it indicates genuine quality improvements rather than marketing lift.
This approach also reflects how competitive AI video has become. OpenAI shut down Sora on March 24, citing high compute costs and a shift toward enterprise clients and LLM training. ByteDance’s Seedance 2.0 was paused due to copyright disputes with major Hollywood studios. Into that gap, Chinese labs are shipping aggressively.
What It Does Not Mean Yet
API access is not available. Alibaba has confirmed HappyHorse-1.0 will be open-sourced, with model weights and a public API planned for April 30. Until then, it remains in benchmark territory — impressive results, no public access.
HappyHorse-1.0 is not currently available on Tellers. Once the API launches and we can evaluate the model in production, we will consider integrating it alongside the video generation models we already support.
What Is Available for AI Video Creation Today
While HappyHorse is a preview of what is coming, there is already a strong set of AI video generation models available on Tellers today:
- Runway Gen 4.5 — added to Tellers on April 6, with camera motion controls and high-fidelity output
- LTX Video — now supports first and last frame control, letting you define where a scene starts and ends
- Kling — consistently strong on human motion, faces, and lip-sync
- Hailuo — cost-effective 1080p generation with reliable quality
Beyond model access, Tellers gives you the ability to combine AI-generated clips with your own footage, edit on a timeline, apply camera motion, and build complete workflows — all via the Tellers API.
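To make that concrete, a render job that mixes a generated clip with your own footage can be described as a single timeline payload. The field names and structure below are hypothetical, chosen only to illustrate the idea; consult the Tellers API documentation for the actual request shape:

```python
import json

def build_timeline_job(generated_clip_url: str, own_footage_url: str) -> dict:
    """Assemble a hypothetical render-job payload that places an
    AI-generated clip and user-uploaded footage on one timeline.
    All keys here ("timeline", "camera_motion", etc.) are illustrative,
    not the documented Tellers schema."""
    return {
        "timeline": [
            {"source": generated_clip_url, "start": 0.0, "end": 4.0,
             "camera_motion": {"type": "pan", "direction": "left"}},
            {"source": own_footage_url, "start": 4.0, "end": 9.0},
        ],
        "output": {"resolution": "1080p", "format": "mp4"},
    }

job = build_timeline_job(
    "https://example.com/generated/clip.mp4",
    "https://example.com/uploads/interview.mp4",
)
payload = json.dumps(job)  # would be POSTed to the API in a real workflow
```

The point of the sketch is the workflow, not the schema: one request describes both generated and uploaded media, so editing, camera motion, and assembly happen in a single pipeline rather than across separate tools.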
FAQ
Is HappyHorse-1.0 open source? Alibaba has confirmed it will be fully open-sourced. Model weights and a public GitHub repository are expected around April 30, 2026.
Can I use HappyHorse-1.0 on Tellers today? Not yet. The public API is not available. Once it is, we will evaluate it for integration.
What AI video models does Tellers currently support? Tellers supports Runway Gen 4.5, LTX Video (with first and last frame control), Kling, Hailuo, and more. The full model list is visible in the app.
What happened to OpenAI’s Sora? OpenAI shut down Sora on March 24, 2026. The compute was redirected toward LLM training and enterprise products.
Why did HappyHorse launch anonymously? Alibaba used an anonymous submission strategy to let the model’s output quality speak for itself in blind human evaluations, without brand recognition influencing the results.
The AI video generation landscape is genuinely competitive right now, and HappyHorse-1.0 is a reminder that the #1 model can change in a week. If you want to work with the best available models today, start creating on Tellers.