The Macro: AI Video Is Moving Faster Than Anyone Can Keep Up With
The generative video space is in a strange moment. The technology is advancing at a pace that makes last quarter’s outputs look primitive, but the tools for actually using it are scattered across a dozen different platforms, each with its own interface, pricing model, and limitations.
Runway has Gen-3 Alpha. Kling came out of nowhere and produces surprisingly good results. Sora got the hype cycle but shipped late. Hailuo, Veo, Flux. Each model has strengths. Runway is strong on cinematic motion. Kling handles realistic human movement well. Sora generates longer clips with better narrative coherence. But if you want to compare outputs or combine the best of each, you are bouncing between tabs, re-entering prompts, downloading files, and stitching things together in a separate editor.
The market is early and growing fast. Grand View Research estimated the AI video generator market at $554 million in 2023, projecting it past $2.1 billion by 2030. Those numbers might already be conservative given how quickly adoption has accelerated.
For creators making short-form content for TikTok, Reels, and YouTube Shorts, the workflow problem is acute. They need to produce volume. They need it fast. They cannot afford to spend forty minutes per clip bouncing between generation tools and editors. What they want is a single environment where they can generate, edit, and export without context switching.
That is the core thesis behind Prism.
The Micro: One Editor, Six Models, and a Credit System That Actually Makes Sense
Prism is a browser-based video creation platform that bundles access to multiple AI models under one interface. The current model roster includes Veo (from Google DeepMind), Kling, Sora, Hailuo, Flux, and SeedDream. You pick the model, enter a prompt, and generate. If you do not like what Kling produced, you try Veo with the same prompt. Same interface, same project, no tab switching.
But Prism is not just a model aggregator with a dropdown menu. The product includes a timeline editor, storyboard composition, image generation, lip sync tools, and a template library. You can build a scene-by-scene project, generate assets for each scene from different models, and assemble the final cut without leaving the platform. That is a meaningfully different product than “paste a prompt, get a video.”
Rajit Khanna, Alex Liu, and Land Tantichot co-founded the company and went through Y Combinator’s Spring 2025 batch. The team is three people in San Francisco. Khanna handles the product and content side (his X handle is @rajitwrites), Liu is CTO, and Tantichot rounds out the founding team. For a three-person team, the product surface area is ambitious, but the multi-model approach lets them leverage infrastructure others have built rather than training their own models.
The pricing model uses credits, where 1 credit equals $0.01 of compute. Costs vary by model. That is a smart structure because it gives users transparent pricing without Prism having to absorb the variance in compute costs across different model providers. A Kling generation might cost a different number of credits than a Sora generation, and the user sees exactly what they are spending.
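The mechanics of that kind of credit system are simple enough to sketch. The only fact from the piece is the exchange rate (1 credit = $0.01); the model names and per-second rates below are invented for illustration and are not Prism's actual pricing:

```python
CREDIT_VALUE_USD = 0.01  # stated: 1 credit equals one cent of compute

# Hypothetical per-model rates in credits per second of generated video.
# These numbers are illustrative only, not Prism's real price list.
CREDITS_PER_SECOND = {
    "kling": 12,
    "sora": 20,
    "veo": 15,
}

def generation_cost(model: str, seconds: int) -> tuple[int, float]:
    """Return (credits, usd) for a single generation request."""
    rate = CREDITS_PER_SECOND[model]
    credits = rate * seconds
    return credits, credits * CREDIT_VALUE_USD

credits, usd = generation_cost("kling", 5)
print(f"{credits} credits = ${usd:.2f}")  # 60 credits = $0.60
```

The appeal of the structure is visible in the code: the provider-specific variance lives entirely in the rate table, while the user-facing unit stays a fixed cent-denominated credit.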
The use cases break into three buckets: short-form content (memes, skits, viral clips), marketing (UGC-style ads and commercials), and storytelling (series and documentaries). The first bucket is the highest volume. The second is the highest value per customer. The third is the most technically demanding.
The community angle is notable. Prism has a Discord, accounts on Instagram, LinkedIn, YouTube, TikTok, and X. For a video-first product, showing up on video-first platforms is the right distribution strategy. The template library, where users can copy proven video formats, doubles as both a feature and a growth loop.
For context on how the broader AI creative tools space is evolving, the Cardboard approach to agentic video editing tackles the post-production side of the same workflow Prism is automating on the generation side.
The Verdict
Prism is making two bets simultaneously. First, that creators want a unified studio rather than individual model access. Second, that the editor layer on top of generation (storyboards, timeline, lip sync) is what turns a novelty into a workflow tool.
I think both bets are reasonable. The fragmentation in AI video is real and annoying. Anyone who has tried to use three different generation tools for a single project knows the friction. And the history of creative tools shows that the editing environment, not the rendering engine, is usually what earns user loyalty. Premiere won because of the timeline, not because of the codec.
The risk is platform dependency. Prism does not control the models it offers. If Kling changes its API pricing or Sora restricts third-party access, the product has to adapt fast. Model aggregators are powerful when they have leverage and vulnerable when they do not.
At 30 days, I would want to see average session length. Are people using the timeline editor or just generating one-off clips? At 60 days, I would look at how often users switch between models within a single project. That behavior is the whole thesis. At 90 days, the question is whether any creator or marketing team has made Prism their primary production tool rather than an occasional experiment.
The product is ambitious for a three-person team. The model-agnostic approach is the right call for this moment. Whether the editing layer is deep enough to hold professional users is the question that will determine the ceiling.