The Macro: Nobody Does User Research Until It Is Too Late
User research has a participation problem, and I do not mean recruiting participants. I mean that most companies do not do it at all. They know they should. Every product management book says so. Every design thinking workshop emphasizes it. But the reality is that recruiting participants takes weeks, conducting interviews takes days, analyzing transcripts takes more days, and by the time you have actionable insights, the engineering team has already shipped the feature you were trying to validate.
The result is that user research becomes something companies do after launch, if they do it at all. They ship, watch the numbers, and retroactively try to understand why users bounced or got confused. It is the equivalent of reading the crash report instead of doing the safety inspection.
This is not because product teams are lazy. It is because the traditional research process is genuinely slow and expensive. Hiring a full-time UX researcher costs $120,000 or more a year. Contracting a research firm costs $15,000 to $50,000 per study. Tools like UserTesting and dscout have brought costs down but still require significant setup and manual analysis. The economics push research toward large companies with dedicated research teams, and away from the startups and mid-stage companies that arguably need it most.
The AI-powered research space is starting to heat up. Sprig does in-product surveys with AI analysis. Maze automates usability testing. Dovetail handles the research repository and analysis. But most of these tools automate one piece of the workflow. The full pipeline, from recruiting to interviewing to analyzing, remains fragmented.
The Micro: The Full Pipeline in One Product
Stratify was founded by Siddhartha Javvaji (CEO) and Pratham Hombal (CTO) in San Francisco. They are a two-person team from Y Combinator’s Spring 2025 batch.
The pitch is end-to-end automation of user research. Stratify recruits participants, conducts AI-powered interviews, analyzes responses, and delivers actionable insights. The key word is “agentic.” These are not surveys with branching logic. They are AI agents that conduct actual interviews, following up on interesting answers, probing deeper when responses are vague, and adapting the conversation based on what the participant says.
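To make that distinction concrete, here is a minimal sketch of the difference in Python. Everything in it, the `is_vague` heuristic, the question strings, the `probes_left` budget, is my illustration, not anything Stratify has published; a real agent would make these judgments with an LLM over the full transcript rather than with keyword matching.

```python
VAGUE_MARKERS = ("kind of", "sort of", "i guess", "maybe", "not really sure")

def is_vague(answer: str) -> bool:
    """Crude stand-in for an LLM's judgment of answer specificity."""
    text = answer.lower()
    return len(text.split()) < 8 or any(marker in text for marker in VAGUE_MARKERS)

def next_question(answer: str, probes_left: int) -> str:
    """A survey with branching logic asks a fixed next question no matter
    what; an agent decides whether to probe based on what it just heard."""
    if probes_left > 0 and is_vague(answer):
        return "Can you walk me through the last specific time that happened?"
    return "Understood. What nearly stopped you from finishing that task?"

# One turn of the loop a live agent would run over a conversation:
print(next_question("I guess checkout was fine, maybe.", probes_left=2))
# Probes deeper, because the answer is short and hedged.
```

That adaptive branch is the whole difference between a questionnaire and an interview: the follow-up only exists because of what the participant just said.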
The platform supports multiple input types for testing. You can upload websites, prototypes, images, videos, and ad copy. The AI conducts the research session, captures screen recordings and video, and then processes everything through an analysis engine that produces structured insights.
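Reading between the lines, the data flowing through that pipeline probably looks something like the sketch below. The type names and the keyword-matching "analysis" are hypothetical placeholders; the point is the shape: raw sessions in, structured, evidence-backed insights out.

```python
from dataclasses import dataclass

@dataclass
class Session:
    participant_id: str
    stimulus: str            # a URL, prototype link, image, video, or ad copy
    transcript: list[str]    # what the participant said, turn by turn
    recording_path: str      # screen and video capture from the session

@dataclass
class Insight:
    theme: str
    evidence: list[str]      # supporting quotes pulled from transcripts
    severity: str            # e.g. "blocker", "friction", "preference"

def analyze(sessions: list[Session]) -> list[Insight]:
    """Toy stand-in for the analysis engine: flag a theme whenever
    participants use confusion language, and keep the quotes as evidence."""
    quotes = [line for s in sessions for line in s.transcript
              if "confus" in line.lower()]
    return [Insight("confusion", quotes, "friction")] if quotes else []
```

The structured output is what makes the "actionable" claim plausible: an insight tied to verbatim quotes is something a product manager can act on, where a pile of transcripts is not.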
There is at least one real testimonial on the site. Amogh Chaturvedi, a co-founder at another YC company, says Stratify “runs our customer research on autopilot” and lets them “focus on building features our users actually want.” That is exactly the value proposition, and hearing it from another founder, rather than just reading it in the marketing copy, gives it more weight.
The website is functional with clear product screenshots and a demo flow. There is no public pricing, which at this stage typically means they are doing founder-led sales and customizing deals based on usage. The site has some development artifacts visible (localhost references), which suggests they are iterating quickly and the public site is not always perfectly polished. That is fine for a pre-scale B2B product.
What I find most interesting is the positioning against the status quo rather than against specific competitors. Stratify is not saying “we are better than UserTesting.” They are saying “you should be doing research at all, and we make it possible.” That is a market-expansion play rather than a market-share play, and those tend to be more valuable when they work.
The Verdict
I think Stratify is solving a problem that most product teams have internalized as unsolvable. The “we should do more user research” guilt is universal, and a tool that reduces the time from question to insight from weeks to hours addresses the root cause of why research gets skipped.
The risk is quality. Human researchers are good at reading body language, following hunches, and knowing when a participant is being polite instead of honest. AI interviewers might miss nuance that matters. If the insights Stratify produces are directionally correct but lack depth, teams might make confident decisions based on shallow data, which could be worse than making uncertain decisions based on no data.
At thirty days, I want to see how the AI interviews compare to human-led interviews on the same topics; side-by-side quality is the only metric that matters at this stage. At sixty days, whether customers are using the insights to actually change product decisions or just filing reports. At ninety days, whether Stratify becomes a regular workflow tool or a thing teams try once and forget about. The difference between those outcomes will determine whether this is a real business or a clever demo.