The Macro: Market Research Is Expensive, Slow, and About to Get Disrupted
Traditional market research is an $80 billion industry built on a painfully slow feedback loop. You want to know what customers think about a pricing change? That will be a two-week survey project, a panel recruitment fee, a data cleaning phase, and an analysis period. Total cost: somewhere between $15,000 and $100,000 depending on sample size and methodology. Total time: four to eight weeks. By the time you get the results, your competitor has already shipped the feature you were testing.
The research industry knows this is a problem. Qualtrics, SurveyMonkey, and Typeform have all been trying to make surveys faster and cheaper for years. Panel companies like Prolific, Dynata, and Cint compete on speed of recruitment. But the fundamental bottleneck remains: you need actual humans to take the survey, and humans are slow, expensive, and increasingly unreliable as respondents. Survey fatigue is real. Response rates have been declining for decades. The people who do respond are increasingly unrepresentative of the general population.
Enter the idea that has been quietly gaining traction in academic circles for the past two years: what if you replaced human respondents with AI agents that simulate human behavior? Not as a substitute for all research, but as a rapid prototyping tool. Run your survey on 10,000 AI agents with different demographic profiles in an hour. Use the results to refine your questions, identify interesting segments, and generate hypotheses. Then validate the most important findings with a smaller, targeted human sample.
This is not science fiction. Multiple academic papers have shown that large language models can simulate survey responses with surprising accuracy across a range of demographic and psychographic dimensions. The correlation between AI-simulated responses and actual human responses on well-studied topics is often above 0.85. That is not perfect, but it is good enough to be useful as a directional tool.
The Micro: An Open-Source Framework From a Husband-and-Wife Team at YC
Expected Parrot came out of Y Combinator’s Fall 2025 batch with Robin Horton as CEO and John Horton as CTO. They are based in Cambridge, Massachusetts, and the team is five people. Their YC partner is Pete Koomen.
The core product is EDSL, which stands for Expected Parrot Domain-Specific Language. It is an open-source Python framework (MIT licensed) that lets researchers design and run surveys and experiments with AI agents and large language models. The framework supports multiple question types: multiple choice, free text, linear scales, and lists. You can parameterize prompts using scenarios built from CSV, PDF, PNG, and other data sources. You can design AI agent personas with customizable traits to simulate diverse respondent populations. And you can run surveys across multiple LLM providers simultaneously.
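The core pattern here, crossing a grid of agent personas with parameterized scenarios to generate one prompt per combination, can be sketched in plain Python. This is an illustrative sketch only; the names and structure are hypothetical and do not reproduce EDSL's actual API.

```python
from itertools import product

# Hypothetical persona-x-scenario sketch; illustrative, not EDSL's real API.
personas = [
    {"age": 28, "income": "low"},
    {"age": 45, "income": "high"},
]
scenarios = [{"product": "basic plan"}, {"product": "pro plan"}]

template = (
    "You are a {age}-year-old respondent with {income} income. "
    "How would you react to a 10% price increase on the {product}?"
)

# One prompt per (persona, scenario) pair: 2 x 2 = 4 prompts here,
# each of which would then be sent to every model under test.
prompts = [template.format(**p, **s) for p, s in product(personas, scenarios)]
```

The same grid logic extends to scenarios loaded from a CSV or PDF: each row becomes one more dictionary in `scenarios`, and the prompt count scales multiplicatively.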
That last part is important. Running the same survey on GPT-4, Claude, Gemini, and Llama and comparing results gives you a built-in robustness check. If all four models produce similar response patterns, you have higher confidence in the signal. If they diverge wildly, that tells you the question is sensitive to model architecture and you should be more cautious about the results.
The framework includes automatic caching of LLM responses, which means you can reproduce results without paying for the same API calls twice. There is a collaboration platform called Coop for sharing workflows and results with other researchers. And the deployment is flexible: you can run everything locally or on Expected Parrot’s servers.
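The caching idea is simple enough to show in a few lines: key every call by model plus prompt, and only hit the API on a cache miss. This is a minimal sketch of the general technique, not Expected Parrot's implementation, and the function names are invented.

```python
import hashlib
import json

_cache = {}  # in practice this would be persisted to disk

def cached_call(model: str, prompt: str, call_fn):
    """Return a cached LLM response, calling the API only on a miss."""
    key = hashlib.sha256(json.dumps([model, prompt]).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_fn(model, prompt)  # you pay only for misses
    return _cache[key]

# Stand-in for a real API call, so we can count how often it fires.
calls = []
def fake_llm(model, prompt):
    calls.append(1)
    return f"answer from {model}"

cached_call("gpt-4", "How do you rate the pricing?", fake_llm)
cached_call("gpt-4", "How do you rate the pricing?", fake_llm)  # cache hit
```

Rerunning an identical survey then costs nothing in API fees, which is what makes exact reproduction of a past run practical.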
John Horton’s background is worth noting. If this is the same John Horton who is a professor at MIT Sloan and has published extensively on the economics of online labor markets, then the academic credibility of this product is substantial. His research on using LLMs to simulate economic experiments has been influential in the field. That would make Expected Parrot a case of an academic researcher commercializing their own published work, which tends to produce more technically sound products than the average startup.
The competitive landscape is thin but growing. Synthetic Users is another YC company working on AI-simulated user research. UserTesting has been adding AI features. But most of the market research industry has not seriously engaged with AI simulation yet, which gives Expected Parrot a window to establish the tooling standard.
The Verdict
I think Expected Parrot is building something genuinely new. This is not AI applied to an existing workflow. This is AI creating a workflow that did not previously exist. The ability to run thousands of simulated surveys in minutes changes the research cycle from “expensive and slow” to “cheap and fast,” which changes how companies make decisions.
At 30 days, I want to see case studies showing where AI-simulated results matched subsequent human validation. The academic literature is promising, but commercial applications need their own proof points.
At 60 days, the Coop platform could become a moat. If researchers start sharing and building on each other’s AI survey workflows, the network effects could make Expected Parrot the GitHub of simulated research. That is a much bigger business than just selling a framework.
At 90 days, I want to know how the market research incumbents are responding. If Qualtrics or SurveyMonkey acquires or copies this approach, the competitive dynamics shift dramatically. First-mover advantage in research tooling does not last long unless you build a community around it.
The open-source strategy is smart. It builds trust with the academic community, which is the natural early adopter for this technology. Academics are skeptical by training. Letting them inspect the code and reproduce results addresses that skepticism directly. And once academic departments standardize on EDSL, their graduate students take that tool preference into industry with them. That is a distribution strategy that plays out over years, not quarters. I like the patience of it.