The Macro: Institutional Knowledge Is the Most Expensive Thing Companies Keep Losing
Every company has the same problem. The people who know how things actually work are carrying that knowledge in their heads. When they leave, get promoted, or just go on vacation, the knowledge leaves with them. Playbooks exist in scattered docs. Tribal knowledge lives in Slack threads that nobody will ever search for. Best practices are whatever the last person to touch the project decided to do.
The knowledge management market has been trying to solve this for decades. Confluence, Notion, SharePoint, internal wikis, knowledge bases, onboarding docs. The tools exist. The problem is that nobody maintains them. Documentation degrades the moment it is written, and the effort required to keep it current is exactly the kind of work that falls off every priority list.
Enterprise knowledge management software is projected to be worth over $1.2 trillion by the end of the decade according to some market sizing estimates. That number includes everything from content management to collaboration tools, but the core problem it represents is real: companies are sitting on enormous amounts of institutional knowledge that they cannot access, search, or operationalize.
The AI wave has produced a flood of products in this space. Glean, Guru, Slite, Mem, dozens of others. Most of them take the same approach: connect to your existing tools, index everything, and let people search with natural language. That is useful. It is also just a better search bar. The harder problem is not finding information. It is synthesizing it, learning from it, and using it to actually make decisions and ship work.
Competitors like Glean have raised hundreds of millions of dollars and are focused on enterprise search and knowledge retrieval. Guru and Tettra are aimed at team wikis and internal documentation. What I have not seen many companies attempt is the step beyond retrieval: an AI that does not just find your playbooks but learns them, experiments with them, and executes programs using your existing tools with human approval.
That is what Leeroo says it is building.
The Micro: Three PhDs and a Million Hugging Face Downloads
Leeroo (Y Combinator S25) was co-founded by Majid Yazdani, Alireza Mohammadshahi, and Arshad. The research pedigree here is significant. Majid has a PhD in Machine Learning from EPFL, the Swiss Federal Institute of Technology, previously served as a staff scientist at Meta AI and LinkedIn AI, and held a VP role at BYJU’s. Alireza also holds a PhD from EPFL in Computer Science and AI, developed LLM evaluation systems at Meta AI, and built compressed multilingual translation models at Naverlabs. His open-source projects have passed one million downloads on Hugging Face. Arshad has an MS from Mumbai University, built recommendation systems at BYJU’s AI Labs, and worked as a deep learning researcher at IIT Bombay.
This is a research-heavy founding team. Two EPFL PhDs and a deep learning researcher. That is unusual for a product company, and it cuts both ways. The technical depth is genuine. The question is whether they can translate research capability into product-market fit at the speed that matters.
The product description from their YC profile says Leeroo “continuously learns org knowledge and expert playbooks, uses experimentation, and with human approval ships data and AI programs on your existing stack.” That is a dense sentence. Let me unpack it.
The “continuously learns” part means the AI is not just indexing your documents once. It is supposed to be updating its understanding as your organization evolves. New processes, new people, new decisions. The “expert playbooks” framing suggests that you can teach the system how your best people approach specific types of problems, and it will replicate that reasoning.
The “experimentation” piece is the most interesting and the most ambitious. The idea that an AI can run experiments within your organization, testing approaches, measuring outcomes, and iterating, is a step beyond what most knowledge management tools are attempting. It turns the AI from a reference tool into an operational one.
The “human approval” qualifier is critical. This is not fully autonomous. It is proposing actions and waiting for a person to say yes. That is the right design choice for enterprise software that touches real workflows, and it is a sign that the founders understand their buyers.
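The propose-then-approve loop described in the last three paragraphs can be sketched as a simple approval gate: the agent queues proposed actions, and only the ones a human signs off on ever execute. Everything here (the class names, the `decide` callback, the job payloads) is a hypothetical illustration of the general pattern, not Leeroo's actual API, which is not public.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """An action the agent wants to take, pending human sign-off."""
    description: str
    payload: dict
    approved: bool = False

class ApprovalGate:
    """Holds agent-proposed actions; only approved ones are executed."""
    def __init__(self):
        self.queue = []      # actions awaiting review
        self.executed = []   # actions a human approved

    def propose(self, action: ProposedAction):
        self.queue.append(action)

    def review(self, decide):
        """`decide` is the human's callback: ProposedAction -> bool."""
        for action in list(self.queue):
            if decide(action):
                action.approved = True
                self.executed.append(action)
            self.queue.remove(action)  # rejected actions are simply dropped
        return self.executed

gate = ApprovalGate()
gate.propose(ProposedAction("Backfill the leads table", {"job": "backfill_leads"}))
gate.propose(ProposedAction("Drop stale dashboard", {"job": "drop_dashboard"}))

# The human approves only the backfill; the drop is rejected.
done = gate.review(lambda a: a.payload["job"] == "backfill_leads")
print([a.description for a in done])  # only the approved action survives
```

The design choice this illustrates is the one the product description implies: the AI never mutates your stack directly; it emits proposals, and execution is gated on an explicit human decision.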
The website itself is minimal. It appears to be primarily JavaScript-rendered with analytics tracking but limited server-side content. That is common for early-stage enterprise products that are selling through demos and direct relationships rather than self-serve signups. They are hiring for a Research Scientist Intern and a Founding Engineer, which tells you something about where they are in the build.
The Verdict
I think the founding team is genuinely strong for this problem. The combination of EPFL research depth, Meta AI production experience, and Hugging Face open-source credibility is the kind of background you want behind a product that claims to learn organizational knowledge and operationalize it.
The ambition is high. “Organizational Superintelligence” is a tagline that either sounds visionary or overcooked depending on your tolerance for big claims. I lean toward giving them the benefit of the doubt because their backgrounds suggest they understand what that would actually require.
The risk is the gap between the pitch and the product. An AI that learns your playbooks, runs experiments, and ships programs is an extraordinary technical challenge. Every enterprise has different tools, different workflows, different definitions of what “good” looks like. Generalizing across all of that is the kind of problem that PhDs spend careers on.
At 30 days, I would want to see a concrete case study. One company, one workflow, measurable before and after. The product description is compelling in the abstract. It needs to be compelling in the specific.
At 60 days, the competitive pressure from Glean and others will be the question. The enterprise knowledge market has a lot of well-funded players, and Leeroo needs to make the case that “learning and executing” is a different category than “searching and retrieving.”
At 90 days, the hiring pace will tell the story. Two open roles for a product this ambitious suggest they are early. Whether they can build fast enough to stay ahead of the hype in this space is the central question.
The research foundation is there. The ambition is there. Now it is about building something that a company can actually deploy and point to when someone asks “what changed.”