The Macro: Science Has a Participation Problem
The world has millions of smart, curious people who will never contribute to scientific research. The barrier is not intelligence. It is preparation. Becoming a researcher typically requires 12 or more years of education, deep specialization in a narrow field, and access to institutional resources. The knowledge required to even understand the frontier of a discipline takes years to acquire, and by the time you get there, you are trained to see problems through the lens of your specific field.
This creates two problems. First, the pool of active researchers is tiny relative to the number of people who could contribute meaningfully if they had the right tools. Second, the most important scientific breakthroughs often come from cross-disciplinary connections that no single specialist would make. When problems are only examined by people with the same training and the same assumptions, solutions tend to be incremental.
AI changes this equation. AI can synthesize thousands of papers in minutes, surface connections between fields, generate hypotheses, and verify work. If these capabilities are made accessible through a platform, people who are not trained researchers could participate in real scientific discovery.
ScienceSwarm, a product of Gikl Inc. backed by Y Combinator, is building that platform. Its thesis is that AI’s biggest impact will not be personal productivity but accelerating scientific discovery by making research accessible to anyone.
The Micro: A Platform Where AI and Humans Collaborate on Unsolved Problems
ScienceSwarm operates across six research domains: mathematics, biology, physics, computer science, engineering, and chemistry. Users can browse open problems, submit approaches and hypotheses, and work collaboratively with AI agents and other researchers.
The AI handles the preparation phase. Literature synthesis that would take a human researcher weeks of reading and connecting thousands of papers happens in minutes. The AI identifies research gaps and proposes novel angles. Verification runs in parallel and in real time, with both AI and human review processes checking the quality of submitted work.
The founding team has extraordinary credentials for this product. Peter Vajda was Director of Media Generation at Meta for 11 years, managing generative AI foundation model research, and previously served as Assistant Professor at Stanford. Seiji Yamamoto worked on the Core Llama team at Meta Superintelligence Labs and holds a PhD in Physics. Both come from the intersection of AI research and academic science.
The platform supports both established researchers who want AI augmentation and newcomers who want to contribute to real scientific problems. ScienceSwarm claims to compress the typical 12-plus years of training into days or weeks before a newcomer can make a meaningful contribution. That is an extraordinary claim that will need validation, but even a partial reduction would expand the research workforce significantly.
The competitive space includes tools like Semantic Scholar and Connected Papers for literature review, Elicit for research assistance, and platforms like ResearchGate for academic collaboration. But none of them combine AI-powered research acceleration with a collaborative problem-solving platform focused on open scientific questions.
The central risk is quality control. If anyone can submit hypotheses and approaches, the signal-to-noise ratio could become overwhelming. The verification system needs to be rigorous enough to filter out bad contributions without discouraging legitimate newcomers.
The Verdict
ScienceSwarm is one of the most ambitious products I have encountered. Democratizing scientific research is a massive goal, and the execution challenges are proportionally large.
At 30 days: how many active problems are being worked on, and how many contributors are participating? The density of activity on each problem matters more than the total number of problems or users.
At 60 days: have any contributions from non-traditional researchers led to genuine insights or progress on open problems? Even one validated contribution from a non-expert would prove the concept.
At 90 days: are established researchers using ScienceSwarm to accelerate their own work? Academic adoption would validate the platform’s rigor and attract more serious participants.
I want this to work. The idea that anyone could contribute to solving real scientific problems, supported by AI that handles the preparation and context-building, is genuinely exciting. The founding team has the research credentials to build something legitimate. The question is whether the platform can maintain quality while remaining accessible. That is the hard part.