February 2, 2027 edition

Synthetic Sciences Is Building the IDE for AI-Powered Research

The Macro: Researchers Are Drowning in Busy Work

Scientific research has a productivity crisis that has nothing to do with the science itself. Researchers spend enormous amounts of time on tasks that are necessary but not intellectually demanding. Literature reviews that take weeks of reading and synthesizing hundreds of papers. Training ML models, which means setting up cloud GPU instances, managing dependencies, babysitting runs, and wrangling results. Writing papers in LaTeX, which is powerful but punishingly tedious for anything beyond simple formatting. Designing experiments based on prior results, which requires holding an absurd amount of context in your head at once.

The tools researchers use have barely evolved in a decade. Jupyter notebooks are great for exploration but terrible for reproducibility. Overleaf improved collaborative LaTeX editing but did not make LaTeX any less painful. Google Scholar and Semantic Scholar help you find papers but do not help you understand them at scale. The whole workflow is a collection of disconnected tools held together by copy-paste and sheer determination.

Several companies are trying to fix parts of this. Elicit focuses on AI-assisted literature review. Consensus does something similar with a more structured approach. Paperpal helps with academic writing. But none of them are trying to build the full-stack research environment where an AI agent can handle the entire workflow from literature review through experiment design to paper writing.

That is what Synthetic Sciences is building. They describe themselves as the infrastructure for AI co-scientists, and their product is an agent platform that lets researchers delegate complete research workflows to AI.

The Micro: Four Modes of Research Automation

Aayam Bansal and Ishaan Gangwani founded Synthetic Sciences, a Y Combinator-backed company (W25), on a thesis they state simply: capable AI scientists require a human-centric product suite that generates high-quality process data, paired with research infrastructure that turns that data into increasingly autonomous systems.

The platform offers four distinct modes. Research mode for literature reviews and synthesis. Biology mode for computational biology workflows. Flywheel mode for iterative experimentation. Write mode for generating publication-ready LaTeX. You can switch modes mid-session without losing context, which matters because real research is not linear. You start reviewing literature, pivot to running an experiment, come back to refine your hypothesis, then write up results.
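To make the mode-switching idea concrete, here is a minimal sketch of a session whose accumulated context survives mode changes. The mode names come from the article; the `Session` class and its methods are invented for illustration and do not reflect the product's actual API.

```python
# Illustrative-only model of mode switching with shared context.
# Assumption: a session keeps one running context that every mode can read.

class Session:
    MODES = {"research", "biology", "flywheel", "write"}

    def __init__(self):
        self.mode = "research"
        self.context = []            # accumulated findings, kept across switches

    def switch(self, mode: str) -> None:
        if mode not in self.MODES:
            raise ValueError(f"unknown mode: {mode}")
        self.mode = mode             # context is deliberately NOT cleared

    def note(self, finding: str) -> None:
        self.context.append((self.mode, finding))

s = Session()
s.note("three prior papers use dataset X")   # found while reviewing literature
s.switch("flywheel")
s.note("baseline run underperforms paper 2") # found while experimenting
s.switch("write")
print(len(s.context))  # both findings remain available when writing up
```

The design choice this illustrates is the one the article highlights: because real research loops between reviewing, experimenting, and writing, context must be session-scoped rather than mode-scoped.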

The GPU orchestration feature is notable. Researchers constantly struggle with provisioning cloud compute, managing CUDA dependencies, and keeping track of training runs. If Synthetic Sciences can abstract that into “tell the agent to train this model” and have it handle the infrastructure, that is hours of DevOps work eliminated per experiment.
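As a rough sketch of what that abstraction replaces, the toy agent below folds the usual manual steps (provisioning, dependency management, monitoring) behind a single `train` call. The `TrainingAgent` class and every method on it are hypothetical; nothing here is the product's real interface.

```python
# Hypothetical sketch: "tell the agent to train this model" versus the
# DevOps steps it would absorb. All names here are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class TrainingAgent:
    """Toy stand-in for a GPU-orchestrating research agent."""
    log: list = field(default_factory=list)

    def train(self, model: str, dataset: str, gpus: int = 1) -> dict:
        # In a real system these steps would provision cloud GPUs, resolve
        # CUDA/driver versions, launch the run, poll it, and fetch artifacts.
        for step in ("provision", "install-deps", "launch", "monitor", "collect"):
            self.log.append(f"{step}:{model}")
        return {"model": model, "dataset": dataset, "gpus": gpus, "status": "done"}

agent = TrainingAgent()
result = agent.train("resnet50", "imagenet-subset", gpus=4)
print(result["status"])  # -> done
print(len(agent.log))    # -> 5 orchestration steps handled without the user
```

The point of the sketch is the surface area: one call from the researcher, five categories of infrastructure chores behind it.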

Pricing starts at $50/month for individuals, which positions this as a tool researchers can expense without needing institutional approval. That is smart distribution. Academic purchasing processes are legendarily slow. Individual researchers buying their own tools is how most academic software gets adopted initially.

The emphasis on “persistent agent runtime and autonomous long-running workflows” suggests the agents can run experiments for hours or days without human intervention, checking back in when they need guidance or have results. This is different from a chatbot that you interact with in real time. It is closer to having a research assistant who goes away, does the work, and comes back with results.
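A minimal sketch of that pattern, under the assumption that the runtime works through tasks autonomously and blocks only at explicit decision points. The function and callback names are made up here; the product's real runtime is not described in enough detail to reproduce.

```python
# Illustrative agent loop: run unattended, check in with the human only
# when a task is flagged as needing a decision. All names are hypothetical.

from typing import Callable

def run_workflow(tasks, needs_human: Callable[[str], bool],
                 ask_human: Callable[[str], str]) -> list:
    """Execute tasks in order; pause for guidance only when flagged."""
    results = []
    for task in tasks:
        if needs_human(task):
            guidance = ask_human(task)               # blocking check-in
            results.append(f"{task} (guided: {guidance})")
        else:
            results.append(f"{task} (autonomous)")   # runs unattended
    return results

out = run_workflow(
    ["lit-review", "train-baseline", "pick-hypothesis", "write-up"],
    needs_human=lambda t: t == "pick-hypothesis",
    ask_human=lambda t: "option-b",
)
print(out[2])  # -> pick-hypothesis (guided: option-b)
```

The contrast with a chatbot is visible in the control flow: the human is a callback invoked at rare decision points, not the driver of every step.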

The integration with GitHub and Hugging Face indicates they are building for the computational research community specifically, people who work with code, models, and data. This is not a tool for wet lab biologists or field ecologists. It is for researchers whose work is primarily computational, which is a large and growing segment.

The Verdict

The “Claude Code for Science” positioning in their YC description is bold but directionally right. If they can make AI research agents as productive for scientists as coding agents are becoming for developers, they are building something genuinely important.

At 30 days: how many researchers are actively using the platform beyond demos? $50/month is cheap enough that adoption should be fast if the product delivers.

At 60 days: has anyone published a paper where Synthetic Sciences was a meaningful part of the workflow? Academic credibility matters enormously in this market. One good citation is worth ten marketing pages.

At 90 days: what does the retention curve look like? Research tools have a habit of being exciting for a week and then abandoned. If users come back month after month, the platform is providing durable value.

I think the full-stack approach is the right bet. Solving one piece of the research workflow (just literature review, just writing) creates a nice tool. Solving the whole workflow creates infrastructure. Synthetic Sciences is betting on infrastructure, and if they can deliver, the market is massive.