March 25, 2026 edition

labric

The data layer for scientific research

Labric Is Building the Data Plumbing That Science Desperately Needs

The Macro: Science Has a Data Problem Nobody Talks About

There is a dirty secret in scientific research. For all the talk about AI accelerating discovery, most labs are still drowning in data they can barely access. Instruments spit out proprietary file formats. Grad students paste results into spreadsheets by hand. Databases exist in silos that were set up by someone who graduated three years ago and left no documentation. The result is that an enormous amount of experimental data effectively dies the moment it is generated.

This is not a new complaint. Researchers have been grumbling about data management since before “data science” was a job title. What is new is that AI models are now good enough to do genuinely useful things with experimental data, if they can actually get to it. That “if” is doing a lot of heavy lifting.

The existing solutions are a patchwork. Benchling has carved out a strong position in biotech, particularly for sequence data and molecular biology workflows. Dotmatics and Genedata each serve pharma. Electronic lab notebooks like LabArchives handle documentation but not data infrastructure. None of them are really solving the core problem, which is that the data coming off instruments needs to be cleaned, structured, and connected before any analysis tool can touch it.

The market is real. Research data management is growing fast as institutions try to comply with open-data mandates and as AI tools demand structured inputs. But the gap between “we need better data infrastructure” and “someone actually built it” remains wide.

The Micro: Siblings Who Understand the Plumbing

Labric is a two-person company founded by Caitlin Hogan and Connor Hogan. Yes, they are related, and no, the sibling-founder dynamic is not as common as the college-roommate variety. They are based in San Mateo, California, and came through Y Combinator’s Spring 2025 batch.

The product has four layers, and they are exactly what you would expect if you sat down and mapped the problem correctly. First, sync: connect to your data whether it lives in an instrument, a spreadsheet, or a separate database. Second, run: execute workflows and queries triggered by events, so you are not manually babysitting pipelines. Third, structure: organize everything into custom lab databases automatically. Fourth, analyze: use AI agents to query the data and build visualizations.
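Labric has not published an API, so to make those four layers concrete, here is a minimal sketch of what a pipeline of that shape could look like. Everything in it is hypothetical: the folder-watching sync, the CSV column names, and the SQLite table are stand-ins for whatever connectors, schemas, and storage the real product uses.

```python
# Hypothetical sync -> run -> structure -> analyze pipeline.
# All names and the CSV layout are illustrative, not Labric's actual API.
import csv
import sqlite3
import time
from pathlib import Path

EXPORT_DIR = Path("instrument_exports")   # where an instrument drops CSV exports
DB_PATH = Path("lab.db")                  # the structured lab database

def sync_new_exports(seen: set[Path]) -> list[Path]:
    """Sync layer: detect export files we have not ingested yet."""
    current = set(EXPORT_DIR.glob("*.csv"))
    new_files = sorted(current - seen)
    seen.update(new_files)
    return new_files

def structure(conn: sqlite3.Connection, export: Path) -> None:
    """Structure layer: parse one export into a typed table."""
    with export.open(newline="") as f:
        rows = [(r["sample_id"], float(r["od600"]), export.name)
                for r in csv.DictReader(f)]
    conn.executemany(
        "INSERT INTO readings (sample_id, od600, source_file) VALUES (?, ?, ?)",
        rows,
    )
    conn.commit()

def analyze(conn: sqlite3.Connection) -> None:
    """Analyze layer: a query an AI agent (or a human) might run downstream."""
    for sample_id, mean_od in conn.execute(
        "SELECT sample_id, AVG(od600) FROM readings GROUP BY sample_id"
    ):
        print(f"{sample_id}: mean OD600 = {mean_od:.3f}")

def main() -> None:
    EXPORT_DIR.mkdir(exist_ok=True)
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS readings "
        "(sample_id TEXT, od600 REAL, source_file TEXT)"
    )
    seen: set[Path] = set()
    while True:
        # Run layer: each new file is an event that triggers the workflow.
        new = sync_new_exports(seen)
        for export in new:
            structure(conn, export)
        if new:
            analyze(conn)
        time.sleep(10)

if __name__ == "__main__":
    main()
```

A real version would presumably replace the polling loop with managed instrument connectors and swap the hardcoded query for agent-generated ones, but the shape of the work, ingest, trigger, normalize, query, is the same.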

The website is live and functional, with a clean design that signals they are targeting both enterprise and academic buyers. There are “Start a pilot” and “Get started” buttons, and an embedded video walkthrough. No pricing is listed publicly, which suggests they are in early sales mode and probably pricing per institution or lab group.

What I find interesting is the positioning. They are not building another lab notebook. They are not building an analytics dashboard. They are building the layer underneath both of those things. In software terms, they are the ETL pipeline for science. That is deeply unsexy work, and it is also exactly the kind of infrastructure that becomes indispensable once it is adopted.

The two-person team size means they are early. But data infrastructure companies tend to be small-team affairs in the beginning because the work is technical and the sales cycle is long.

The Verdict

I think Labric is pointed at the right problem at the right time. The convergence of AI capabilities and research data chaos creates a genuine opening for a company that can handle the unglamorous work of data plumbing.

The risk is adoption speed. Researchers are notoriously conservative about changing their workflows, and convincing a lab to let a startup touch their instrument data requires a level of trust that takes time to build. Benchling took years to reach critical mass in biotech, and they had the advantage of targeting a workflow that researchers already recognized as broken.

Thirty days out, I would want to see how many pilot programs they have running. Sixty days, whether those pilots are converting to paid contracts. Ninety days, the question is whether they have found a repeatable sales motion or whether every deal is a custom integration project. The infrastructure play in scientific data is a big opportunity, but the distance between “good idea” and “widely adopted tool” in research settings is measured in years, not months.