The Macro: Mass Tort Litigation Runs on Manual Document Review
I want to explain why class action law is one of the best verticals for AI, and why almost nobody is building for it. A typical mass tort case, the kind where thousands of people sue a pharmaceutical company or a medical device manufacturer, starts with the same painful process. Lawyers need to determine which plaintiffs actually qualify. That means reading medical records, prescription histories, surgical reports, and insurance documents for every single claimant.
A mid-size mass tort might have 5,000 plaintiffs. Each plaintiff might have 200 to 500 pages of records. That is over a million pages of documents that need to be read, categorized, and cross-referenced against specific legal criteria. Does this patient have proof they used the drug during the relevant time period? Do their medical records show the specific injury the lawsuit covers? Is there a pre-existing condition that weakens the claim?
Law firms throw paralegals at this problem. Dozens of them. It takes months. It costs millions. And the accuracy is inconsistent because human reviewers get tired and miss things on page 347 of a medical file. The firms that handle the biggest mass torts, Napoli Shkolnik, Motley Rice, Weitz & Luxenberg, all deal with this bottleneck. It is the unglamorous core of their business.
The legal AI market has attracted a lot of attention, but most of it focuses on the wrong parts of the workflow. Harvey targets corporate law. EvenUp focuses on personal injury demand letters. Casetext, before it was acquired, did legal research. Clio handles practice management. Nobody is focused specifically on the mass tort document review pipeline, which is where the largest volume of documents meets the most repetitive analysis.
The reason this vertical is so good for AI is that the work is structured and high-volume. You are not asking the model to argue a case or predict a verdict. You are asking it to read a document, extract specific data points, and match them against defined criteria. That is exactly the kind of task where AI outperforms humans on speed and cost by orders of magnitude.
The Micro: Two Brothers, Two of the Biggest Law Firms
Kalinda processes medical, pharmaceutical, and product records for class action law firms. It extracts proof of use, proof of injury, and other case-qualifying data points from documents, then aggregates and correlates the results against lawsuit criteria. The system processes thousands of pages in parallel, turning months of paralegal work into hours.
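As a toy illustration of that parallelism (purely illustrative; `extract_data_points` is a hypothetical stand-in, not Kalinda's actual pipeline), fanning page-level extraction out across workers is what turns a serial, months-long read into hours:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_data_points(page_text: str) -> dict:
    # Hypothetical stand-in: in practice this step would run an
    # OCR + extraction model over a single page of records.
    return {"chars": len(page_text)}

pages = [f"page {i} text" for i in range(1000)]

# Process pages concurrently instead of one reviewer reading in order.
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(extract_data_points, pages))

print(len(results))  # 1000 pages processed
```

The key property is that page-level extraction is embarrassingly parallel: no page depends on another, so throughput scales with worker count rather than reviewer stamina.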
Sayan Bhatia is CEO and Sohil Bhatia is CTO. They are brothers, based in San Francisco, part of Y Combinator’s Summer 2025 batch with a two-person team. Brother co-founder pairs tend to either work exceptionally well or blow up spectacularly, and in legal tech the track record skews positive because trust is high and communication overhead is low.
The numbers are concrete. Kalinda has processed over 600,000 pages of records to date. They have live deployments with two of the largest plaintiff law firms in the United States. That is not a pilot or a proof of concept. That is production usage with firms that handle billions of dollars in litigation.
The product does something specific and does it well. You give it a stack of medical records and a set of qualifying criteria. It reads every page, identifies relevant data points, and produces a structured report showing which claimants qualify and why. The “why” part matters because attorneys need to cite specific evidence when presenting cases. A summary that says “this patient qualifies” is useless without the page numbers, dates, and medical codes that support the conclusion.
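To make the shape of that output concrete, here is a minimal sketch (all names, criteria, and example values are hypothetical, not Kalinda's actual schema) of a qualification report that ties each conclusion back to citable evidence:

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """One citable data point pulled from the records."""
    page: int     # page number in the source document
    date: str     # date of the record entry, e.g. "2020-06-02"
    detail: str   # e.g. a medical code or prescription entry

@dataclass
class CriterionResult:
    """Whether one qualifying criterion was met, and why."""
    criterion: str
    met: bool
    evidence: list[Evidence] = field(default_factory=list)

def qualifies(results: list[CriterionResult]) -> bool:
    """A claimant qualifies only if every criterion is met."""
    return all(r.met for r in results)

# Hypothetical report: proof of use and proof of injury, each cited.
report = [
    CriterionResult(
        criterion="proof of use during relevant period",
        met=True,
        evidence=[Evidence(page=112, date="2020-06-02", detail="Rx refill, drug X")],
    ),
    CriterionResult(
        criterion="diagnosis of covered injury",
        met=True,
        evidence=[Evidence(page=347, date="2021-01-18", detail="ICD-10 code N18.4")],
    ),
]
print(qualifies(report))  # True: every criterion is met and cited
```

The point of the structure is the `evidence` field: a bare boolean is useless to an attorney, but a boolean backed by page numbers, dates, and codes is something that can be presented in court.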
What I find compelling about Kalinda is the narrowness of the focus. They are not trying to be a general-purpose legal AI. They are not building a chatbot that answers legal questions. They are solving one specific, expensive, high-volume problem for one specific type of law firm. That kind of focus tends to produce products that actually work, rather than products that demo well and disappoint in production.
The Verdict
Kalinda has found a vertical where AI creates genuine, measurable ROI. If a law firm spends $2 million and six months on document review for a mass tort, and Kalinda can do the same work in a week at a fraction of the cost, the sales conversation is short. The value proposition does not require any imagination.
The risk is the sales cycle. Enterprise sales to law firms are notoriously slow. Lawyers are conservative buyers. They need to trust that the AI is not going to miss a qualifying plaintiff, because missed plaintiffs mean lost revenue. The fact that Kalinda already has two major firms in production suggests they have cleared the trust hurdle with at least some buyers.
In thirty days, I want to know whether those two firm deployments are expanding to additional cases or staying limited to a single matter. At sixty days, the question is whether they can close a third major firm and prove the sales process is repeatable. At ninety days, I want to see error rate data. How often does Kalinda miss something a human reviewer would catch? If that number is below human error rates, the product sells itself. If it is above, they have a quality problem that no amount of sales effort can overcome. The fundamentals here are strong. Legal tech is littered with companies that built impressive demos and could not close deals. Kalinda already has deals closed. That is the difference.