April 23, 2026 edition


CareSwift Cuts Ambulance Report Time by 80%, and EMTs Are Begging for It

Healthcare AI · Health Tech · EMS

The Macro: EMS Documentation Is Broken in Ways Most People Never Think About

When you call 911 and an ambulance shows up, you probably assume the crew’s job is done once they deliver you to the hospital. It is not. After every single call, EMTs and paramedics have to complete a Patient Care Report (PCR). This document captures everything: patient demographics, chief complaint, vital signs, interventions performed, medications administered, transport decisions, and a narrative section that reads like a medical chart written by someone who just spent 45 minutes doing CPR in the back of a moving vehicle.

These reports are not optional. They are legal documents. They are required for insurance reimbursement. They are used in quality assurance reviews. They are subpoenaed in lawsuits. And they take forever to complete.

The average PCR takes 20 to 45 minutes depending on call complexity, the documentation system being used, and how tired the crew is. In busy urban EMS systems like New York City, where crews might run 8 to 12 calls in a 12-hour shift, documentation can consume hours of time that could be spent responding to the next emergency.

The existing software is awful. The two dominant platforms in the U.S. are ImageTrend and ESO (recently merged). Both are functional but neither is pleasant to use. They are checkbox-heavy, form-driven applications designed to capture data for billing and compliance rather than to make the crew’s life easier. The narrative section, where the EMT describes what happened in free text, is the most time-consuming part and the most important for insurance reimbursement. A poorly written narrative can result in a denied claim, which costs the agency money.

The compliance angle makes this worse. Each state has different documentation requirements. Medicare and Medicaid have their own standards. Private insurers have their own. An EMT who forgets to document medical necessity for transport, or who omits a required vital sign reassessment, can sink a claim worth thousands of dollars.

AI scribes have exploded in healthcare over the past two years. Abridge, Nabla, DeepScribe, and others are targeting physician documentation in clinics and hospitals. But EMS is a completely different environment. The documentation happens in the back of an ambulance, at the hospital, or sometimes hours after the call. The clinical context is acute and chaotic. The existing tools were not built for this workflow.

The Micro: A Working EMT Built the Tool He Needed

CareSwift was founded by Brian Weigand and Jonathan Zero. Brian is the CEO and, critically, he is a working EMT with four years of experience in New York City’s 911 system. He had previously built ambulance reporting software before starting CareSwift, so this is not his first attempt at the problem. Jonathan is the CTO, with a background in scaling secure applications.

They came through Y Combinator’s Summer 2025 batch with Tyler Bosmeny as their primary partner. The team is two people.

The product is an AI assistant that integrates into the ambulance reporting workflow. Instead of staring at a blank narrative box after a call, crews get guided through documentation with the AI surfacing relevant prompts, auto-filling fields where possible, and flagging errors in real time. The company claims reports can be completed in under three minutes, representing an 80% reduction in documentation time.

The real-time error detection is where the product gets interesting. If an EMT documents that they administered a medication but forgot to record the reassessment vitals that Medicare requires, CareSwift catches it before the report is submitted. If the narrative is missing the medical necessity language that ensures the transport claim gets paid, the system flags it. These are the kinds of errors that currently get caught days or weeks later during billing review, when it is too late to fix them from memory.
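The pre-submission checks described above amount to rule-based validation of a structured report. Here is a minimal sketch of how such checks might work; the field names, rules, and necessity phrases are illustrative assumptions, not CareSwift's actual schema or logic.

```python
# Hypothetical sketch of rule-based PCR validation before submission.
# Field names and rules are assumptions for illustration only.

REQUIRED_NECESSITY_TERMS = ("medical necessity", "unable to ambulate", "required stretcher")

def validate_pcr(pcr: dict) -> list[str]:
    """Return human-readable issues that should be fixed before the report is filed."""
    issues = []

    # Rule 1: each documented medication administration should have a
    # corresponding set of reassessment vitals (a common billing requirement).
    meds = pcr.get("medications", [])
    reassessments = pcr.get("reassessment_vitals", [])
    if meds and len(reassessments) < len(meds):
        issues.append(
            f"{len(meds)} medication(s) documented but only "
            f"{len(reassessments)} reassessment vital set(s) recorded."
        )

    # Rule 2: the narrative should contain medical-necessity language
    # so the transport claim is payable.
    narrative = pcr.get("narrative", "").lower()
    if not any(term in narrative for term in REQUIRED_NECESSITY_TERMS):
        issues.append("Narrative is missing medical-necessity language for transport.")

    return issues

report = {
    "medications": [{"name": "aspirin", "dose_mg": 324}],
    "reassessment_vitals": [],
    "narrative": "Pt c/o chest pain, transported to ED.",
}
for issue in validate_pcr(report):
    print("FLAG:", issue)
```

The point of running rules like these at write time rather than at billing review is that the crew can still fix an omission from memory, which is exactly the gap the article describes.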

The founder-market fit here is unusually strong. Brian is not an outsider looking at EMS and thinking “this seems inefficient.” He is a person who runs calls in New York City and knows exactly how painful the paperwork is. He knows which parts of the PCR are genuinely important and which are bureaucratic checkboxes. He knows what the narrative needs to say to get the claim paid. That knowledge is embedded in the product.

The competitive landscape in EMS-specific AI is thin. ImageTrend and ESO have not shipped meaningful AI features. Pulsara does communication between EMS and hospitals but does not touch documentation. There are a few small startups attempting EMS AI, but none with the combination of YC backing and a founder who is actively working in the field.

The Verdict

I think CareSwift is solving a problem that is simultaneously urgent, painful, and underserved. EMS documentation is one of those things that everyone in the industry hates, everyone agrees is broken, and nobody has successfully fixed with technology. The incumbents are too slow, the AI healthcare companies are focused on higher-margin physician workflows, and the people who actually understand EMS are usually too busy running calls to build software.

Brian’s background as a working EMT gives CareSwift an authenticity advantage that is hard to replicate. When he walks into an EMS agency and demonstrates the product, he is not a tech bro pitching to first responders. He is one of them. That matters enormously in a market that is skeptical of outsiders and resistant to change.

The business model question is interesting. EMS agencies operate on thin margins. Municipal services are publicly funded. Private ambulance companies compete on contract pricing. The willingness to pay for documentation software depends heavily on whether CareSwift can demonstrate reduced claim denials and increased reimbursement. If the AI catches errors that would have cost the agency $500 per denied claim, the ROI calculation writes itself.
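To make the "ROI writes itself" claim concrete, here is a back-of-envelope sketch using the article's $500-per-denied-claim figure. The call volume and denial rates are purely assumed for illustration.

```python
# Back-of-envelope ROI sketch. Only the $500 cost per denied claim
# comes from the article; volume and denial rates are assumptions.

calls_per_month = 1_000          # assumed agency call volume
baseline_denial_rate = 0.08      # assumed pre-adoption denial rate
improved_denial_rate = 0.03      # assumed rate with real-time error flagging
cost_per_denied_claim = 500      # figure cited in the article

denials_avoided = calls_per_month * (baseline_denial_rate - improved_denial_rate)
monthly_savings = denials_avoided * cost_per_denied_claim

print(f"Denials avoided per month: {denials_avoided:.0f}")
print(f"Recovered revenue per month: ${monthly_savings:,.0f}")
```

Under these assumed numbers, avoiding 50 denials a month recovers roughly $25,000, which an agency can weigh directly against the software's subscription price.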

Thirty days, I want to see how many agencies are in pilot programs and what the crew adoption looks like. Technology that management buys but crews refuse to use is dead on arrival. Sixty days, the question is whether the three-minute completion time holds across different call types, from basic transports to complex cardiac arrests with multiple interventions. Ninety days, I want to see claim denial rates for agencies using CareSwift versus their pre-CareSwift baseline. If the data shows fewer denials, the sales conversation becomes trivially easy.