June 27, 2026 edition

Magnetic

AI Tax Preparer for CPA firms

Magnetic Reads Your Client's Handwritten Notes So You Don't Have To

AI · Tax · Accounting · Fintech

The Macro: Tax Prep Software Is Frozen in 2005

I want to describe a workflow that happens in every CPA firm in America between January and April. A client drops off a folder. Inside the folder is a W-2, three 1099s, a handwritten note that says “I think I donated $400 to my church,” a spreadsheet with columns that do not match any standard format, and a blurry photo of a receipt. A staff accountant opens UltraTax or Drake or Lacerte, and then spends 45 minutes manually entering every number from every document into the correct field.

That is it. That is the bottleneck. Not analysis, not strategy, not tax planning. Data entry. The most expensive professionals in the firm spend a shocking percentage of their time doing work that should not require a human at all.

The US tax preparation market generates over $14 billion annually. Intuit dominates the consumer side. Thomson Reuters (UltraTax), Drake, and Wolters Kluwer (CCH Axcess) dominate the professional side. These are legacy platforms built in the 1990s and early 2000s. They are powerful, deeply integrated into accounting workflows, and almost universally hated by the people who use them. The interfaces are dense. The data entry is manual. The learning curve is steep.

AI should be the obvious fix here. Scan a document, extract the data, put it in the right field. OCR technology has existed for decades. But tax documents are a special kind of messy. K-1 forms from different brokerages use different layouts. Multi-state returns multiply the complexity. Handwritten notes from clients are exactly as legible as you would expect. Generic OCR chokes on this stuff because the context matters as much as the characters.

A handful of companies are trying to solve this. Fieldguide focuses on audit workflows. SurePrep (now part of Thomson Reuters) does document automation but is tightly coupled to their own ecosystem. Botkeeper handles bookkeeping automation. But the specific problem of scanning arbitrary client documents and entering data into legacy tax software with high accuracy is still largely unsolved.

The Micro: 90% Accuracy Where Competitors Hit 50%

Thomas Shelley and Patrick Fay started Magnetic in San Francisco as part of Y Combinator’s Summer 2025 batch. The team is two people. Shelley previously led product at Keeper, a Y Combinator W19 company, where he built tax automation and OCR systems. Fay was the first engineer at FunCraft, a Zynga studio, and shipped multiple games with millions of downloads.

Shelley’s background at Keeper is directly relevant. He has already built a version of this problem before and presumably learned where the hard parts are. Tax OCR is not just a computer vision challenge. It is a reasoning challenge. The system needs to understand that a number on line 7 of a K-1 means something different from the same number on line 12, and that each one maps to a different field in the tax software depending on whether the client is a limited partner or a general partner.
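To make the "reasoning, not just recognition" point concrete, here is a minimal sketch of context-dependent routing. The routing table, field names, and function are all illustrative assumptions, not Magnetic's actual schema; the only grounded facts are the K-1 conventions that line 1 ordinary income is generally passive for a limited partner and nonpassive for a general partner.

```python
# Illustrative sketch: the same extracted number lands in different
# destination fields depending on document context. All names below
# are hypothetical, not Magnetic's real schema.

def route_k1_value(line_number: int, partner_type: str) -> str:
    """Map an extracted K-1 value to a destination field based on
    which line it came from and the partner's role."""
    routing = {
        # (K-1 line, partner type) -> destination field (illustrative)
        (1, "limited"): "passive_ordinary_income",
        (1, "general"): "nonpassive_ordinary_income",
        (14, "general"): "self_employment_earnings",
    }
    dest = routing.get((line_number, partner_type))
    # Unknown combination: flag for human review instead of guessing.
    return dest if dest is not None else "needs_review"

# Identical dollar amount, different destinations depending on context:
print(route_k1_value(1, "limited"))   # passive_ordinary_income
print(route_k1_value(1, "general"))   # nonpassive_ordinary_income
print(route_k1_value(7, "limited"))   # needs_review
```

Generic OCR stops at "the characters on line 1 are 12,500"; the hard part is everything after that lookup.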

Magnetic claims 90% or better field-level accuracy, compared to roughly 50% from competitors. If that number is real, it is a meaningful gap. At 50% accuracy, you are checking every field anyway, so the automation saves you almost nothing. At 90%, you are spot-checking rather than re-entering, which cuts the time from 45 minutes to something much more manageable.
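The back-of-envelope math shows why the gap is nonlinear in practice. Assuming a hypothetical 200-field return (my number for illustration, not one from Magnetic), field-level accuracy translates into expected corrections per return like this:

```python
# Expected wrong fields per return at a given field-level accuracy.
# The 200-field return size is an assumed figure for illustration.

FIELDS_PER_RETURN = 200  # hypothetical complex return

def expected_errors(accuracy: float, fields: int = FIELDS_PER_RETURN) -> float:
    """Expected number of incorrectly extracted fields."""
    return round(fields * (1 - accuracy), 1)

print(expected_errors(0.50))  # 100.0 -> you are re-entering the return
print(expected_errors(0.90))  # 20.0  -> spot-check and correct
```

A hundred wrong fields means verifying everything, which is the original job. Twenty wrong fields is a review pass, which is a different, cheaper job.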

The product handles handwritten notes, which is bold. Handwriting recognition has improved dramatically with modern vision models, but handwritten tax notes from clients are a special category of chaos. “I think I donated $400” is easy. “Business miles: ~12K??” scrawled in the margin of a gas station receipt is harder.

Magnetic works with UltraTax and Drake, which between them cover a huge portion of the professional tax prep market. The integration approach is interesting because these platforms were not designed to accept AI-driven data input. Magnetic appears to be interacting with them the way a human would, entering data into the fields rather than using an API, because there is no API. That is harder to build but easier to sell, because the CPA firm does not need to change its existing software stack.

The AI agent can also reason over tax code for complex scenarios. Multi-state returns, unusual deduction situations, compliance requirements that vary by jurisdiction. If the agent is actually referencing tax code during data entry and flagging potential issues, that moves it from a data entry tool to something closer to a junior tax preparer.

The Verdict

Magnetic is going after one of the most boring, most valuable problems in professional services. Tax data entry is tedious, expensive, error-prone, and performed millions of times every year by people who are dramatically overqualified to do it. The market is large, the incumbents are slow, and the founding team has direct prior experience building exactly this kind of system.

At 30 days, I want to see real accuracy numbers from CPA firms using it in production, not demo environments. Tax season is a pressure cooker. A tool that works 90% of the time in testing but drops to 70% when confronted with a real client’s disorganized shoebox of receipts is not useful enough to change behavior. At 60 days, I want to know how fast firms can onboard. If setup takes a week and training takes another week, you have missed half of tax season. At 90 days, the question is whether firms are renewing or churning after April 15. Tax prep tools have a built-in seasonality problem. Magnetic needs to demonstrate value year-round or accept that it is a seasonal product.

The 90% accuracy claim is the whole ballgame. If it holds, Magnetic becomes essential software for every CPA firm running UltraTax or Drake. If it does not, it is another OCR tool that looked good in the demo. I lean toward believing it because Shelley built these systems before at Keeper, and second-time founders tend to know where the bodies are buried.