July 8, 2026 edition

Frizzle Grades Handwritten Math in Minutes, and Teachers Are Paying Attention

The Macro: Grading Is the Part of Teaching Nobody Talks About

I have watched dozens of AI-in-education startups come and go. Most of them target the flashy stuff. Personalized tutoring. Adaptive learning platforms. AI teaching assistants that generate lesson plans. These are fine ideas. They are also crowded spaces where the incumbents are well-funded and deeply entrenched. Khan Academy has Khanmigo. Duolingo has its AI features. Chegg pivoted hard into AI tutoring. Century Tech and Squirrel AI have been doing adaptive learning for years.

Meanwhile, the most tedious, time-consuming part of a teacher’s job gets almost no attention from the startup world. Grading.

A middle school math teacher with five classes of 30 students generates 150 assignments per homework cycle. Each assignment takes two to four minutes to grade properly, which means handwritten feedback, checking work shown, identifying where the student went wrong, and assigning a score. That is five to ten hours of grading per week. For one subject. The math is brutal and it has been brutal for decades.
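The back-of-envelope arithmetic above checks out in a few lines (the numbers are from this article; nothing here is Frizzle-specific):

```python
# Weekly grading load for one subject, using the article's numbers.
classes = 5
students_per_class = 30
assignments = classes * students_per_class      # 150 per homework cycle

minutes_low, minutes_high = 2, 4                # minutes to grade one assignment
hours_low = assignments * minutes_low / 60
hours_high = assignments * minutes_high / 60

print(f"{assignments} assignments -> {hours_low:.0f} to {hours_high:.0f} hours per week")
```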

The reason AI has not cracked grading is that most student work, especially in math, is handwritten. Typed text is trivial for language models. Handwritten equations with crossed-out steps, arrows, and margin notes are genuinely hard. Optical character recognition for mathematical notation is a different animal than reading a typed paragraph. You need a system that understands both the visual layout of handwritten work and the mathematical reasoning behind it.

This is a large market that is mostly underserved. Gradescope, which Turnitin acquired, handles some automated grading but focuses primarily on higher education and typed or scanned bubble-sheet formats. Edia does AI tutoring for math. Neither is solving the core problem of reading a seventh grader’s pencil-on-paper work and telling the teacher what the kid got wrong and why.

The Micro: A Coinbase PM and a Microsoft ML Engineer Walk Into a Classroom

Frizzle uses AI to grade handwritten math assignments. Teachers upload photos or scans of student work, and the system reads the handwriting, evaluates each step of the solution, and returns grades along with detailed feedback. It processes entire class sets simultaneously, turning hours of grading into minutes.
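To make that workflow concrete, here is a minimal sketch of what a batch-grading pipeline of this shape could look like. Every name below is hypothetical, and the recognition and checking stages are stand-in stubs; this is not Frizzle's implementation, just the general pattern of "upload a class set, evaluate each step, return per-student grades."

```python
from dataclasses import dataclass

# Hypothetical sketch of a batch-grading flow; no names here come from Frizzle.

@dataclass
class StepResult:
    text: str      # a recognized solution step
    correct: bool

@dataclass
class GradedAssignment:
    student: str
    score: float
    steps: list

def recognize_steps(page: str) -> list:
    """Stand-in for the handwriting-recognition stage: here, just split lines."""
    return [line.strip() for line in page.splitlines() if line.strip()]

def check_step(step: str, answer_key: set) -> StepResult:
    """Stand-in for step evaluation: mark a step correct if the key accepts it."""
    return StepResult(step, step in answer_key)

def grade_class_set(pages: dict, answer_key: set) -> list:
    """Grade every student's page in one pass, as a batch upload would."""
    graded = []
    for student, page in pages.items():
        steps = [check_step(s, answer_key) for s in recognize_steps(page)]
        score = sum(r.correct for r in steps) / max(len(steps), 1)
        graded.append(GradedAssignment(student, score, steps))
    return graded

# Toy class set: one complete solution, one with a wrong final step.
pages = {
    "ana": "x^2 - 5x + 6 = 0\n(x - 2)(x - 3) = 0\nx = 2 or x = 3",
    "ben": "x^2 - 5x + 6 = 0\nx = 4",
}
key = {"x^2 - 5x + 6 = 0", "(x - 2)(x - 3) = 0", "x = 2 or x = 3"}
for g in grade_class_set(pages, key):
    print(g.student, round(g.score, 2))
```

The hard part, of course, is the two stubs: real handwriting recognition and real step evaluation are exactly where the product lives or dies.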

Abhay Gupta is CEO and Shyam Sai is CTO. They are a two-person team out of San Francisco, part of Y Combinator’s Summer 2025 batch. Abhay comes from product management at Coinbase, where he drove $50 million in revenue, and previously worked at Tesla. Shyam has a Carnegie Mellon AI and CS background, spent time as an ML engineer at Microsoft, and holds a patent related to large language models. He also co-founded Midwest Math Circle, which tells me the education angle is not accidental. These are not education outsiders guessing at what teachers need. At least one of them has been in the room.

The product does a few things that matter. First, it accepts multiple correct approaches to the same problem. Math is not always one-right-answer work. A student who solves a quadratic by factoring and a student who uses the quadratic formula are both right, and the system needs to recognize that. Second, it generates student-facing feedback, not just a score. A red X on a wrong answer tells a kid nothing. A step-by-step explanation of where they went off track is what actually drives learning.
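A concrete instance of the multiple-paths point: both routes below reach the same roots, and an automated grader has to accept either one as fully correct.

```latex
% Factoring:
\[
x^2 - 5x + 6 = (x - 2)(x - 3) = 0 \;\Rightarrow\; x = 2 \text{ or } x = 3
\]
% Quadratic formula:
\[
x = \frac{5 \pm \sqrt{(-5)^2 - 4 \cdot 1 \cdot 6}}{2} = \frac{5 \pm 1}{2} \;\Rightarrow\; x = 3 \text{ or } x = 2
\]
```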

Frizzle is also an official White House AI Education Partner, which is a credibility signal that matters in the education procurement world. School administrators are risk-averse buyers. Having that badge on your website helps.

The compliance angle is worth noting. They claim COPPA and FERPA compliance, and say student data is not collected beyond what grading requires. In edtech, this is table stakes, but a surprising number of startups get it wrong.

Analytics are listed as coming soon, along with integrations for Google Classroom and Canvas. Those integrations will matter a lot. Teachers are not going to adopt a standalone tool if it means adding another platform to their workflow. The product needs to live inside the LMS they already use.

The Verdict

Frizzle is targeting the right problem. Grading is painful, repetitive, and directly competes with the parts of teaching that actually require a human. Every hour a teacher spends grading is an hour they are not spending with students. If Frizzle can reliably read handwritten work and provide accurate, useful feedback, the value proposition is obvious.

The risk is accuracy. Teachers will not tolerate a system that misreads a student’s handwriting and marks a correct answer wrong. That is worse than no automation at all. Handwritten mathematical notation is messy, inconsistent, and context-dependent. A “2” that looks like a “z” or a minus sign that looks like a scratch mark will sink the product if the error rate is too high.

In thirty days, I want to know the accuracy rate on handwritten recognition across different grade levels and handwriting quality. Sixty days, the question is whether Google Classroom integration is live and whether it changes adoption. Ninety days, I want to see retention. Are teachers using it for every assignment or just when they are behind on grading? If it becomes a daily habit, Frizzle has a real business. If it stays an emergency tool for busy weeks, the ceiling is lower. The founding team has the technical depth to make this work. The question is whether the product is good enough right now to earn trust from a profession that has been burned by edtech promises before.