The Macro: Teachers Are Drowning in Paper
The average high school English teacher grades somewhere between 100 and 150 essays per assignment cycle. Each one takes 10 to 15 minutes to read, annotate, and score thoughtfully. Take the midpoint, 125 essays at 12 minutes each, and that's 25 hours of grading for a single assignment. Multiply that across a semester and you're looking at a teacher who spends more time evaluating writing than actually teaching it.
This isn’t a new problem. It’s a permanent one. Teacher burnout statistics are grim, and grading load is consistently cited as one of the top contributors. The U.S. alone has been dealing with a teacher shortage that the pandemic accelerated but didn’t create. Australia, the UK, and Canada are seeing similar patterns. Teachers aren’t leaving because they hate kids. They’re leaving because the administrative overhead has eaten the job alive.
Automated essay scoring isn’t new either. ETS has been using e-rater for standardized tests since the late 1990s. Turnitin added AI scoring features. Gradescope (owned by Turnitin) handles STEM-heavy assessment well. But these tools were built for standardized contexts, not for a tenth-grade English teacher who wants feedback that actually sounds like it came from someone who read the essay.
The gap is between “machine-scored standardized test” and “useful classroom tool.” Most existing solutions land in the first category. They can tell you if an essay hits a rubric threshold. They can’t tell a student that their third paragraph loses the thread or that their conclusion contradicts their thesis.
The Micro: Teenagers Who Built a Business Before Graduation
Edexia is an AI grading platform that learns individual teacher styles and grades essays across any curriculum, year level, subject, or format. It provides detailed, rubric-aligned feedback, catches plagiarism, and generates cross-submission reports that show student progress over time.
Daniel Gibbon (CEO) and Nathan Wang (CTO) are both second-time founders. Their first EdTech company hit $200K ARR when Daniel was 18 and Nathan was 17. Both maintained perfect GPAs while building it. Daniel is a TEDx speaker on AI in education and ranked in the top 30 statewide upon graduation. Nathan ranked in the top 200.
I’ll be honest: when I see teenage founders, my default setting is skepticism. But $200K in actual revenue at that age is not a school project. These two have been building and selling software for years. They came through Y Combinator’s Winter 2025 batch and are based in San Francisco, though the company started in Brisbane, Australia. The team is four people. Their YC partner is Jared Friedman, which is a strong signal in and of itself.
The accuracy numbers are worth paying attention to. Edexia claims 81.2% exact match with teacher grades and 98.3% within one grade band. Those numbers were calibrated by a team of ten VCAA (Victorian Curriculum and Assessment Authority) assessors, which adds some institutional credibility. If those numbers hold in broader deployment, this is a genuinely useful tool, not a party trick.
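For a sense of what those two figures measure, here is a minimal sketch of how exact-match and within-one-band agreement are typically computed against human graders. The grade scale, function name, and sample data are illustrative assumptions, not Edexia's evaluation code.

```python
# Illustrative sketch: agreement metrics between AI-assigned and teacher-assigned grades.
# The grade scale and data below are hypothetical; Edexia's evaluation pipeline is not public.

GRADE_SCALE = ["E", "D", "C", "B", "A"]  # assumed ordered grade bands, lowest to highest

def agreement_rates(ai_grades, teacher_grades):
    """Return (exact_match_rate, within_one_band_rate) for paired grade lists."""
    assert len(ai_grades) == len(teacher_grades) and ai_grades, "need paired, non-empty lists"
    index = {g: i for i, g in enumerate(GRADE_SCALE)}
    exact = adjacent = 0
    for ai, teacher in zip(ai_grades, teacher_grades):
        gap = abs(index[ai] - index[teacher])
        exact += gap == 0
        adjacent += gap <= 1  # "within one grade band" includes exact matches
    n = len(ai_grades)
    return exact / n, adjacent / n

# Made-up example: 4 of 5 exact matches, all 5 within one band.
ai = ["A", "B", "C", "B", "D"]
teacher = ["A", "B", "B", "B", "D"]
print(agreement_rates(ai, teacher))  # -> (0.8, 1.0)
```

The within-one-band number matters because a one-band disagreement is roughly the level of variance you'd expect between two human graders; the exact-match number is the harder bar.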
Features that stand out: blind grading mode (removes student names to reduce bias), handwriting transcription (scans handwritten essays), and AI detection with writing replay (shows keystroke-level history to verify original authorship). The writing replay feature is particularly clever. Instead of just flagging “this might be AI-generated,” it shows you how the student actually wrote it.
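To make the writing-replay idea concrete, here is a hypothetical sketch of the underlying mechanism: log each edit as a timestamped event, then reconstruct the essay at any point in time. The event fields and names are my assumptions, not Edexia's implementation.

```python
# Hypothetical illustration of keystroke-level writing replay: record edit events
# with timestamps, then rebuild the document state at any moment.

from dataclasses import dataclass

@dataclass
class EditEvent:
    timestamp: float   # seconds since the student started writing
    position: int      # character offset where the edit happened
    inserted: str      # text typed at that position ("" for pure deletions)
    deleted: int       # number of characters removed at that position

def replay(events, until: float) -> str:
    """Rebuild the essay as it looked at time `until` by applying events in order."""
    text = ""
    for e in sorted(events, key=lambda e: e.timestamp):
        if e.timestamp > until:
            break
        text = text[:e.position] + e.inserted + text[e.position + e.deleted :]
    return text

events = [
    EditEvent(1.0, 0, "The thesis is claer.", 0),   # initial sentence, with a typo
    EditEvent(8.0, 16, "ea", 2),                     # fixes the typo: "claer" -> "clear"
    EditEvent(15.0, 4, "essay's ", 0),               # inserts a word after a pause
]
print(replay(events, until=10.0))  # -> "The thesis is clear."
print(replay(events, until=30.0))  # -> "The essay's thesis is clear."
```

Against a record like this, a single event that pastes in several paragraphs at once looks very different from hundreds of small, incremental edits, which is what makes replay more persuasive than a probabilistic "this might be AI" flag.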
The go-to-market is school partnerships. They had 11 Australian schools signed by Q4 2024 and were arranging 30 more for Q1 2025. Schools like St Bernard's College and Kolbe Catholic College are listed as customers. The expansion path is obvious: prove it in Australia, then take it to the US and UK markets, where the grading burden is just as heavy.
The Verdict
Edexia is solving a real problem with a product that appears to actually work. The accuracy numbers are strong, the feature set is thoughtful, and the founders have more real-world business experience than most people twice their age.
The competitive risk is real, though. Turnitin has enormous distribution in schools and is actively building AI features. If Turnitin ships a good-enough grading tool, Edexia has to compete against an installed base that every teacher already knows. The counter-argument is that Turnitin’s grading features have historically been mediocre, and institutional software tends to stay mediocre because there’s no competitive pressure to improve.
In 30 days, I want to know whether teachers are actually trusting the grades or using Edexia as a first pass and then re-grading everything manually. That's the difference between a time-saver and a gimmick. In 60 days, how does it handle subjects beyond English? The company claims any curriculum, but essay grading in history or social studies has different rubric patterns. In 90 days, the expansion beyond Australia is the real test. The training against the VCE (Victorian Certificate of Education) is an advantage in Victoria but a limitation everywhere else.