The Macro: AI Runs on Human Labor, and Nobody Wants to Talk About It
There is a dirty secret at the center of the AI industry. Every large language model, every computer vision system, every autonomous driving stack depends on massive amounts of human-labeled data. Someone had to look at millions of images and draw bounding boxes around pedestrians. Someone had to read thousands of text passages and rank which AI response was better. Someone had to label medical scans, transcribe audio, classify sentiment, verify facts. The AI does not train itself. Humans train it. Millions of them.
The data annotation market is projected to hit $15 billion by 2028. Scale AI is the dominant player, valued at over $13 billion, handling annotation work for the US military, autonomous vehicle companies, and major AI labs. Labelbox, Appen, Surge AI, and Toloka fill out the landscape. Amazon’s Mechanical Turk still exists but has become a punchline for quality. The market is real, growing fast, and surprisingly dysfunctional.
The dysfunction comes from the labor side. Finding qualified annotators is hard. Vetting them is harder. Managing them at scale is a logistics nightmare. If you need 200 people who can accurately label medical images, you cannot just post on a job board and hope for the best. You need domain experts with verifiable credentials who will show up consistently and maintain quality standards. The recruitment, vetting, and management overhead is enormous, and most AI companies are terrible at it because it is a people operations problem, not a technology problem.
Scale AI solved this by building a massive managed workforce, but their pricing reflects the overhead. Smaller AI companies and research labs cannot afford Scale’s rates and do not need Scale’s volume. They need 20 expert annotators for a six-week project, not a standing army of thousands. The middle of the market, between Mechanical Turk’s chaos and Scale’s enterprise pricing, is underserved.
The Micro: A Second-Time YC Founder Who Knows What Bad Annotation Looks Like
Fixpoint operates as a marketplace that connects AI companies with vetted expert annotators. The platform automates three things that are currently manual and painful: sourcing candidates, vetting their credentials and skills, and managing the ongoing HR logistics. Two products carry the value proposition: a Worker Vetting API that screens applicant backgrounds and catches fraudulent credentials, and a white-glove staffing service that assembles specialized teams of annotators quickly.
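Fixpoint's actual API is not publicly documented, so the following is a purely hypothetical sketch of the kind of signals an automated credential screen like this might combine into an accept/flag decision. Every name and field here is an illustration, not Fixpoint's real interface.

```python
from dataclasses import dataclass

# Hypothetical illustration only: Fixpoint's real Worker Vetting API is not
# public. This sketches the sort of checks an automated credential screen
# might run before a human ever reviews an applicant.

@dataclass
class Applicant:
    name: str
    claimed_degree: str        # e.g. "MD", "JD", "PhD"
    degree_verified: bool      # did the issuing institution confirm it?
    id_document_matches: bool  # does the ID document match the applicant?
    duplicate_accounts: int    # other platform accounts tied to this identity

def vet(applicant: Applicant) -> dict:
    """Return an approve/flag decision plus the reasons for any flag."""
    flags = []
    if not applicant.degree_verified:
        flags.append("unverified_credential")
    if not applicant.id_document_matches:
        flags.append("identity_mismatch")
    if applicant.duplicate_accounts > 0:
        flags.append("duplicate_identity")
    return {"approved": not flags, "flags": flags}
```

The point of automating checks like these is scale: a rules-plus-verification pipeline can screen every applicant, while manual review samples only a fraction.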
Dylan Mikus is the founder, a CMU graduate with a background in computer science and machine learning. This is his second time through Y Combinator: he went through the program once before and came back with a different company. That pattern is worth noting. Second-time YC founders tend to move faster and waste less time on things that do not matter. Jakub Cichon is the co-founder and CTO, focused on automating the sourcing and vetting pipeline. They are a two-person team out of the Fall 2025 batch, with Tom Blomfield as their YC partner.
The fraud detection angle is genuinely important. The data annotation industry has a massive fraud problem. People fake credentials to get onto annotation platforms, submit low-quality work, and game quality metrics. Fixpoint claims to catch 10x as many fraudulent applicants as manual review does. If that number is real, it solves one of the most expensive problems in the industry. Bad annotators do not just waste money. They corrupt training data, which corrupts models, which causes failures downstream that are expensive and sometimes dangerous.
The specialist categories tell you where the money is: legal, medical, STEM, coding, and linguistics. These are domains where annotation requires genuine expertise, not just the ability to draw boxes on images. A legal annotator needs to understand case law. A medical annotator needs clinical knowledge. A coding annotator needs to actually write and evaluate code. These are high-value workers who command real wages, and the companies hiring them need confidence that the credentials are legitimate.
The competitive positioning against Scale AI is smart. Fixpoint is not trying to be Scale. Scale is a vertical integration play: they own the workforce, the tools, and the customer relationship. Fixpoint is a marketplace play: they connect supply and demand and take a cut. That model works when the supply side is fragmented and hard to discover, which perfectly describes expert annotators. Labelbox and Supervisely focus on the annotation tools themselves, not the labor supply. Appen has the labor pool but has struggled with quality and worker satisfaction. Surge AI focuses on quality but at premium pricing.
The GDPR compliance and SOC 2 certification work suggests they are targeting enterprise customers, which is the right call. Startups will use whatever is cheapest. Enterprises will pay a premium for compliance, reliability, and auditability.
The Verdict
I think Fixpoint is attacking the right layer of the AI stack. The tooling for annotation is mostly solved. The labor logistics are not. If you can reliably source and vet expert annotators at speed, you have a product that every AI company needs and that no existing platform delivers well.
At 30 days, I want to see the fill rate. When a customer requests 50 medical annotators, how quickly does Fixpoint deliver and what percentage of the team meets quality standards? That number defines the business. At 60 days, the question is repeat usage. Do customers come back for their next project or go back to recruiting annotators themselves? At 90 days, I would want to see whether the vetting API is getting adopted as a standalone product. If other annotation platforms integrate Fixpoint’s vetting, that is a platform play that is much bigger than the staffing marketplace alone.
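To make the 30-day fill-rate metric concrete, here is a minimal worked example. The formula and all the numbers are illustrative assumptions, not Fixpoint data: fill rate is read here as the share of a requested team that is both delivered and passes quality review.

```python
# Illustrative only: one reasonable way to define the fill-rate metric
# discussed above. None of these numbers come from Fixpoint.

def fill_rate(requested: int, delivered: int, passed_quality: int) -> float:
    """Fraction of the requested team that was both staffed and
    approved by quality review."""
    if requested <= 0:
        raise ValueError("requested must be positive")
    return min(delivered, passed_quality) / requested

# A customer requests 50 medical annotators; 48 are staffed in time,
# and 44 of those meet the quality bar.
rate = fill_rate(requested=50, delivered=48, passed_quality=44)
# rate == 44 / 50 == 0.88
```

Under this definition, speed and quality are folded into one number, which is why a single fill-rate figure can define the business: a platform that staffs fast but fails review, or reviews well but staffs slowly, scores low either way.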
The data annotation market is going to grow for at least the next decade. Models are getting bigger, not smaller. Training data requirements are increasing, not decreasing. RLHF and its successors will keep demanding human judgment. Anyone who can solve the labor supply problem at this layer of the stack is building on solid ground.