The Macro: Support Teams Are Drowning in QA They Cannot Do
Every customer support operation has the same dirty secret: everyone knows the knowledge base is wrong, and nobody can fix it fast enough. A product ships a new feature on Tuesday. The help articles do not get updated until the following week. In the meantime, support agents are giving inconsistent answers because some of them read the release notes and some of them did not. Customers get frustrated. CSAT scores dip. The VP of Support sends a Slack message asking why.
The quality assurance problem in customer support is structural. Most teams do QA by randomly sampling conversations and having a manager review them. If you have 10,000 conversations a month and your QA team reviews 200, you are looking at 2% coverage. The other 98% of interactions are unmonitored. Bad answers, wrong information, tone violations, missed escalations. They all happen in the dark.
The tools that exist are not solving this well. Klaus (acquired by Zendesk) does conversation review but requires human reviewers to score interactions. MaestroQA automates some scoring but still needs significant manual calibration. Observe.AI handles voice-specific QA with call transcription and scoring. Assembled is more workforce management than quality assurance. And the big helpdesk platforms like Zendesk, Intercom, and Freshdesk have built-in QA features that are universally described as “fine” by the support leaders I talk to, which is a polite way of saying insufficient.
The gap in the market is not monitoring. It is the feedback loop. Existing tools can tell you that Agent Sarah gave a wrong answer on ticket #48291. They cannot automatically update the knowledge base article that Agent Sarah was referencing so that Agent Mike does not give the same wrong answer tomorrow. The detection and the correction are separate workflows, usually handled by different teams, with days or weeks of lag between them.
The Micro: Coinbase AI Meets Amazon Science
Redapto was founded by Anirudh Pupneja, who previously built the generative AI platform at Coinbase, and his co-founder Cheril, who worked on fine-tuning and model compression at Adobe and Amazon Science. They are a two-person team in San Francisco, backed by Y Combinator as part of its Fall 2025 batch, with Garry Tan as their primary partner.
The product monitors customer support interactions across chat, voice, and email. It detects quality issues in real time and then does the thing that nobody else does automatically: it updates the training materials and knowledge bases that caused the problem in the first place.
Think about what that means in practice. A customer asks about a newly changed refund policy. The agent gives the old answer because the knowledge base has not been updated. Redapto flags the interaction, identifies that the knowledge base article is outdated, and either updates it directly or queues the update for human review. The next agent who handles the same question gets the correct information.
That feedback loop is the product. Detection without correction is just an expensive alert system. Correction without detection requires someone to manually identify every knowledge gap. Redapto closes the loop automatically.
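Redapto has not published how any of this works under the hood, so treat the following as a hypothetical sketch of the loop, with every name in it (QAFlag, find_source_article, propose_update) invented for illustration. The shape is what matters: detection produces a flag, the flag is traced back to the article that caused it, and the output is a concrete, reviewable change rather than an alert.

```python
from dataclasses import dataclass

# Hypothetical sketch: none of these names come from Redapto.

@dataclass
class QAFlag:
    """A quality issue detected on a single interaction."""
    ticket_id: str
    topic: str
    agent_answer: str
    correct_answer: str  # e.g., grounded in release notes or policy docs

@dataclass
class ArticleUpdate:
    """A proposed correction to the article that caused the bad answer."""
    article_id: str
    proposed_text: str
    evidence_ticket: str

def find_source_article(flag: QAFlag) -> str:
    """Trace the flag back to the KB article the agent was referencing.

    A real system would do retrieval over the knowledge base;
    here it is stubbed out by topic.
    """
    return f"kb-{flag.topic}"

def propose_update(flag: QAFlag) -> ArticleUpdate:
    """The step most QA tools stop short of: detection becomes correction."""
    return ArticleUpdate(
        article_id=find_source_article(flag),
        proposed_text=flag.correct_answer,
        evidence_ticket=flag.ticket_id,
    )

# The refund-policy scenario from above: a flagged interaction...
flag = QAFlag(
    ticket_id="48291",
    topic="refund-policy",
    agent_answer="Refunds are available for 14 days.",
    correct_answer="Refunds are available for 30 days under the updated policy.",
)
# ...becomes a concrete, reviewable change to the offending article.
print(propose_update(flag))
```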
The Coinbase background is relevant here. Crypto exchanges have some of the most complex and rapidly changing support environments in tech. Token listings, regulatory changes, fee structure updates, security incident responses. The knowledge base at a major exchange is a living document that changes daily. If Anirudh built the AI platform that handled that complexity at Coinbase, he understands the problem from the inside.
The Amazon Science connection matters for a different reason. Model compression and fine-tuning expertise means the team understands how to build AI systems that run efficiently at scale. Support QA systems need to process thousands of conversations in real time without adding latency to the agent experience. That is an engineering challenge as much as an AI challenge.
The Verdict
Redapto is attacking the right problem in customer support. Detection is solved. Correction is not. The company that closes that loop automatically has a product that every support team with more than 20 agents will want.
The risk is trust. Knowledge base articles are critical documents. If Redapto auto-updates an article incorrectly, it could cause more problems than it solves. The system needs to be right often enough that support leaders trust it to make changes, but it also needs a human review workflow for edge cases. Getting that balance right is the product challenge.
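One plausible shape for that balance, and this is an assumption on my part since Redapto has not described its approval logic, is a confidence gate: corrections above a threshold go live automatically, and everything else lands in a human review queue.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical approval gate; the threshold value is illustrative only.
AUTO_APPLY_THRESHOLD = 0.95

@dataclass
class ProposedUpdate:
    article_id: str
    proposed_text: str
    confidence: float  # system's confidence that the correction is right

def route_update(update: ProposedUpdate) -> Literal["auto_apply", "human_review"]:
    """Push high-confidence corrections live; queue the rest for a human.

    Set the bar too low and the system corrupts critical articles;
    set it too high and it degenerates into a suggestion engine.
    """
    if update.confidence >= AUTO_APPLY_THRESHOLD:
        return "auto_apply"
    return "human_review"

print(route_update(ProposedUpdate("kb-refund-policy", "30 days", 0.97)))  # auto_apply
print(route_update(ProposedUpdate("kb-refund-policy", "30 days", 0.80)))  # human_review
```

Where a team sets that threshold is effectively where it sets the trust dial.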
In thirty days, I want to see how they handle the approval workflow. Are knowledge base updates pushed automatically or queued for review? The answer tells you whether the product is truly autonomous or just a suggestion engine with a nice pitch. In sixty days, the metric is error propagation. When Redapto updates an article, does the wrong-answer rate for that topic actually go down? That is the proof that the feedback loop works. In ninety days, I want to see multi-channel coverage. Chat is the easiest channel to analyze because it is already text. Voice requires transcription. Email has different patterns. If Redapto handles all three well, the total addressable market expands significantly.
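To make the sixty-day test concrete, here is one way to operationalize it, again my framing rather than a metric Redapto has published: compare the wrong-answer rate for a topic in equal windows before and after the article update.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Interaction:
    topic: str
    timestamp: datetime
    answer_was_wrong: bool  # as judged by the QA layer

def wrong_answer_rate(interactions: list[Interaction], topic: str,
                      start: datetime, end: datetime) -> float:
    """Fraction of a topic's interactions in [start, end) that were wrong."""
    window = [i for i in interactions
              if i.topic == topic and start <= i.timestamp < end]
    if not window:
        return 0.0
    return sum(i.answer_was_wrong for i in window) / len(window)

def update_helped(interactions: list[Interaction], topic: str,
                  update_time: datetime, window_days: int = 14) -> bool:
    """The loop only works if the rate actually drops after the update."""
    span = timedelta(days=window_days)
    before = wrong_answer_rate(interactions, topic, update_time - span, update_time)
    after = wrong_answer_rate(interactions, topic, update_time, update_time + span)
    return after < before
```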
The founding team has the exact right background for this problem. Coinbase-scale support complexity and Amazon-grade AI engineering. If the product delivers on the automatic correction promise, this is one of the more useful applications of AI in enterprise software. Support teams do not need another dashboard showing them what went wrong. They need a system that fixes it.