The Macro: The Documentation Problem That Never Dies
Every engineering team I’ve ever talked to has the same complaint: the docs are out of date. The wiki is a graveyard. The onboarding guide references a codebase that was refactored six months ago. Tribal knowledge lives in the heads of three senior engineers, and when one of them leaves, a chunk of institutional memory walks out the door.
This problem has spawned an entire category of tools. Notion, Confluence, GitBook, Slite, Almanac, Tettra, Guru. All of them require someone to actually write and maintain the documentation. That’s the fundamental issue. The tools are fine. The problem is that documentation is a chore that engineers hate, and no amount of WYSIWYG polish changes that.
The AI angle here is obvious: what if the documentation wrote itself? What if a system could watch your team’s actual work (the Slack conversations, the pull requests, the Jira tickets, the Notion pages) and synthesize that into a living knowledge base that stays current without anyone lifting a finger?
Confluence has started adding AI features. Notion has Notion AI. Guru has been doing AI-assisted knowledge management for a while. But these are all bolt-ons to existing products. The question is whether a purpose-built AI knowledge platform can do this significantly better than incumbents retrofitting AI onto theirs.
The Micro: Two Aroras Building the Self-Writing Wiki
Peppr AI is built by Sachitt Arora and Nitya Arora. They’re a four-person team out of YC’s Winter 2025 batch, working with Harj Taggar. The product connects to your engineering workflows (Slack, Notion, Jira) and autonomously captures insights, generating internal documentation and providing real-time answers to questions about how your company actually works.
The “self-improving” part of the pitch is the key differentiator. This isn’t a search tool that indexes your existing docs. It’s a system that creates new documentation by observing your team’s behavior. When someone explains a deployment process in a Slack thread, Peppr is supposed to capture that and turn it into a proper runbook. When a Jira ticket reveals a decision about architecture, that gets logged as an architectural decision record.
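To make the capture idea concrete, here is a minimal sketch of the general pattern: scan chat messages for procedural or decision-making language and flag candidates for documentation. This is a hypothetical illustration of the technique, not Peppr's actual pipeline — the keyword heuristics, the `DocCandidate` type, and the `extract_candidates` function are all invented for this example; a real system would presumably use an LLM classifier rather than regexes.

```python
# Hypothetical sketch of the capture step: scan chat messages for
# procedural knowledge (runbook material) or architectural decisions,
# and flag them as documentation candidates. Illustrative only.
import re
from dataclasses import dataclass

@dataclass
class DocCandidate:
    source: str   # e.g. a Slack thread permalink
    kind: str     # "runbook" or "decision"
    excerpt: str  # snippet to seed the generated doc

# Naive signal words; a production system would use a learned classifier.
RUNBOOK_HINTS = re.compile(r"\b(deploy|rollback|restart|migrate|run)\b", re.I)
DECISION_HINTS = re.compile(r"\b(we decided|going with|chose|tradeoff)\b", re.I)

def extract_candidates(messages: list[tuple[str, str]]) -> list[DocCandidate]:
    """messages: (permalink, text) pairs from a chat export."""
    out: list[DocCandidate] = []
    for link, text in messages:
        if RUNBOOK_HINTS.search(text):
            out.append(DocCandidate(link, "runbook", text[:120]))
        elif DECISION_HINTS.search(text):
            out.append(DocCandidate(link, "decision", text[:120]))
    return out
```

The interesting engineering is everything this sketch elides: deciding when a thread contains *complete* knowledge rather than a fragment, deduplicating against docs that already exist, and keeping the generated page updated when the process changes in a later thread.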
They’re hiring aggressively for a four-person startup: backend engineer, AI developer, and frontend/full-stack engineer, all at $100K-$150K with 0.25%-1.00% equity. That suggests they have funding and urgency, which makes sense for a YC W25 company trying to build product fast.
The real-time intelligence positioning is interesting. Their website leads with “Real-Time Sales Intelligence” as a tagline, which suggests they might be pivoting or expanding beyond pure engineering documentation into sales enablement. That’s a different market with different buyers and different competitors (Gong, Chorus, Clari). If they’re trying to do both, that’s a red flag at this stage. If they’ve deliberately chosen sales intelligence over engineering docs, that’s a pivot worth understanding.
The Verdict
I want to like Peppr AI more than I currently do. The problem is real and painful. Any engineer who’s spent 45 minutes hunting through Slack threads to find out why a particular API endpoint was built the way it was understands the value proposition immediately.
But knowledge management is where startups go to die. The category has a brutal track record. Guru raised $65 million and still struggles with adoption. Tettra pivoted multiple times. Notion won the category mostly by being a better general-purpose tool, not by solving the documentation problem specifically. The graveyard is full of companies that had the right thesis and couldn’t crack distribution or retention.
The self-improving angle is the right bet. If Peppr can genuinely create useful documentation without anyone doing extra work, that solves the core adoption problem that killed its predecessors. But “autonomous documentation generation” is a high bar. If the AI-generated docs are wrong or shallow, people stop trusting them fast, and once trust is lost in a knowledge base, it’s almost impossible to rebuild.
In 30 days, I want to see examples of documentation Peppr created autonomously that a team actually used. Not demo content. Real docs that answered real questions. In 60 days, the question is accuracy: what percentage of AI-generated content needs human correction? If it’s above 20%, the product creates more work than it saves. In 90 days, I want to know if teams are actually retiring their old wikis in favor of Peppr, or running both in parallel. Parallel is death for knowledge tools.