The Macro: Nobody Wants to Write Docs, But Everyone Wants Docs
Here is something that every engineering team knows but nobody wants to say out loud: documentation is almost always out of date. The code shipped three weeks ago. The API changed. The docs still reference the old endpoint. Nobody updated them because nobody wants to update them, because updating docs is the professional equivalent of doing dishes after a dinner party. Necessary. Thankless. Avoided.
The standard tools in this space treat documentation like a formatting problem. Mintlify, GitBook, ReadMe, Docusaurus. They give you a nice place to put your docs. They make them look good. They do not solve the fundamental issue, which is that someone still has to sit down and write the thing. And then keep writing it every time the product changes.
This is the gap Quantstruct is going after. Not “where do docs live” but “who writes them in the first place.” It is a meaningful distinction. The documentation tooling market has been focused on presentation for years. The creation side has been left to engineers who would rather be doing literally anything else.
The timing makes sense. LLMs are now good enough to read code diffs, understand what changed, and produce coherent technical prose about it. A year ago this would have been shaky. Today it is plausible. The question is whether Quantstruct can do it reliably enough that teams actually trust the output.
The Micro: Two Moveworks Alumni Who Understand the Plumbing
Quantstruct was founded by Sarthak Srinivas and Newman Hu. Both come from Moveworks, the enterprise AI company that builds agent platforms for IT support. Sarthak was a product lead there; Newman built search infrastructure. Newman also did time at Replit. They are part of Y Combinator's Winter 2025 batch: a two-person team based in San Francisco.
The backgrounds are relevant. Moveworks is all about taking unstructured information, understanding intent, and automating actions. That is essentially what automated documentation requires: watch what changed, understand the intent behind it, and produce the right output. These two have been working on adjacent problems for years.
The product works like this. Quantstruct connects to your development tools. GitHub, Slack, Jira, Linear, Zendesk. It watches for product changes across your codebase and communications. When something changes, it drafts documentation updates and surfaces them for review. You approve, edit, or reject. It publishes to whatever Git-based docs platform you already use.
The integration list matters. Documentation doesn’t just come from code commits. It comes from Slack threads where an engineer explains a workaround. It comes from Jira tickets that describe edge cases. It comes from support conversations in Zendesk. Quantstruct pulls from all of these, which is how it assembles context that a human writer would otherwise have to piece together manually.
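The flow those two paragraphs describe, collect change signals from every connected source, assemble them into one context, draft an update, and gate publishing on human review, can be sketched in a few lines. To be clear, this is purely illustrative: the names, data shapes, and steps below are my assumptions about how such a pipeline could be structured, not Quantstruct's actual API or implementation.

```python
from dataclasses import dataclass

# Hypothetical change signal from one connected source (GitHub, Slack, Jira...).
@dataclass
class ChangeEvent:
    source: str   # e.g. "github", "slack", "zendesk"
    summary: str  # what changed, in plain language

@dataclass
class DocDraft:
    text: str
    status: str = "pending"  # pending -> approved / rejected

def assemble_context(events: list[ChangeEvent]) -> str:
    """Merge signals from all sources into one context block,
    the piecing-together a human writer would otherwise do manually."""
    return "\n".join(f"[{e.source}] {e.summary}" for e in events)

def draft_update(context: str) -> DocDraft:
    """Stand-in for the model call that turns context into a doc draft."""
    return DocDraft(text=f"Proposed docs update based on:\n{context}")

def review(draft: DocDraft, approve: bool) -> DocDraft:
    """Human-in-the-loop gate: nothing publishes without explicit approval."""
    draft.status = "approved" if approve else "rejected"
    return draft

events = [
    ChangeEvent("github", "renamed /v1/users endpoint to /v2/users"),
    ChangeEvent("slack", "engineer notes the old endpoint redirects until June"),
]
draft = review(draft_update(assemble_context(events)), approve=True)
print(draft.status)  # approved
```

The point of the sketch is the shape, not the code: signals fan in from multiple tools, a single draft fans out, and a human approval step sits between generation and publication. That last gate is what the trust question in the verdict below hinges on.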
They have a case study with Vapi, the voice AI company, which suggests the product is past the prototype stage. The pitch to customers is a conversation with the founders, not a pricing page, which tells you they are still in the hand-hold-early-customers phase. That is normal for a team this size and this early.
The Verdict
I think Quantstruct is solving a real problem in a way that previous documentation tools haven’t attempted. The creation bottleneck is the actual pain point. Making docs look pretty was never the hard part.
The risk is trust. Technical documentation has to be accurate. An AI that writes docs that are 90% right is worse than no docs at all, because now you have docs that people think are correct but aren’t. The validation pipeline Quantstruct describes (multiple review steps before publishing) suggests they understand this, but the proof will be in how teams feel about the output after three months of use.
Competitors in the docs-as-code space like Mintlify and Fern are primarily formatting and hosting plays. ReadMe has more automation but is focused on API reference docs specifically. Nobody else is trying to auto-generate docs from development activity the way Quantstruct is. That is a real opening.
Thirty days out, I want to see how many teams are using the product without constant founder intervention. Sixty days, the question is accuracy rates. How often do teams accept the generated docs without edits? Ninety days, I want to know whether this reduces documentation lag or just shifts the bottleneck from writing to reviewing. The product idea clicks. The execution challenge is making AI output trustworthy enough for technical audiences who are, by nature, skeptical of anything they did not write themselves.