The Macro: Session Replay Is Powerful but Nobody Watches the Tapes
Every product team knows they should be watching session replays. The data is right there. Real users, doing real things, hitting real problems. Hotjar, FullStory, LogRocket, and Smartlook have made session recording standard. The problem is that nobody has time to actually watch them.
I have been on product teams that recorded thousands of sessions per day. The recordings sat in a dashboard. Occasionally someone would pull up a replay when investigating a specific bug report. But the proactive use case, watching replays to discover bugs and UX issues that users have not reported, basically never happened. It is too time-consuming. A single session can take 20 minutes to watch. Even at 2x speed, reviewing 50 sessions takes an entire day. No product manager or engineer has that kind of bandwidth.
The result is that companies collect massive amounts of behavioral data and do almost nothing with it. The bugs that get fixed are the ones users complain about loudly enough. The UX issues that get addressed are the ones that show up in metrics like conversion rate or churn. Everything else, the frustration clicks, the confused navigation paths, the subtle visual bugs that make the product feel broken, goes unnoticed until it accumulates into a vague sense that the product “does not feel right.”
A few companies have tried to solve this with analytics. Amplitude and Mixpanel provide behavioral analytics that can surface patterns. PostHog offers open-source session replay and analytics. Heap auto-captures events. But these tools show you what happened, not what went wrong. You still need a human to interpret the data and identify the problems.
Lucent, backed by Y Combinator (W25), is taking a different approach. Instead of showing you data and hoping you find the problems, its AI watches the sessions, identifies bugs and UX issues automatically, and tells you exactly what needs fixing.
The Micro: AI Bug Detection From Session Replays
Alisa Rae founded Lucent. The product sits at the intersection of session replay (like Hotjar or FullStory), automated testing (like Selenium or Cypress), and product analytics (like Amplitude). But instead of being a tool in any of those categories, it creates a new one: autonomous product improvement.
The core idea is that AI can watch user sessions the way a very attentive QA engineer would, but at the scale of every session rather than a random sample. The AI looks for patterns that indicate something is wrong. A user clicking the same button multiple times with no response. A user navigating in circles. A user abandoning a flow at an unusual step. A visual element that renders differently than expected. These are all signals that a bug or UX problem exists, and they are signals that current monitoring tools do not flag.
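These signals are concrete enough to sketch in code. Below is a minimal, hypothetical detector for two of them, rage clicks (repeated clicks on the same target in a short window) and circular navigation, over a simplified session event stream. The event schema, thresholds, and signal names are my own assumptions for illustration, not Lucent's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float        # timestamp in seconds from session start
    kind: str       # "click" or "navigate"
    target: str     # CSS selector clicked, or page path navigated to

# Hypothetical thresholds -- a real tool would tune these empirically.
RAGE_CLICKS = 3     # clicks on the same target...
RAGE_WINDOW = 2.0   # ...within this many seconds
LOOP_LENGTH = 3     # revisiting a page seen within the last N navigations

def detect_issues(events: list[Event]) -> list[str]:
    """Flag rage clicks and circular navigation in one session."""
    issues: list[str] = []
    clicks: dict[str, list[float]] = {}
    path: list[str] = []
    for e in events:
        if e.kind == "click":
            times = clicks.setdefault(e.target, [])
            times.append(e.t)
            # Keep only clicks inside the sliding time window.
            times[:] = [t for t in times if e.t - t <= RAGE_WINDOW]
            if len(times) >= RAGE_CLICKS:
                issues.append(f"rage-click on {e.target}")
                times.clear()
        elif e.kind == "navigate":
            if e.target in path[-LOOP_LENGTH:]:
                issues.append(f"navigation loop via {e.target}")
            path.append(e.target)
    return issues
```

A session of three rapid clicks on the same button, for example, would yield `["rage-click on #save"]`. The point of the sketch is that the individual signals are cheap heuristics; the hard part, which is where the AI claim lives, is deciding which flagged moments actually indicate a product defect.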
The “AI product manager” positioning goes beyond bug detection into proactive improvement suggestions. If the system notices that 40% of users struggle with a particular form field, it does not just report the problem. It suggests the fix. That is a meaningful step up from traditional analytics, which tells you the what but leaves the why and the how to you.
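The jump from a per-session signal to a claim like "40% of users struggle with this field" is an aggregation step. A toy version of that roll-up might look like the following; the signal strings and threshold are illustrative assumptions, not anything Lucent has published.

```python
from collections import Counter

def systemic_issues(session_signals: list[list[str]],
                    min_rate: float = 0.25) -> dict[str, float]:
    """Given per-session lists of issue signals (e.g. 'rage-click on #email'),
    return the signals that recur in at least min_rate of all sessions,
    mapped to the fraction of sessions they appear in."""
    n = len(session_signals)
    counts: Counter[str] = Counter()
    for signals in session_signals:
        counts.update(set(signals))  # count each signal once per session
    return {sig: c / n for sig, c in counts.items() if c / n >= min_rate}
```

If the same rage-click signal shows up in 3 of 5 sessions, it surfaces with a 0.6 struggle rate; a one-off glitch in a single session stays below the threshold and never reaches the report. That filtering is what separates "systemic pattern" from noise.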
The competitive question is how this compares to FullStory’s built-in analytics, which already identifies “frustration signals” like rage clicks and dead clicks. The answer, I think, is scope. FullStory flags individual interaction-level signals. Lucent appears to analyze entire session flows and identify systemic patterns, which is a harder problem but a more valuable one.
The product requires JavaScript to run (the site displays a “you need to enable JavaScript” message without it), which tells me this is a heavy client-side application, likely built with React or a similar SPA framework. For a tool that processes session replay data, that makes sense.
The Verdict
The vision is compelling. Turning the massive amount of session replay data that companies already collect into actionable product improvements without requiring a human to watch the tapes is genuinely useful.
At 30 days: how accurate is the bug and UX issue detection? If it flags 100 issues and 60 are false positives, it becomes noise that teams ignore. The signal quality has to be high enough to earn attention.
At 60 days: are product teams actually implementing the suggested fixes? Detection without action is just a better dashboard. If teams are shipping changes based on Lucent’s recommendations and seeing improved metrics, that completes the loop.
At 90 days: how does Lucent handle the privacy implications of AI watching user sessions? GDPR, CCPA, and other privacy regulations apply to session replay data. Any AI processing of that data needs to be compliant, and the compliance story needs to be clear enough for enterprise buyers.
I think this fills a genuine gap in the product development workflow. The data exists. The analysis capacity does not. Lucent is betting that AI can bridge that gap, and I think they are right. The execution will determine whether this becomes a new category or a feature that gets absorbed into existing analytics platforms.