The Macro: Insider Threats Cost More Than Hackers and Nobody Wants to Talk About It
Cybersecurity spending has been almost entirely focused on keeping outsiders out. Firewalls, endpoint detection, zero trust architecture, SOC teams staring at dashboards. Billions of dollars flow into stopping the person who doesn’t work at your company from getting in.
Meanwhile, the people who already work at your company walk out with client lists, trade secrets, and proprietary code on a regular basis. The Ponemon Institute estimates that insider threat incidents cost organizations an average of $15.4 million annually, and that figure has risen with every edition of the report. The most expensive breaches in recent memory, from Edward Snowden to the SolarWinds compromise (where attackers, once inside, moved using legitimate credentials) to countless cases of IP theft at tech companies, involved accounts with legitimate access.
The tooling for detecting insider threats has traditionally been crude. DLP (data loss prevention) systems that flag when someone tries to email a file with “confidential” in the name. SIEM rules that trigger when someone accesses a system they don’t normally use. User behavior analytics platforms that build baseline activity profiles and alert on deviations.
These tools share a common limitation. They watch actions, not intent. They can tell you that someone downloaded 500 files at 3 AM. They can’t tell you that someone has been gradually shifting their language in Slack messages over the past six weeks in ways that correlate with employees who are about to leave and take client relationships with them.
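To make the "actions, not intent" point concrete, here is a minimal sketch of what a legacy rules-based detector amounts to. The function name, thresholds, and event format are illustrative assumptions, not any vendor's actual logic:

```python
from datetime import datetime

# Illustrative sketch of a legacy rules-based detector: it sees actions
# (file counts, timestamps) but has no notion of intent. All names and
# thresholds here are hypothetical.

def flag_bulk_download(events, max_files=100, quiet_hours=(0, 6)):
    """Flag users who download an unusual number of files during quiet hours."""
    counts = {}
    for user, timestamp, action in events:
        hour = datetime.fromisoformat(timestamp).hour
        if action == "download" and quiet_hours[0] <= hour < quiet_hours[1]:
            counts[user] = counts.get(user, 0) + 1
    return {user for user, count in counts.items() if count > max_files}

events = [("alice", "2025-01-10T03:12:00", "download")] * 500
print(flag_bulk_download(events))  # {'alice'}
```

The rule fires on the 3 AM bulk download. What it cannot do is notice that Alice's messages have been drifting toward resignation language for six weeks, because it has no model of language at all.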
LLMs can do that second thing. At least in theory. And that’s the bet Haleum is making.
The Micro: Stanford Trio Builds the Surveillance Tool Nobody Asked For
Haleum came out of Y Combinator’s W25 batch with three co-founders, all connected to Stanford.
Adarsh Ambati is the CEO. His focus is on building insider threat detection software specifically designed for an era when AI makes both the threats and the detection capabilities more sophisticated. Ansh Gupta is the CTO. Aditya Iyengar rounds out the founding team. He’s Stanford class of 2025, with prior engineering experience at Flutterflow (itself a YC W21 company), Uber, and NASA.
That’s a strong technical bench for a three-person team. The Flutterflow and Uber experience means they’ve built production systems at scale. The NASA background suggests comfort with high-reliability environments where mistakes are expensive.
The product monitors communications channels using LLMs to detect financial fraud, IP theft, compliance violations, and insider threats. The key word is “communications channels,” which presumably means Slack, email, Teams, and whatever else employees use to talk to each other.
This is where it gets interesting and slightly uncomfortable. Communications monitoring isn’t new. Financial services firms have been recording and reviewing trader communications for regulatory reasons for decades. Bloomberg terminals log everything. Compliance teams at banks review flagged messages daily. But those systems use keyword matching and rules-based detection. They catch the obvious stuff and miss everything else.
LLM-powered monitoring is qualitatively different. It can understand context, sarcasm, coded language, and the subtle shifts in tone that precede problematic behavior. A trader who starts complaining about their bonus in September and begins using ambiguous language about “outside opportunities” in October is exhibiting a pattern that a language model can detect and a keyword filter cannot.
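The gap between the two approaches is easy to demonstrate. Below is a toy keyword filter of the kind legacy compliance systems run; the flagged terms and both messages are invented for illustration. The coded message carries the same intent as the obvious one but produces zero keyword hits, which is precisely the case that requires a model of context rather than a pattern list:

```python
import re

# Toy version of a rules-based compliance filter. The term list and
# messages are hypothetical examples, not a real system's ruleset.

FLAGGED_TERMS = re.compile(r"\b(confidential|client list|poach|resign)\b", re.I)

def keyword_flag(message: str) -> bool:
    """Return True if the message contains any flagged term."""
    return bool(FLAGGED_TERMS.search(message))

obvious = "Sending you the confidential client list before I resign."
coded = "Grabbing the usual stuff before my situation changes. You know where to send it."

print(keyword_flag(obvious))  # True  -- the obvious phrasing trips the filter
print(keyword_flag(coded))    # False -- same intent, zero keyword hits
```

An LLM classifier fed the coded message along with surrounding context (who "you" is, what "the usual stuff" has meant in prior threads, the sender's recent tone shift) is the qualitative upgrade Haleum is betting on.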
The company’s Twitter handle is @sophrisai, which suggests the company may have operated under a different name before settling on Haleum. That’s common for early-stage companies and not a red flag.
The Verdict
Haleum is building in a space that makes people uncomfortable, and I think that discomfort is part of the product-market fit.
Companies need to monitor communications for compliance and security reasons. They know this. They don’t love admitting it. The ones that do it well, primarily large financial institutions, use expensive legacy systems that catch obvious violations and miss sophisticated ones. The ones that don’t do it well suffer the consequences in the form of regulatory fines, IP theft, and client poaching by departing employees.
LLMs are genuinely better at understanding human communication than any prior technology. Applying that capability to insider threat detection is logical and powerful, and it raises real questions about employee privacy, false positives, and the boundary between security monitoring and surveillance.
At 30 days, the critical question is false positive rate. If the system flags too many innocuous conversations, compliance teams will ignore it the same way they ignore noisy SIEM alerts. The value of LLM-based detection only materializes if precision is high enough that flagged conversations actually warrant review.
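The arithmetic behind that claim is worth spelling out, because base rates make it brutal. A back-of-envelope sketch, with entirely illustrative numbers (not Haleum's figures): if genuine incidents are rare, even a decent-sounding detector buries reviewers in false positives unless precision is high.

```python
# Back-of-envelope model of alert load. All inputs are illustrative
# assumptions, not measurements of any real system.

def daily_alert_load(messages_per_day, base_rate, recall, precision):
    """Return (total alerts fired per day, false positives among them)."""
    true_incidents = messages_per_day * base_rate
    true_positives = true_incidents * recall
    total_alerts = true_positives / precision
    false_positives = total_alerts - true_positives
    return total_alerts, false_positives

# 1M messages/day, 1 in 100k genuinely problematic, 80% recall
for precision in (0.05, 0.5):
    alerts, fps = daily_alert_load(1_000_000, 1e-5, 0.8, precision)
    print(f"precision={precision:.2f}: {alerts:.0f} alerts/day, {fps:.0f} false positives")
```

Under these assumptions, 5% precision means roughly 160 alerts a day of which about 152 are noise; at 50% precision the queue drops to 16 alerts, half of them real. The first regime trains compliance teams to ignore the tool; the second makes every flag worth reading.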
At 60 days, I’d want to know about the deployment model. Is this a SaaS product that ingests communication data via API integrations? An on-premise solution for firms with strict data residency requirements? Financial services firms in particular will have strong opinions about where their employee communications data lives.
At 90 days, the question is whether regulated industries adopt this or resist it. Banks and asset managers have the clearest need and the most established compliance infrastructure. But they’re also the most cautious about new vendors touching sensitive data.
The market is there. The technology is capable. The team is strong. The question is whether Haleum can navigate the sales cycle in an industry where trust is earned slowly and the consequences of a false positive can ruin someone’s career.