September 12, 2026 edition

hyperspell

Memory for AI Agents

Hyperspell Gives AI Agents a Memory, and That Changes Everything

The Macro: AI Agents Have Amnesia and Nobody Talks About It

The entire AI agent ecosystem has a dirty secret. Every agent starts every session from zero. It does not remember your last conversation. It does not know your coworker’s name. It does not understand that the “Q3 launch” you mentioned yesterday is the same project you are asking about today. This is not a minor UX inconvenience. It is a fundamental limitation that makes most AI agents feel like interns on their first day, every single day.

RAG was supposed to fix this. Retrieval-augmented generation lets you stuff relevant documents into the context window so the model has something to work with. And RAG works, to a point. But it is retrieval, not memory. There is a massive difference between searching a document store and actually understanding the relationships between people, projects, and conversations that accumulate over weeks and months of work.

The companies trying to solve this are approaching it from different angles. LangChain and LlamaIndex offer retrieval infrastructure but leave the memory problem to developers. Mem0 is building memory layers specifically for AI apps. Zep does session memory for chatbots. Pinecone and Weaviate provide the vector database layer but not the intelligence on top. Nobody has nailed the full stack: ingest data from everywhere, build a knowledge graph, and serve contextual memory to agents in real time.

The market timing is interesting because we are right at the inflection point where AI agents are going from demos to production deployments. Companies like Cognition, the maker of Devin, and dozens of others are shipping agents that need to operate autonomously over hours or days. Those agents need persistent memory or they will keep making the same mistakes and asking the same questions. The memory infrastructure layer is going to be as important to the agent ecosystem as databases were to the web application ecosystem. That is not hyperbole. It is just the logical consequence of agents that actually do real work.

The Micro: An Airbnb Knowledge Graph Architect and a Checkr API Boss

Hyperspell connects to your Slack, Gmail, Notion, and Google Drive, then builds what they call an “Agentic Memory Network.” It extracts people, projects, facts, and relationships from your existing data and surfaces that context to AI agents through an SDK. The system learns continuously, so the memory gets better with every query and conversation.
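A minimal sketch of what that kind of extraction pipeline does: pull (person, relation, project) triples out of incoming messages and merge them into a graph that accumulates across sessions rather than resetting to zero. The regex, data, and function names here are all invented for illustration; a production system like Hyperspell's would presumably use an LLM or NER pipeline rather than pattern matching.

```python
import re

# Toy extraction rule: "Dana leads the Q3 launch project" ->
# ("Dana", "leads", "Q3 launch"). Purely illustrative.
PATTERN = re.compile(r"(\w+) (owns|leads) the (.+?) project")

# The graph persists and grows as new messages are ingested.
graph: dict[str, set[tuple[str, str]]] = {}

def ingest(text: str) -> None:
    """Extract triples from one message and merge them into the graph."""
    for person, verb, project in PATTERN.findall(text):
        graph.setdefault(person, set()).add((verb, project))

# Memory accumulates across sources and sessions.
ingest("Note from Slack: Dana leads the Q3 launch project.")
ingest("Email thread: Marco owns the billing migration project.")

print(graph["Dana"])  # {('leads', 'Q3 launch')}
```

The point of the sketch is the merge step: each new message refines an existing structure instead of landing in an undifferentiated document pile, which is what "the memory gets better with every query" implies mechanically.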

Conor Brennan-Burke led a $30 million ARR API business at Checkr, building infrastructure that companies like DoorDash and Airbnb depended on. Manu Ebert is a four-time founder with two exits and over 15 years in machine learning. His last ML startup was acquired by Airbnb, where he built Airbnb’s first Knowledge Graph. That specific credential matters enormously here. Building a knowledge graph for one of the most complex consumer marketplaces in the world is exactly the kind of experience you need to build memory infrastructure for AI agents. It is not a transferable-skills argument. It is literally the same technical problem at a different abstraction layer.

They came through Y Combinator’s Fall 2025 batch. The product is live with paying customers. Hobbes, Scale Agentic, and Intently are among the companies using it. The testimonials are worth reading. Anna Yuan from Scale Agentic said Hyperspell helped them go from concept to paid pilots in 48 hours. Anish Chopra from Intently said they onboard new customers five times faster. Those are not vanity metrics. Those are operational improvements that directly impact revenue.

The integration story is strong. One line of code to add Hyperspell to an existing agent. Pre-built components for user account linking with automatic authentication. SOC 2 certified and GDPR compliant, which matters for enterprise sales. They are processing thousands of documents across multiple data sources and shipping new integrations weekly.

The competitive question is whether memory becomes a feature or a platform. If every agent framework adds basic memory capabilities, Hyperspell could get squeezed. But if memory turns out to be genuinely hard to do well at scale, and I think it will, then the specialized infrastructure play wins. Vector databases were supposed to be a commodity, yet Pinecone still built a billion-dollar business, because the details matter more than the concept.

The Verdict

I think Hyperspell is building one of the most important pieces of the AI agent stack. Memory is the difference between an AI assistant that is occasionally useful and one that is genuinely indispensable. The team has the exact right background for this problem, and the early customer traction suggests the product works.

At 30 days, I want to see how many agent platforms have integrated the SDK and what the retention looks like after the initial onboarding. At 60 days, the question is data volume. How many documents are flowing through the system, and is the knowledge graph actually getting smarter or just getting bigger? Those are very different outcomes. At 90 days, I want to see whether Hyperspell is becoming a dependency that agent builders cannot rip out, the way Stripe became a dependency for payments. If developers start designing their agents around Hyperspell’s memory model rather than bolting it on after the fact, that is the signal that this is infrastructure, not a feature. And infrastructure companies tend to win very big.