The Macro: Streaming Infrastructure Is Still Way Too Hard
I want to say something that might upset some backend engineers: most teams running Kafka should not be running Kafka. They set it up because they needed event streaming. They ended up with a distributed system that requires dedicated operational expertise, costs significant money to run, and punishes misconfiguration severely. Confluent made it easier. Redpanda made it faster. Neither made it simple.
The streaming data market has been growing relentlessly because the use cases keep multiplying. Real-time analytics. Event sourcing. Change data capture. Agent communication. Multiplayer collaboration. IoT telemetry. Every one of these needs a durable, ordered, low-latency stream of events. But the infrastructure options have historically been: (a) Kafka or a Kafka-compatible system, which is powerful but operationally heavy, (b) a managed pub/sub service from a cloud provider, which is simpler but locks you in and can get expensive fast, or (c) rolling your own with Redis Streams or a database-backed queue, which works until it does not.
What is missing from this landscape is something that treats streams the way S3 treats files. A cloud primitive. You create streams, you write to them, you read from them. No clusters. No partitions to manage. No ZooKeeper (or KRaft, or whatever coordination layer the current Kafka version demands). Just streams, unlimited in number, bottomless in storage, available via REST.
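To make the "S3 for streams" mental model concrete, here is a toy in-memory sketch of what that surface looks like. The names (`create_stream`, `append`, `read`) and the record format are illustrative assumptions, not s2's actual API; the point is the shape: named streams of ordered records, nothing else.

```python
# Toy in-memory model of a "stream as cloud primitive" surface.
# These names are made up for illustration -- NOT s2's real API.
# The point: no clusters, no partitions; just named, ordered streams.

class StreamStore:
    def __init__(self) -> None:
        self._streams: dict[str, list[bytes]] = {}

    def create_stream(self, name: str) -> None:
        # Creating a stream is as cheap as creating a dict entry.
        self._streams.setdefault(name, [])

    def append(self, name: str, record: bytes) -> int:
        """Append a record; return its sequence number in the stream."""
        log = self._streams[name]
        log.append(record)
        return len(log) - 1

    def read(self, name: str, start_seq: int = 0) -> list[bytes]:
        """Read all records from start_seq onward, in order."""
        return self._streams[name][start_seq:]

store = StreamStore()
store.create_stream("session/abc123")
store.append("session/abc123", b"agent: tool_call")
store.append("session/abc123", b"agent: tool_result")
print(store.read("session/abc123", start_seq=1))  # everything after seq 0
```

That is the entire conceptual API. The real service adds durability, replication, and REST transport, but the developer-facing model is no bigger than this.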
That is the pitch for s2.dev, and I think it is exactly right for this moment.
The Micro: Three Founders, One Very Specific Bet
Shikhar Bhushan (CEO), Stephen Balogh (CTO), and Dwarak Govind Parthiban cofounded s2.dev and brought it through Y Combinator’s Fall 2025 batch. The three of them are making a bet that streams should be as easy to create and use as S3 buckets. Not easier-Kafka. Not managed-Kafka. A different thing entirely.
The core specs are compelling. Sub-50ms producer-to-consumer latency at p99 in the same region. Up to 100 MiBps write throughput per stream. Unlimited streams with no constraints on granularity (meaning you can create a stream per user, per session, per device, whatever makes sense for your data model, without worrying about hitting some partition limit). Bottomless storage backed by object storage for cost efficiency. SOC 2 compliant.
It is built in Rust, which is worth mentioning because it is not a marketing decision. Rust gives you memory safety without garbage collection pauses, which matters enormously for a system promising consistent sub-50ms latency. The team also uses deterministic simulation testing, which is the same correctness methodology that FoundationDB pioneered. If you are building a data storage system and you are not doing this kind of testing, I am skeptical. The fact that s2 does it is a good sign.
The use cases they highlight are telling. Local-first and multiplayer experiences. Agent session event sourcing. Real-time broadcast feeds. Sandbox execution streaming. These are not traditional Kafka workloads. These are the new streaming patterns that have emerged from the AI agent wave and the local-first movement. Kafka was designed for LinkedIn-scale log aggregation. s2 is designed for a world where every AI agent session, every collaborative document, and every real-time feed needs its own durable, ordered stream.
Competitors in this space sort into tiers. Kafka and Confluent own the high end. Redpanda competes on Kafka compatibility with better performance. AWS Kinesis and Google Cloud Pub/Sub offer cloud-native managed options. WarpStream is trying to make Kafka cheaper by decoupling storage onto object storage. Upstash offers serverless Redis Streams and Kafka. But none of these approach the problem from the "storage primitive" angle. They are all either Kafka-shaped or queue-shaped. s2 is trying to be something more fundamental.
There is an open-source version called s2-lite for self-hosting, which is a smart move for adoption. Let developers try it locally, build muscle memory with the API, and then upgrade to the managed service when they need production reliability.
The pricing page exists but does not reveal specific tiers beyond a free offering. I would guess the model is usage-based (bytes stored and transferred), which is the natural fit for a storage-shaped product.
The Verdict
This is the kind of infrastructure bet that either becomes invisible or becomes foundational. If s2 gets the developer experience right and the reliability holds, it could become the default answer to “I need a stream” the way S3 became the default answer to “I need a file.” That is a big if, but the technical foundations look solid.
At 30 days, I want to see what the early adopters are building on top of it. The highlighted customers (SafeDep, OnKernel, Trigger.dev, Beam) suggest the agent and automation ecosystem is the beachhead. At 60 days, the question is whether the “unlimited streams” pitch holds up under real production load from customers who take it literally and create thousands of streams. At 90 days, I want latency and uptime data from production deployments. Infrastructure products live and die by their SLAs, and the first three months of production data will tell you whether the architecture can deliver on the promise.
I think the timing is right. The AI agent wave is creating new streaming patterns that Kafka was never designed for. If you need a stream per agent session, Kafka makes you think about partitions and consumer groups and cluster sizing. s2 makes you think about streams. That difference in mental model is the product.