Category: Infrastructure

110 features

Alter

Alter Built Zero-Trust Security for AI Agents Because Nobody Else Did

AI agents are calling APIs, querying databases, and executing transactions with long-lived API keys and zero oversight. Alter intercepts every tool call an AI agent makes and enforces parameter-level authorization, ephemeral credentials, and real-time guardrails. Two founders in New York are building the identity layer that the AI agent ecosystem forgot to build.
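The core idea of parameter-level authorization can be sketched in a few lines. This is a hypothetical illustration, not Alter's actual API: every name here (`ToolPolicy`, `authorize`, the refund example) is invented for the sketch. The point is the deny-by-default shape: a tool call is allowed only if every parameter it passes has a rule, and every rule accepts the supplied value.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical sketch of parameter-level authorization; none of these
# names come from Alter's product.
@dataclass
class ToolPolicy:
    """Rules for one tool: each parameter name maps to a predicate
    that must pass before the call is allowed."""
    rules: dict[str, Callable[[Any], bool]] = field(default_factory=dict)

def authorize(policy: ToolPolicy, params: dict[str, Any]) -> bool:
    """Deny by default: every supplied parameter must have a rule,
    and every rule must accept the supplied value."""
    return all(
        name in policy.rules and policy.rules[name](value)
        for name, value in params.items()
    )

# Example: allow refunds only under $100 and only to known accounts.
refund_policy = ToolPolicy(rules={
    "amount": lambda v: isinstance(v, (int, float)) and 0 < v <= 100,
    "account": lambda v: v in {"acct_123", "acct_456"},
})

authorize(refund_policy, {"amount": 25, "account": "acct_123"})    # allowed
authorize(refund_policy, {"amount": 5000, "account": "acct_999"})  # denied
```

A real interceptor would sit between the agent and the tool, swap the long-lived key for an ephemeral credential only after the check passes, and log the decision.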

TraceRoot AI

TraceRoot Wants AI Agents to Fix Your Production Bugs Before You Wake Up

Production debugging is still mostly manual. An engineer gets paged, opens Datadog, stares at logs, traces the issue across three services, and writes a fix four hours later. TraceRoot built an open-source AI agent that connects to your telemetry, traces the root cause, and drafts the pull request. The SDK has over 10,000 downloads and the founding team fixed 300+ production bugs at Meta and AWS before deciding to automate themselves out of a job.

Manufact

Manufact Has 5 Million Downloads and NASA on Its Client List. Here Is Why MCP Infrastructure Matters.

The Model Context Protocol is becoming the standard way AI agents connect to the outside world. Manufact, formerly mcp-use, built the open-source SDK that 4,000 companies already depend on. Now they are building the cloud infrastructure layer on top. With NASA, NVIDIA, and SAP as customers and 5 million downloads, this three-person team from Zurich and San Francisco is positioning itself as the default MCP platform.
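MCP's core loop is simple: an agent discovers a server's tools via a `tools/list` request and invokes one via `tools/call`. The stdlib-only dispatcher below is a schematic of that pattern, not Manufact's SDK; the tool registry, handler shape, and function names are all invented for illustration.

```python
import json

# Schematic MCP-style server: a registry of tools, discovered via
# "tools/list" and invoked via "tools/call". Names are illustrative only.
TOOLS = {
    "get_weather": {
        "description": "Return the weather for a city.",
        "handler": lambda args: f"Sunny in {args['city']}",
    },
}

def handle_request(raw: str) -> str:
    """Dispatch a JSON-RPC-style request the way an MCP server would."""
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = [{"name": n, "description": t["description"]}
                  for n, t in TOOLS.items()]
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = tool["handler"](req["params"]["arguments"])
    else:
        return json.dumps({"id": req["id"], "error": "unknown method"})
    return json.dumps({"id": req["id"], "result": result})
```

The real protocol runs this exchange as JSON-RPC 2.0 over stdio or HTTP, with capability negotiation on connect; the infrastructure play is hosting, authenticating, and scaling servers like this one.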

Halluminate

Halluminate Is Building the Gym Where AI Agents Learn to Do Real Work

Everyone wants to build AI agents that use computers like humans. The problem is you cannot train those agents on production systems without breaking things. Halluminate builds realistic sandbox environments that replicate Salesforce, Slack, and enterprise tools so AI labs can train and benchmark computer use agents safely.

Luminal

Luminal Built an ML Compiler That Makes vLLM Look Slow

Everyone is fighting over which model to run. Luminal is fighting over how fast you can run any of them. Their ahead-of-time compiler turns AI models into optimized GPU code and is already beating vLLM and TensorRT-LLM on throughput benchmarks. Three people, $5.3 million, and a very different theory of how inference should work.