The Macro: The Agent Permission Problem Nobody Has Solved Yet
Here’s the awkward truth about AI agents in 2026: they’re only useful if they can touch your stuff, and they’re only safe if they can’t. That tension has been the single biggest blocker to real agent deployment in production environments. Every engineering team building an AI-powered workflow hits the same wall. The agent needs to read your Slack, send emails, update your CRM, and file tickets. But giving an LLM unrestricted access to those systems is the kind of decision that gets people fired.
The current solutions are either too loose or too locked down. Some teams hardcode API keys directly into their agent pipelines, which is functionally the same as giving the intern your admin password and hoping for the best. Others build elaborate approval workflows that require human sign-off on every action, which defeats the purpose of having an agent in the first place. Neither approach scales.
Integration platforms like Zapier and Make handle connections between apps, but they weren’t designed for AI agents. Their permission models assume a human is driving the workflow, not an LLM that might hallucinate a reason to delete your production database. Composio and Nango are closer to the right idea, building integration layers specifically for AI applications, but the permission granularity still lags behind what production deployments actually need.
The companies building agents want to ship faster. The companies deploying agents want to sleep at night. Someone needs to sit between those two needs and make both sides happy.
The Micro: Guardrails at the API Layer, Not the Prompt Layer
Corsair, from Y Combinator’s Winter 2025 batch, is an open-source integration layer that gives AI agents access to external tools while enforcing permission-based guardrails. The tagline sums it up: “Give your agent the keys. Keep the control.”
Dev Jain, the founder and CEO, previously worked as a software engineer at Curri (itself a YC S19 company) and studied mechanical engineering and economics at UT Austin. The team is currently two people, which is lean even by YC standards, but the open-source model means the community is effectively an extension of the engineering team.
The product works through a permission mode system. You set each integration to one of four modes: cautious, strict, open, or readonly. Cautious mode generates review links for risky actions, requiring a human to approve before the agent can proceed. Strict locks things down further. Open lets the agent run free. Readonly is exactly what it sounds like. The key architectural decision is that these guardrails are enforced at the API layer, not in the prompt. That matters because prompt-level restrictions can be bypassed through prompt injection. API-level restrictions cannot.
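To make the four-mode system concrete, here is a minimal sketch of what API-layer enforcement could look like. Everything here is illustrative, the policy table, function names, and the assumption that reads pass in cautious and strict modes are mine, not Corsair's actual configuration format or API. The point is structural: the decision happens in the gateway before any upstream call is made, so nothing the model says in a prompt can change the outcome.

```python
from enum import Enum

class Mode(Enum):
    CAUTIOUS = "cautious"   # risky (write) actions paused for human review
    STRICT = "strict"       # write actions rejected outright
    OPEN = "open"           # everything allowed
    READONLY = "readonly"   # reads only, writes rejected

# Hypothetical per-integration policy table, not Corsair's real config.
POLICY = {"slack": Mode.CAUTIOUS, "github": Mode.READONLY, "linear": Mode.OPEN}

def authorize(integration: str, action: str, is_write: bool) -> str:
    """Decide at the API layer, before the upstream call happens.
    Returns 'allow', 'review' (generate a human-approval link), or 'deny'."""
    mode = POLICY.get(integration, Mode.STRICT)  # default-deny posture
    if mode is Mode.OPEN:
        return "allow"
    if mode is Mode.READONLY:
        return "allow" if not is_write else "deny"
    if not is_write:  # assume reads pass in cautious and strict modes
        return "allow"
    return "review" if mode is Mode.CAUTIOUS else "deny"

print(authorize("slack", "post_message", is_write=True))   # review
print(authorize("github", "list_issues", is_write=False))  # allow
print(authorize("github", "merge_pr", is_write=True))      # deny
```

Because `authorize` runs in the gateway process that actually holds the credentials, a prompt-injected "ignore previous instructions" changes nothing: the model never gets a code path around the check.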
Current integrations include Slack, GitHub, Linear, Gmail, HubSpot, Resend, PostHog, Google Sheets, Google Drive, Google Calendar, Discord, and Tavily for web search, with fifteen more in development. Credentials are stored using envelope encryption, and there’s multi-tenancy support for SaaS applications that need isolated credentials per customer.
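The envelope-encryption and multi-tenancy pieces fit together: each stored credential gets its own data key, and that data key is wrapped under a master key with the tenant identity mixed in, so one customer's record can't be unwrapped in another customer's context. The sketch below shows only that structure; the toy SHA-256 counter-mode cipher stands in for real AES-GCM under a KMS and must never be used in production, and none of these names come from Corsair's codebase.

```python
import hashlib
import os

def _keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode stream cipher, a stand-in for AES-GCM
    # under a KMS. Do not use this construction in production.
    out = bytearray()
    for i in range(0, len(data), 32):
        pad = hashlib.sha256(key + (i // 32).to_bytes(8, "big")).digest()
        out.extend(a ^ b for a, b in zip(data[i:i + 32], pad))
    return bytes(out)

MASTER_KEY = os.urandom(32)  # in practice held by a KMS, never on disk

def store_credential(tenant_id: str, secret: bytes) -> dict:
    """Envelope encryption: a fresh data key per credential, itself
    'wrapped' under the master key. The tenant ID is mixed into the
    wrapping key so records are isolated per customer."""
    data_key, nonce = os.urandom(32), os.urandom(16)
    wrap_key = MASTER_KEY + nonce + tenant_id.encode()
    return {
        "nonce": nonce,
        "wrapped_key": _keystream_xor(wrap_key, data_key),
        "ciphertext": _keystream_xor(data_key, secret),
    }

def load_credential(record: dict, tenant_id: str) -> bytes:
    wrap_key = MASTER_KEY + record["nonce"] + tenant_id.encode()
    data_key = _keystream_xor(wrap_key, record["wrapped_key"])
    return _keystream_xor(data_key, record["ciphertext"])

record = store_credential("acme", b"xoxb-example-slack-token")
print(load_credential(record, "acme") == b"xoxb-example-slack-token")  # True
```

The operational win of the envelope pattern is that rotating the master key only means re-wrapping the small data keys, not re-encrypting every stored credential.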
The webhook handler system lets agents react to real-time events rather than polling, and there’s a plugin system for adding custom REST API integrations. The whole thing ships under Apache 2.0, a permissive open-source license with an explicit patent grant, which removes adoption friction for enterprise customers who care about that sort of thing.
What I don’t see yet is pricing for a managed version, hosted infrastructure, or SLAs. For a two-person team, that’s expected. But it also means any company deploying Corsair today is self-hosting and self-supporting.
The Verdict
This is one of those products that solves a problem so clearly that the pitch almost writes itself. Every team building AI agents needs an answer to the permission question, and most of the existing answers are bad.
At 30 days, I’d want to see adoption metrics on the open-source repo. Stars and forks are vanity, but actual issues and pull requests tell you whether developers are building real things on top of this.
At 60 days, the question is whether Corsair launches a hosted version. Open source builds credibility, but a managed offering is where the business model lives. If the team stays at two people and self-hosted only, growth will be limited by how much support they can provide.
At 90 days, the integration count matters more than anything. A dozen live integrations with fifteen more in development is a good start, but the value of a platform like this scales directly with how many tools it connects to. If a developer’s critical integration isn’t supported, they’ll build their own, and once they do, they have less reason to adopt Corsair for anything else.
The open-source bet is smart for this category. Developers evaluating infrastructure for their AI agents aren’t going to trust a black box, especially one that handles credentials and permissions. Being able to read the code is table stakes for this kind of tool.
Two-person team, real problem, clean architecture. The risk is execution speed in a market that’s moving very fast.