August 4, 2026 edition

contextfort

OS-level telemetry for AI coding agents

ContextFort Watches Your AI Agents So You Don't Have to Panic About What They're Doing

AI · Cybersecurity · Developer Tools

The Macro: Nobody Knows What AI Agents Are Actually Doing on Your Machine

I am going to describe a situation and you tell me if it sounds reasonable. You install an AI coding agent on your laptop. You give it access to your terminal, your file system, your environment variables, and your SSH keys. The agent can read any file, write any file, make network requests, spawn subprocesses, and install packages. You have no independent record of what it did during your session.

That is the current state of AI coding tools for most engineering teams. And it is completely insane from a security perspective.

Cursor, Claude Code, Windsurf, Copilot agents, Devin. All of them operate with broad system permissions. They need those permissions to be useful. An AI coding agent that cannot read your codebase or run your tests is not much of an agent. But the tradeoff is that you are trusting a system you do not fully control to behave responsibly with access to your entire development environment.

Traditional endpoint detection tools were not built for this problem. CrowdStrike and SentinelOne are watching for malware signatures, suspicious process trees, and known attack patterns. They are not designed to answer the question “did my AI agent quietly read my .env file and send my API keys to an external server?” That is not a malware detection problem. That is an observability problem.

The security industry has been slow to address this because the AI agent category is moving so fast that security tooling cannot keep up. By the time a security vendor builds support for one agent, three more have launched. Meanwhile, engineering teams are adopting these tools at a pace that makes security teams nervous. And rightfully so.

This is not a hypothetical risk. There have already been documented cases of AI agents leaking credentials through context windows, making unexpected network calls to model providers, and modifying files outside the scope of what they were asked to do. Not maliciously. Just carelessly, the way a very capable but unsupervised contractor might.

The Micro: One Founder, OS-Level Eyes on Every Agent

ContextFort is building OS-level telemetry specifically for AI coding agents. It sits below the agent, at the operating system layer, and records everything the agent does. File reads, file writes, network connections, subprocess spawns, package installations. Every action gets logged independently of the agent itself, which means the agent cannot tamper with or omit entries from its own audit trail.
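The tamper-evidence claim is the interesting part. One standard way to make an append-only audit trail verifiable is hash chaining, where each entry commits to the hash of the previous one, so editing or deleting any record breaks every hash after it. The article does not say ContextFort uses this exact scheme, so treat the sketch below as an illustration of the general technique, not their implementation:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event to a hash-chained audit log.

    Each entry stores the SHA-256 of the previous entry, so any later
    edit or deletion invalidates every hash that follows it.
    """
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log):
    """Recompute every hash; return False if any entry was tampered with."""
    prev_hash = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"type": "file_read", "path": "/home/dev/.env"})
append_entry(log, {"type": "net_connect", "dst": "api.example.com:443"})
assert verify(log)

log[0]["event"]["path"] = "/tmp/innocent.txt"  # an agent tries to rewrite history
assert not verify(log)
```

The important property for this product category is that verification requires nothing but the log itself, so a security team can detect omissions or edits after the fact without trusting the agent.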

Ashwin Ramachandran is the founder. He has a CS background from IIT Bombay and UC San Diego, and he is running a two-person team out of San Francisco as part of Y Combinator’s Summer 2025 batch. The company is lean, which makes sense for a product that needs to be technically precise rather than broadly featured.

The architecture choices are smart. ContextFort uses eBPF on Linux, Apple's Endpoint Security framework on macOS, and Event Tracing for Windows (ETW) combined with a filesystem minifilter driver on Windows. These are not application-layer hooks that can be bypassed. They are kernel-level and OS-level mechanisms that capture system calls regardless of what the agent thinks it is doing. If Cursor reads your SSH private key at 2 AM, ContextFort logs it. If Claude Code makes an outbound connection to a domain you did not expect, ContextFort logs it.
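For intuition about what "capture the action regardless of what the agent thinks it is doing" means, here is a much weaker, interpreter-level analogue using Python's built-in audit hooks. This is not kernel-level telemetry and is nothing like ContextFort's actual implementation; it just demonstrates the same event families (file opens, socket connections, subprocess spawns) being observed from outside the code that performs them:

```python
import sys
import tempfile

captured = []

def audit_hook(name, args):
    # Record only the event families a telemetry layer would care about.
    if name in ("open", "socket.connect", "subprocess.Popen"):
        captured.append(name)

# Once installed, the hook fires for matching events anywhere in this
# interpreter, whether or not the calling code wants to be observed.
sys.addaudithook(audit_hook)

with tempfile.NamedTemporaryFile() as f:
    f.write(b"hello")  # creating this file raised an "open" audit event

assert "open" in captured
```

The kernel-level versions named above sit one layer lower: they see every process on the machine, not just one interpreter, which is why they cannot be bypassed by the agent.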

The product is not trying to block agents or make decisions about what is safe. It is answering a simpler and more useful question: what did this agent do? That is the right framing. Security teams do not want another tool that generates false positives and blocks developer workflows. They want visibility. They want an audit trail they can query when something goes wrong, and a monitoring layer they can use to set policies about what agents should and should not be doing.
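As a sketch of what "visibility you can query, plus policies you can express" could look like in practice, consider flagging reads of known-sensitive paths and outbound connections to hosts outside an allowlist. The event schema and rules below are invented for illustration and are not ContextFort's:

```python
import fnmatch

# Hypothetical event records, in the shape an OS-level logger might emit.
EVENTS = [
    {"type": "file_read", "path": "/home/dev/project/main.py"},
    {"type": "file_read", "path": "/home/dev/.ssh/id_ed25519"},
    {"type": "net_connect", "host": "api.anthropic.com"},
    {"type": "net_connect", "host": "paste.example.net"},
]

SENSITIVE_GLOBS = ["*/.ssh/*", "*/.env", "*/.aws/credentials"]
ALLOWED_HOSTS = {"api.anthropic.com", "api.openai.com"}

def findings(events):
    """Yield human-readable findings; report, never block."""
    for ev in events:
        if ev["type"] == "file_read" and any(
            fnmatch.fnmatch(ev["path"], g) for g in SENSITIVE_GLOBS
        ):
            yield f"sensitive read: {ev['path']}"
        elif ev["type"] == "net_connect" and ev["host"] not in ALLOWED_HOSTS:
            yield f"unexpected connection: {ev['host']}"

for finding in findings(EVENTS):
    print(finding)
```

Note that the rules only surface findings rather than blocking anything, which matches the framing above: the hard product problem is ranking and filtering these findings so the trail is actionable instead of noisy.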

The competitive landscape here is thin. Most cybersecurity companies are focused on traditional endpoint protection or cloud security posture management. Stract is doing some work on data loss prevention for AI tools. Prompt Security is focused on prompt injection and model-level threats. But nobody else is doing kernel-level observability specifically for AI coding agents. That is a narrow wedge, but it is a real one, and it grows every time another team adopts an agentic coding tool.

The product is currently demo-stage, which fits the timeline for a Summer 2025 batch company. No public pricing. The go-to-market is likely enterprise and mid-market engineering orgs that are already using AI coding tools and have compliance requirements that make “just trust the agent” an insufficient answer.

The Verdict

I think ContextFort is building something that should exist and probably needs to exist before AI agents become standard infrastructure in every engineering organization. The permission model for AI coding tools is genuinely broken. Agents have too much access and too little accountability. Someone has to build the observability layer, and building it at the OS level rather than the application level is the correct architectural decision.

The risk is timing. If enterprise adoption of AI coding agents slows down, the urgency for this product diminishes. If the major agent platforms build robust logging and sandboxing into their own products, the third-party observability play gets squeezed. Cursor could ship an audit log feature tomorrow and take a big chunk of the value proposition.

In thirty days, I want to know how many engineering organizations are running pilots. This is a product where design partners matter more than public launch metrics. In sixty days, I want to see whether the telemetry data is actionable or just noisy. Logging everything is easy. Making that data useful for security decisions is the actual product challenge. In ninety days, the question is whether ContextFort can turn visibility into policy enforcement without becoming the kind of heavy-handed security tool that engineers route around. The line between “helpful monitor” and “annoying blocker” is thin, and staying on the right side of it will determine whether this gets adopted or uninstalled.