June 3, 2026 edition

omnara

Mobile & Web Interface for Claude Code & Codex

Omnara Lets You Run Claude Code From Your Phone, and That Changes More Than You Think

The Macro: AI Coding Agents Are Stuck on Your Desk

I want to talk about a problem that nobody in developer tooling seems interested in solving. AI coding agents have gotten genuinely good. Claude Code can scaffold projects, refactor messy codebases, write tests, and handle the kind of tedious boilerplate that used to eat entire afternoons. Codex does similar things in its own lane. These tools are real and they work.

But they all assume you are sitting at your laptop, staring at a terminal, babysitting the process. That assumption made sense in 2024 when AI coding was experimental and you needed to watch every line of output like a hawk. It makes less sense now. The agents are reliable enough that you can kick off a task and walk away. Except you cannot actually walk away, because your only interface is the machine running the agent.

This is the same bottleneck that hit CI/CD a decade ago. Jenkins was powerful but you had to be at your desk to monitor builds. Then mobile dashboards and Slack integrations showed up and suddenly you could deploy from an airport lounge. The pattern repeats.

The developer tools market is crowded with products trying to make coding faster. Cursor, Windsurf, Cody, Copilot Workspace. All of them are desktop-first, IDE-first, or terminal-first. Nobody is seriously working on the remote control problem. You start a long refactor, you leave your desk, and you have zero visibility into whether your agent finished, failed, or went sideways. That gap is real and it gets wider as agents handle longer and more complex tasks.

The Micro: Two Robotics Engineers With a Phone-First Thesis

Omnara is a mobile and web client that lets you control Claude Code and Codex running on your laptop from your phone or any browser. You can monitor active coding sessions, review diffs, approve changes, and steer the agent without being physically at your machine. It works across iOS, Android, and web.

Ishaan Sehgal and Kartik Sarangmath built this out of San Francisco as part of Y Combinator’s Summer 2025 batch. The product is free right now, which is the right move for a developer tool trying to build adoption before layering on pricing.

The core interaction model is straightforward. Your coding agent runs on your laptop as normal. Omnara gives you a remote viewport into that session. You see what the agent is doing, you see the code it is producing, you can review diffs and approve or reject changes. Think of it like a baby monitor for your AI pair programmer.
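The general pattern here, stripped of any product specifics, is an approval gate: the agent publishes a pending diff and blocks until a remote reviewer renders a verdict. Omnara's actual architecture is not public, so the sketch below is purely illustrative; every name in it (`SessionRelay`, `PendingChange`, `request_approval`) is hypothetical, and a real system would put a server between the laptop and the phone rather than an in-process queue.

```python
import queue
import threading
from dataclasses import dataclass, field

@dataclass
class PendingChange:
    session_id: str
    diff: str
    # One-slot channel the remote reviewer writes the verdict into.
    decision: queue.Queue = field(default_factory=queue.Queue)

class SessionRelay:
    """Toy in-process relay illustrating the approval-gate pattern.
    All names are invented for this sketch, not Omnara's API."""
    def __init__(self):
        self.pending = queue.Queue()

    def request_approval(self, session_id, diff, timeout=30):
        # Agent side: publish the diff, then block until the reviewer decides.
        change = PendingChange(session_id, diff)
        self.pending.put(change)
        return change.decision.get(timeout=timeout)

    def next_change(self, timeout=5):
        # Client side (phone/web): fetch the next change awaiting review.
        return self.pending.get(timeout=timeout)

relay = SessionRelay()
verdicts = []

def agent_task():
    # The coding agent proposes a change and waits for the verdict.
    verdict = relay.request_approval("sess-1", "+print('hello')")
    verdicts.append(verdict)

worker = threading.Thread(target=agent_task)
worker.start()

change = relay.next_change()      # the "phone" sees the pending diff
change.decision.put("approve")    # reviewer taps Approve
worker.join()
print(verdicts)                   # ['approve']
```

The key property is that the agent can make progress unattended and only parks at the moments that need human judgment, which is exactly what makes a small-screen client viable.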

What makes this interesting is not the technology itself but the workflow it enables. I have personally started Claude Code sessions that ran for twenty minutes while I made coffee, checked email, and generally forgot about them. When I came back, sometimes the output was perfect. Sometimes it had gone in a completely wrong direction and burned tokens on work I had to throw away. If I had been able to glance at my phone five minutes in and course-correct, that waste would have disappeared.

The product is early. The feature set is focused on monitoring and review rather than full-featured mobile coding. That is probably the right constraint for now. Nobody actually wants to write code on their phone. But reviewing a diff, approving a merge, or telling your agent to stop and try a different approach? That works fine on a small screen.

The Verdict

Omnara is solving a problem that will only get bigger. As coding agents handle longer tasks with less supervision, the need for remote monitoring goes up, not down. The question is whether this becomes a standalone product or a feature that Anthropic and OpenAI build directly into their platforms.

I think there is room for a third-party client here, at least for now. The big players are focused on making the agents smarter, not on making the interfaces more flexible. That leaves an opening for a focused team to own the remote access layer.

In thirty days, I want to see how many developers are using this daily versus trying it once and forgetting about it. At sixty days, the question is whether Omnara has expanded beyond monitoring into meaningful remote control. At ninety days, I want to know whether the big agent platforms have started copying the feature. If they have not, Omnara has a real runway. If they have, the team needs to move fast on features that are harder to replicate. The underlying bet is sound. The execution window is narrow but open.