The Macro: The Gap Between AI Demos and Physical Reality
There’s a growing divide in the AI world that doesn’t get enough attention. On one side, you have language models and image generators getting better every quarter, shipping products, generating revenue, and attracting billions in investment. On the other side, you have physical AI: robots that can actually interact with the real world. It’s progressing much more slowly despite arguably mattering more in the long term.
The challenge is fundamental. A chatbot that hallucinates gives you a wrong answer. An autonomous weapons system that hallucinates drops a bomb on the wrong target; a surgical robot that hallucinates crushes a patient’s organ. The error tolerances in physical AI are orders of magnitude tighter, and the training data is orders of magnitude harder to collect. You can’t scrape the internet for robot manipulation data the way you can for text.
The current approaches fall into a few camps. Boston Dynamics builds incredible hardware but has struggled to find commercial applications beyond warehouse logistics and inspection. Covariant (now part of Amazon) focused on warehouse picking with reinforcement learning. Figure AI raised billions to build humanoid robots but is still pre-revenue. Toyota Research Institute has published impressive manipulation research but operates at academic timelines. In defense, companies like Anduril and Shield AI are building autonomous systems, but their focus is platforms and sensors, not the foundational manipulation AI.
The missing piece is general-purpose manipulation intelligence: a robot brain that can learn to handle new objects, new tasks, and new environments without being reprogrammed from scratch. The teleoperation data problem is particularly acute. You need human demonstrations to train manipulation models, but collecting that data requires expensive hardware setups and skilled operators. Whoever solves the data pipeline for physical AI will have a significant structural advantage.
The Micro: A Solo Founder With a GitHub Repo and a Defense Pitch
General Trajectory is building AI for autonomous defense systems and scientific R&D applications. The company focuses on robot manipulation and has open-sourced a teleoperation stack called dex-teleop on GitHub. The website is sparse: just the company name. That’s either a sign of extreme stealth or extreme early stage. Probably both.
Joshua Belofsky is the sole founder. He’s a recent graduate with a background in academic machine learning research. He came through YC’s Winter 2025 batch with Harj Taggar as his primary partner. The team size is listed as one, which makes this one of the leanest companies in the batch.
That open-source teleop stack is worth paying attention to. Teleoperation is how you collect training data for robot manipulation models. A human operator controls a robot remotely, and those demonstrations become the training set for autonomous behavior. The quality of your teleop system directly determines the quality of your training data, which directly determines the quality of your autonomous AI. By open-sourcing this tool, General Trajectory is doing two things: building community and establishing credibility in a field where published work matters.
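The core of that pipeline is conceptually simple: log synchronized observation/action pairs while a human drives the robot, then use the operator’s commands as supervised labels for imitation learning. A minimal sketch of that loop might look like the following (all names here are hypothetical illustrations, not taken from dex-teleop):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of a teleop demonstration recorder. The key idea:
# every timestep pairs what the robot observed with what the human
# operator commanded, and those pairs become the imitation-learning dataset.

@dataclass
class DemoRecorder:
    steps: List[Tuple[tuple, tuple]] = field(default_factory=list)

    def record(self, observation: tuple, operator_action: tuple) -> None:
        """Log one synchronized (observation, action) pair."""
        self.steps.append((observation, operator_action))

    def to_dataset(self) -> Tuple[list, list]:
        """Split the episode into parallel observation/action lists
        suitable for supervised (behavior-cloning) training."""
        if not self.steps:
            return [], []
        obs, acts = zip(*self.steps)
        return list(obs), list(acts)

# Simulated teleop episode: stand-in values for robot state and operator input.
rec = DemoRecorder()
for t in range(3):
    joint_angles = (0.1 * t, 0.2 * t)   # stand-in for sensed robot state
    operator_cmd = (0.05, -0.05)        # stand-in for the operator's command
    rec.record(joint_angles, operator_cmd)

observations, actions = rec.to_dataset()
print(len(observations), len(actions))  # 3 3
```

This is why teleop quality propagates so directly to model quality: noisy synchronization or laggy operator input corrupts the action labels themselves, not just the inputs.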
The defense angle positions the company in a market with enormous budgets and urgent demand. The Department of Defense is actively seeking autonomous systems that can operate in contested environments. The Replicator initiative aims to field thousands of autonomous platforms. Companies like Anduril, Shield AI, and Skydio have demonstrated that defense tech startups can scale quickly with the right contracts. But most of these companies are building drones, sensors, and software platforms. Few are focused on the manipulation layer, the AI that lets a robot physically interact with objects in unstructured environments.
The Verdict
I think General Trajectory is one of the most interesting bets in the YC Winter 2025 batch precisely because it’s hard to evaluate from the outside. There’s no polished landing page, no customer testimonials, no demo video. There’s a GitHub repo, a YC profile, and a founder with ML research credentials. That’s either nothing or exactly what the earliest stage of a significant robotics company looks like.
The solo founder question is real. Robotics companies typically need hardware expertise, software expertise, and domain expertise in their target market. Doing all of that as one person is extraordinarily difficult. The defense sales cycle alone requires navigating SBIR grants, contracting officers, and security clearances. Most defense tech founders bring a co-founder with military or government experience specifically because the sales motion is so different from commercial tech.
At 30 days, I’d want to see who’s using the open-source teleop stack and what kind of data they’re collecting with it. At 60 days, the question is whether there’s a defense contract or pilot in progress. The money in defense AI is real but slow to arrive. At 90 days, I’d look for a co-founder announcement or early team hires. The vision is compelling, and the technical direction makes sense. The execution risk is high because the problem is genuinely hard and the team is genuinely small. But the upside of getting physical AI right is so large that even a long shot in this space is worth watching.