The Macro: Weather Prediction Is Getting Its AI Moment
Weather forecasting has been one of the most computationally expensive scientific problems for the better part of a century. The basic approach has not changed much since the 1950s: divide the atmosphere into a grid, apply the equations of fluid dynamics, and run the simulation forward in time on the biggest supercomputer you can find. NOAA operates the Global Forecast System (GFS). The European Centre for Medium-Range Weather Forecasts (ECMWF) runs the Integrated Forecasting System (IFS), which is generally considered the best operational weather model in the world. Both cost hundreds of millions of dollars to operate annually.
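The grid-plus-timestep structure described above can be sketched in a few lines. This toy advects a 1-D "pressure blob" along a periodic grid with a first-order upwind finite-difference scheme; real operational models do the same thing in 3-D with the full fluid-dynamics equations and far more sophisticated numerics, so treat this purely as an illustration of the approach, not of any production model.

```python
# Toy illustration of grid-based forecasting: a 1-D field advected by a
# constant wind, stepped forward in time on a periodic grid.

def step(field, wind_speed, dx, dt):
    """Advance the field one time step using first-order upwind differencing."""
    c = wind_speed * dt / dx  # Courant number; must be <= 1 for stability
    n = len(field)
    return [field[i] - c * (field[i] - field[(i - 1) % n]) for i in range(n)]

def simulate(field, wind_speed, dx, dt, steps):
    for _ in range(steps):
        field = step(field, wind_speed, dx, dt)
    return field

# A single "pressure blob" on a 100-point periodic grid.
grid = [1.0 if 40 <= i < 50 else 0.0 for i in range(100)]
forecast = simulate(grid, wind_speed=1.0, dx=1.0, dt=0.5, steps=40)
```

After 40 steps the blob has drifted downstream (and smeared, a known artifact of the upwind scheme). Scaling this idea to the real atmosphere is exactly why operational centers need supercomputers.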
These models are good. They are not good enough. A 5-day forecast is roughly as accurate today as a 3-day forecast was in 1990. Progress has been real but slow, and it has come primarily from better hardware rather than fundamentally better algorithms. The physics-based approach has diminishing returns because the atmosphere is chaotic, and adding resolution to the grid provides less and less improvement as you push further out in time.
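The chaos point can be made concrete with the Lorenz-63 system, Edward Lorenz's classic toy model of atmospheric convection: two simulations that start almost identically diverge by many orders of magnitude, which is why extra grid resolution buys less and less at long lead times. (Forward Euler and the step count here are arbitrary illustration choices.)

```python
# Sensitive dependence on initial conditions in the Lorenz-63 system.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run(state, steps):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = run((1.0, 1.0, 1.0), 2000)
b = run((1.0, 1.0, 1.0 + 1e-9), 2000)  # perturbed by one part in a billion
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
```

A one-part-in-a-billion perturbation grows by orders of magnitude over the run. No measurement of today's atmosphere is precise enough to escape that amplification.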
Starting around 2022, researchers began showing that neural networks trained on historical weather data could match or beat physics-based models at certain forecast horizons. Google DeepMind released GraphCast, which outperformed ECMWF’s HRES model on 90% of weather variables at lead times from 1 to 10 days. Huawei’s Pangu-Weather showed similar results. Microsoft Research built Aurora, which extended the approach beyond weather to atmospheric chemistry and air quality.
These were research demonstrations. None of them became products. The models showed that AI weather forecasting works, but they did not build the infrastructure to serve forecasts to actual customers. Insurance companies, energy traders, agricultural operations, and logistics firms all depend on weather data and would pay for better predictions. The gap between “we published a paper showing our model beats ECMWF” and “here is an API that gives you a forecast for your specific region at the resolution you need” is enormous.
That gap is where the commercial opportunity lives.
The Micro: Stanford and Cambridge Researchers Decode the Atmosphere
Silurian is building what they call a foundational model of Earth, starting with weather. Their core model is GFT (Generative Forecasting Transformer), a 1.5-billion-parameter model that generates global weather simulations up to 14 days out at roughly 11-kilometer resolution. According to their benchmarks, it outperforms both NOAA’s GFS and ECMWF’s IFS by up to 30% on key metrics. They have trained models ranging from 100,000 to 10 billion parameters and released the model weights as open source.
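Models in this family (GraphCast, Pangu-Weather, Aurora) typically generate multi-day forecasts autoregressively: the network predicts the atmospheric state one step ahead, and its own output is fed back in to roll the forecast forward. The sketch below illustrates that loop; the six-hour step size and the interface are assumptions drawn from the published predecessors, not Silurian's actual GFT code.

```python
# Hedged sketch of autoregressive rollout, the standard way AI weather
# models extend a one-step predictor to a 14-day forecast. The step size
# and interface are illustrative assumptions, not GFT's real API.

STEP_HOURS = 6  # assumed model time step, as in GraphCast/Aurora

def rollout(model, state, lead_time_hours):
    """Iterate a one-step predictor out to the requested lead time."""
    states = [state]
    for _ in range(lead_time_hours // STEP_HOURS):
        state = model(state)  # one learned step; output becomes next input
        states.append(state)
    return states

# Stand-in "model": decays a scalar anomaly toward zero each step.
toy_model = lambda s: 0.9 * s
trajectory = rollout(toy_model, 1.0, lead_time_hours=14 * 24)
```

One design consequence: errors compound at every step, so a 14-day forecast takes 56 of these six-hour steps and inherits whatever bias the one-step model has, amplified. That compounding is a central challenge these models are trained against.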
Open-sourcing the weights is a significant decision. It builds credibility in the research community, attracts talent, and creates a moat through ecosystem rather than secrecy. It also means competitors can see exactly what you have built, which is only a good strategy if you are confident you can stay ahead on data, compute, and product development. Silurian seems to be making that bet.
The founding team is stacked with weather AI credentials. Jayesh K. Gupta is CEO and co-founder. He was formerly Head of AI at Poly Corporation and led the development of the first foundation model for weather and climate at Microsoft. He co-authored Aurora, which is one of the landmark papers in AI weather prediction. He holds a PhD in Computer Science from Stanford. Cristian Bodnar is Chief Scientist. He was a Senior Researcher at Microsoft Research where he co-led the Aurora foundation model development. His PhD is from Cambridge, and he has done stints at Google Brain, Google X, and Twitter Cortex. Nikhil Shankar is co-founder and Chief Engineer. He was a software engineer at AWS SageMaker and dropped out of a PhD in applied mathematics focused on fluid dynamics to join the company.
This is the team that literally built the predecessor technology at Microsoft and then left to commercialize it independently. That is about as strong a founding signal as you can get in deep tech. They are not approaching weather AI from the outside. They are the people who proved it works in the first place.
The company came through Y Combinator’s Summer 2024 batch. They are based in Kirkland, Washington, which puts them near the University of Washington’s atmospheric sciences department, one of the strongest in the country. The team is six people, which is small but appropriate for a company where the core work is model training and inference infrastructure rather than enterprise sales.
The product side is coming together. They have a Weather API available at earth.weather.silurian.ai and an Earth monitoring interface at earth.silurian.ai. The commercial pitch is straightforward: tailored AI models built on customer data feeds and infrastructure information, plus high-resolution regional weather forecasts with rapid refresh rates. Insurance companies need granular weather data for risk pricing. Energy traders need wind and solar production forecasts. Agricultural operations need precipitation and temperature predictions at the field level. Logistics companies need storm tracking for route optimization.
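For a sense of what consuming a forecast API looks like for one of these verticals, here is a minimal sketch of parsing a forecast payload, say, to pull the warmest hour for an agricultural customer. The JSON schema below is invented for illustration; it is not Silurian's actual API format, and no network call is made.

```python
# Hypothetical forecast-payload parsing. The schema is an invented
# illustration, NOT Silurian's real Weather API response format.
import json

sample_response = json.dumps({
    "lat": 47.68, "lon": -122.19,  # Kirkland, WA
    "hourly": [
        {"hours_ahead": h, "temp_c": 12.0 + 0.1 * h, "precip_mm": 0.0}
        for h in range(0, 24, 6)
    ],
})

def max_temp(raw):
    """Pick the warmest forecast hour out of a payload."""
    forecast = json.loads(raw)
    return max(point["temp_c"] for point in forecast["hourly"])
```

The commercial point is that each vertical wants a different slice of the same underlying forecast, which is why a tailored API layer on top of the model is the product, not the model itself.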
Each of these verticals currently relies on a patchwork of government forecasts, private weather companies like Tomorrow.io and DTN, and in-house meteorologists. The Weather Company provides data to most major enterprises. Tomorrow.io raised over $300 million and focuses on logistics and aviation. Climavision is building a private radar network. None of these companies are training their own foundation models from scratch.
The Verdict
Silurian is one of the most technically credible AI startups I have covered. The founding team built the foundational research at Microsoft, left to commercialize it, open-sourced the model weights to build community credibility, and is now building the product layer on top. The technical moat is real and deep. Training weather foundation models requires specialized expertise in atmospheric science, massive compute budgets, and access to decades of historical weather data. You cannot hire three engineers out of a bootcamp and replicate this.
The risk is commercialization speed. Deep tech companies with research-grade founding teams sometimes struggle with the transition from “impressive demo” to “product that enterprises pay for monthly.” The Weather API is a good sign because it shows they are thinking about productization early. But weather data is a market with established purchasing patterns, and inserting a new provider into existing workflows takes time and trust.
At 30 days, I want to see customer conversations in insurance or energy trading. Those verticals have the highest willingness to pay for better weather predictions. At 60 days, I want to know whether the claimed 30% improvement over ECMWF holds up in real-world deployment or whether it degrades when customers need hyper-local forecasts rather than global predictions. At 90 days, the question is revenue. Can Silurian charge enough per API call to sustain the compute costs of running a 1.5 billion parameter model at inference time, or do the unit economics require scale they have not yet achieved? The science is there. The team is there. Now they need to prove the business is there too.
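The unit-economics question reduces to simple arithmetic, which readers can run with their own numbers. Every figure below is a placeholder assumption for illustration; none is a reported Silurian metric.

```python
# Back-of-envelope margin per API call after inference compute.
# All inputs are placeholder assumptions, not reported figures.

def gross_margin(price_per_call, gpu_cost_per_hour, calls_per_gpu_hour):
    """Margin per API call after inference compute, ignoring all overhead."""
    compute_cost_per_call = gpu_cost_per_hour / calls_per_gpu_hour
    return price_per_call - compute_cost_per_call

# Example knobs: a $2/hour GPU amortized over 400 calls/hour, priced at $0.01/call.
margin = gross_margin(price_per_call=0.01, gpu_cost_per_hour=2.0,
                      calls_per_gpu_hour=400)
```

The lever that matters is calls per GPU-hour: batching many customers' requests onto one model pass is how inference-heavy API businesses make the math work, which is why scale is part of the answer, not just price.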