The Macro: Data Centers Have a Power Problem Nobody Talks About
Everyone is talking about the energy consumption of AI. The training runs, the GPU clusters, the cooling systems. What almost nobody talks about is the power delivery layer sitting between the grid and the chip. That layer is full of converters, regulators, and transformers that lose energy at every stage. It is old technology. It works, but it wastes a staggering amount of electricity doing so.
The numbers are big enough to make you uncomfortable. The U.S. data center market alone is projected to draw more than 35 GW by 2030. A meaningful percentage of that power never reaches the processors. It gets burned off as heat inside power conversion stages that were designed decades ago and have barely improved since. The industry has optimized everything around the compute chip and left the power delivery stack basically untouched.
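To get a feel for the scale, here is a back-of-envelope sketch. The 35 GW figure is from the text; the 10% cumulative power-delivery loss is my assumption for illustration, not a measured number.

```python
# Back-of-envelope: energy wasted in the power delivery chain.
# 35 GW is the projected US data center draw cited above; the 10%
# cumulative delivery loss is an assumed figure for illustration.
total_draw_gw = 35.0      # projected US data center demand
loss_fraction = 0.10      # assumed cumulative power-delivery loss
hours_per_year = 8760

wasted_gw = total_draw_gw * loss_fraction
wasted_twh_per_year = wasted_gw * hours_per_year / 1000  # GWh -> TWh

print(f"Continuous waste: {wasted_gw:.1f} GW")
print(f"Annual waste: {wasted_twh_per_year:.1f} TWh")
```

Under those assumptions, that is roughly 3.5 GW of continuous waste, on the order of 30 TWh a year, which is why even single-digit efficiency gains matter at this scale.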
This is not a software problem. You cannot optimize your way out of it with better scheduling or smarter workload placement. The physics of multi-stage power conversion sets a floor on waste, and that floor is surprisingly high. Companies like Vicor and Infineon sell power modules that are incrementally better than what came before, but the fundamental architecture remains the same. Multiple conversion stages, each one losing a few percent, compounding into a real problem at scale.
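The compounding argument is easy to see with numbers. The stage efficiencies below are illustrative values I chose for a typical multi-stage chain (UPS, rectifier, intermediate bus, point-of-load regulator), not measured figures from any vendor:

```python
# How per-stage losses compound in a multi-stage conversion chain.
# Stage efficiencies are illustrative, not measured values.
stage_efficiencies = [0.98, 0.96, 0.95, 0.92]

end_to_end = 1.0
for eta in stage_efficiencies:
    end_to_end *= eta

print(f"End-to-end efficiency: {end_to_end:.1%}")
print(f"Power lost before the chip: {1 - end_to_end:.1%}")
```

Four stages that each look respectable on their own datasheet multiply out to roughly 82% end-to-end, meaning close to a fifth of the input power is gone before it reaches the chip. That is the floor the architecture sets, and why incremental per-stage improvements move it so slowly.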
The reason nobody has fixed this is that power electronics is genuinely hard. It sits at the intersection of semiconductor physics, materials science, thermal engineering, and packaging. The talent pool is small. The iteration cycles are long. Software companies can ship a fix in a sprint. Hardware companies in this space measure progress in years.
The Micro: Two PhDs Who Actually Know Power Electronics
PowerMatrix is a two-person team attacking power delivery with a claimed 80% smaller form factor and 50% lower energy loss than conventional systems. They also claim 10x faster dynamic response, which matters for modern GPU workloads that spike and drop power demand in microseconds.
Borong Hu is the founder. He spent over a decade in power electronics, including time at GE working on power converters and at Bitmain leading power supply development for mining hardware. He has a PhD from the University of Warwick, completed a postdoc at Cambridge, and has 60 published papers with 650 citations. He won the IEEE ECCE William Portnoy Award, which is one of the more serious recognitions in the field. His co-founder Xufu has a PhD in power electronics from Cambridge, won the Semikron Danfoss Young Engineer Award, and took first place at the IEEE ECCE student project demo. These are not people who stumbled into power electronics from a software background. They have spent their entire careers on this specific problem.
The company came through Y Combinator and is based in San Francisco. Their website at pwrmatrix.com positions the technology across data centers, semiconductors, electric vehicles, and aerospace. The data center angle is the most compelling near-term market. They claim their technology could save $10 billion annually and prevent 100 million tons of CO2 emissions. Those are big claims, but the physics argument for better power conversion is sound, and the founding team has the credentials to back it up.
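It is worth asking what those headline numbers imply. The sketch below works backwards from the two claims to the energy savings each one implies; the electricity price and grid carbon intensity are my assumed constants, not figures from PowerMatrix:

```python
# Rough sanity check of the headline claims, under assumed constants.
# The $0.08/kWh price and 0.4 kg CO2/kWh grid intensity are my
# assumptions, not figures from PowerMatrix.
claimed_savings_usd = 10e9    # $10B/year, from the text
claimed_co2_tonnes = 100e6    # 100 Mt CO2/year, from the text
price_per_kwh = 0.08          # assumed average industrial rate, USD
co2_kg_per_kwh = 0.4          # assumed grid carbon intensity

twh_implied_by_cost = claimed_savings_usd / price_per_kwh / 1e9
twh_implied_by_co2 = claimed_co2_tonnes * 1000 / co2_kg_per_kwh / 1e9

print(f"Energy implied by the cost claim: {twh_implied_by_cost:.0f} TWh/yr")
print(f"Energy implied by the CO2 claim: {twh_implied_by_co2:.0f} TWh/yr")
```

Under these assumptions the claims imply saving on the order of 125 to 250 TWh a year, which only pencils out at global rather than US-only scale. That does not make the claims wrong, but it tells you they are sized against worldwide deployment.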
What I find interesting is the timing. The AI infrastructure buildout is creating a once-in-a-generation opportunity for power hardware companies. Every new data center needs power delivery, and the incumbents are not moving fast enough. Vicor has been doing innovative work with factorized power architecture, but they are a public company with all the constraints that implies. PowerMatrix has the startup advantage of being able to focus entirely on the next-generation approach without protecting a legacy product line.
The Verdict
I think PowerMatrix is going after the right problem at the right time. Power delivery is one of the last un-optimized layers in the data center stack, and the AI buildout is making the waste impossible to ignore. The founding team is unusually strong for an early-stage hardware startup. Two deep-domain PhDs with real industry experience is not something you see often.
The risk is execution speed. Hardware startups take longer to reach revenue than software companies, and the sales cycles for data center components are measured in quarters, not weeks. They need to get reference designs into the hands of hyperscale buyers and prove reliability at scale. The technology claims are impressive on paper, but customers in this space need thousands of hours of testing data before they commit.
Thirty days from now, I want to see a working prototype in a thermal chamber, not just specs on a website. Sixty days, I want to know which potential customers are evaluating samples. Ninety days, the question is whether any of the hyperscalers or colo providers have signed a pilot agreement. The power delivery market is enormous and growing fast. If PowerMatrix can actually deliver 80% smaller modules with half the loss, they will not have trouble finding buyers. The hard part is proving it works at production scale. That is always the hard part with hardware.