There is a certain kind of audacity that leaves Google's TPU team, walks out of Mountain View with scar tissue and conviction, and says: we can build a better engine for intelligence itself. In November 2022, Reiner Pope and Mike Gunter did exactly that. Not to chase noise. To chase throughput. To chase latency. To chase a belief that large language models deserved silicon designed for them and nothing else.
Fast forward to February 23, 2026. MatX announces a $500M Series B. Bloomberg covers it the next day. The valuation sits in the several billions. Total funding now stands a little north of $600M when you stack the $25M seed and roughly $80M Series A on top. Jane Street steps in. Leopold Aschenbrenner through Situational Awareness LP leans forward. Spark Capital doubles down. Nat Friedman and Daniel Gross stay in the pocket. Patrick Collison and John Collison show up. Andrej Karpathy. Dwarkesh Patel. Triatomic Capital. Harpoon Ventures. Alchip. Marvell. This is not tourist capital. This is capital that reads die shots for fun.
The product is called MatX One. Clean. Direct. No poetry needed when the silicon speaks. A splittable systolic array built SRAM-first for low latency, paired with HBM to stretch into long context. Designed to push higher throughput than any announced chip while matching the lowest latencies on the market. Not optimized for everything. Not chasing convolutions or recommenders. Built for large LLMs. Training. Reinforcement learning. Prefill. Decode. The work that actually moves frontier models forward.
Reiner Pope writes software like a man who has seen the inside of TPU racks. Mike Gunter designs hardware like he knows where GPUs waste breath. Together they are betting that focus beats generality. That purpose-built beats patchwork. That clusters scaling to hundreds of thousands of chips through a serious interconnect can change the math for frontier labs and cloud providers who are tired of renting someone else's margins.
The $500M is earmarked to finish development and scale manufacturing, with tapeout expected in under a year. A 100-person team in Mountain View building not just a chip, but an argument. An argument that the AI race will not be won by whoever shouts loudest, but by whoever computes smartest.
MatX is a tight name. Sounds like matrix. Sounds like math. Sounds like a crossroads. When capital, talent, and timing intersect like this, you do not just get another semiconductor startup. You get a pressure point in the market. And pressure, applied correctly, has a way of revealing who is really built for scale.

