Something seismic just dropped in Santa Clara. d-Matrix, the deep-tech juggernaut reengineering how generative AI thinks, has secured $275M in Series C funding, pushing its valuation to a cool $2B. The round was co-led by Bullhound Capital, Triatomic Capital & Temasek, with new money from Qatar Investment Authority (QIA) & EDBI, plus repeat conviction from M12 (Microsoft’s VC arm), Nautilus Venture Partners, Industry Ventures & Mirae Asset. Morgan Stanley ran point as placement agent, while Wilson Sonsini Goodrich & Rosati kept the paperwork clean, because when you’re scaling silicon this fast, you need your legal footing just as tight.
Founders Siddharth Chinubhai Sheth & Sudeep Bhoja saw it before the crowd. Back in 2019, while the world chased AI training glory, they were betting on inference, the part where models actually run, answer, think, and cost a fortune to keep alive. They saw the bottleneck coming and built a new brain for the data center: Digital In-Memory Computing. Their Corsair™ accelerators don’t just compete with GPUs, they run 10× faster, at ⅓ the cost, and with up to 5× better energy efficiency. One d-Matrix rack can replace 10 traditional ones. That’s not a tweak; that’s an industrial reset dressed like an upgrade.
The company’s now scaling like a startup that skipped the awkward teenage years. 250+ people across Santa Clara, Toronto, Sydney, Bengaluru & Belgrade are building the kind of compute muscle hyperscalers dream about. With Sree Ganesan (VP Product) & Peter Buckingham (SVP Software) driving software integration & product execution, d-Matrix is blending hard tech with real-world deployment rhythm.
The SquadRack™ partnership with Arista, Broadcom & Supermicro shifts things into another gear. It’s the industry’s 1st disaggregated, standards-based rack-scale solution, built to let enterprises run 100B-parameter models without ripping out their infrastructure. And with availability through Supermicro starting Q1 2026, this isn’t a concept slide; it’s a shipping revolution.
Investors are saying it out loud. Jeff Huber of Triatomic Capital says d-Matrix cracked the code on performance & sustainable economics at scale. Michael Stewart from M12 calls it the 1st AI chip startup to solve LLM unit economics. Add Per Roman of Bullhound Capital to the mix, and you’ve got a consortium that reads like a global deep-tech playlist.
Series C funds fuel 3 big moves: 3D memory stacking for next-gen density, Aviator™ software expansion for developers, & large-scale deployments with hyperscalers and sovereign clouds already in play. This is inference as an economic engine, not just a technical upgrade.
In a world chasing AI speed, d-Matrix found efficiency with attitude: 10× faster, 3× cheaper, 5× greener. Everyone else is still talking about AI’s future. d-Matrix is busy building it.