Why did Nvidia acqui-hire Groq?
And why not SambaNova, Cerebras, or d-Matrix?
On December 24th, the semiconductor industry got an early Christmas present with the announcement that Nvidia was acquiring Groq. The CNBC article breaking the story originally read:
Within the hour, Groq made their own announcement:
Instead of talking about an acquisition, Groq described a non-exclusive licensing agreement in which key Groq executives and engineers joined Nvidia, but Groq continued as an independent company. Shortly afterwards, the CNBC story was updated to reflect this reality.
Groq also didn’t mention the $20B number anywhere in their announcement, and Nvidia has yet to make any public comment. However, I’ve since gotten independent confirmation from an anonymous source with knowledge of the deal that it was, in fact, worth $20B.
It’s not a good look that startups are getting gutted in weird reverse-acqui-hire deals that strip key talent and technology without necessarily bringing all of the employees along for the ride. But I’m not an expert on M&A regulations, and this is Zach’s Tech Blog, not Zach’s Regulatory Compliance Blog, so I’m not going to discuss the specific deal structure here.
The important thing is that Nvidia genuinely paid twenty billion dollars for Groq’s technology and key talent. From the outside, Groq is not worth $20B. Their projected revenue for 2025 was $500M, so the deal’s price-to-sales ratio was an eye-watering 40, roughly 4x higher than that of companies like Apple and Google. The first version of the LPU chip was taped out in 2020, making it woefully outdated compared to other companies’ more recent silicon. The TCO of a Groq cluster is unreasonably high: because the architecture holds all model weights in on-chip SRAM, even a relatively small open-source model like Llama 70B requires hundreds of chips worth millions of dollars in total. And Jonathan Ross and the other Groq executives don’t have the deep knowledge of Google’s TPU architecture that some analysts claim; they left Google to found Groq after the first generation of the TPU, and Google is now on its 8th generation. So… why do I think Nvidia spent $20B on Groq? Here are a few possibilities.
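To see where that “hundreds of chips” figure comes from, here’s a back-of-envelope sketch. The ~230 MB of on-chip SRAM is the published LPUv1 spec; the FP16 weight format is my own assumption, and real deployments also need memory for KV cache and activations, so this is a floor, not an exact count:

```python
# Rough estimate of how many SRAM-only LPUv1 chips it takes just to
# hold a model's weights (the LPU has no HBM or DRAM attached).

SRAM_PER_CHIP_GB = 0.230   # ~230 MB of on-chip SRAM per LPUv1
PARAMS_BILLIONS = 70       # Llama 70B
BYTES_PER_PARAM = 2        # FP16/BF16 weights (assumption)

weights_gb = PARAMS_BILLIONS * BYTES_PER_PARAM    # ~140 GB of weights
chips_needed = weights_gb / SRAM_PER_CHIP_GB      # weights alone, no KV cache

# The deal's price-to-sales ratio, from the figures quoted above:
ps_ratio = 20_000_000_000 / 500_000_000           # $20B / $500M projected revenue

print(f"~{weights_gb:.0f} GB of weights -> at least {chips_needed:.0f} chips")
print(f"Deal price-to-sales ratio: {ps_ratio:.0f}x")
```

With quantized weights the chip count drops, but it stays in the hundreds either way, which is why a Groq cluster for one 70B model costs millions of dollars.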
Fair warning: this is all wild speculation. But speculation is fun!
The LPUv2 is something special.
Dylan Patel of SemiAnalysis was quoted in the Information with this theory:
“Groq’s first-generation chips were not competitive [with Nvidia’s chips], but there are two [more] generations coming back-to-back soon,” said Dylan Patel, chief analyst at chip consultancy SemiAnalysis. “Nvidia likely saw something they were scared of in those.”
He’s right about the LPUv1 being an uncompetitive chip. As for the LPUv2, Groq has done a very good job keeping architectural details of their new chips from leaking. We do know that it’s being manufactured in Samsung’s 4nm process, which may be important — more on that later.
But unless Groq has cooked up some 3D-stacked SRAM architecture that blows Nvidia out of the water, I’m personally doubtful that the LPUv2 is significantly better than Nvidia’s chips. Groq has been through major layoffs and lost a ton of great talent to attrition, including their former Chief Architect. That’s not a great recipe for a company that can suddenly outperform Nvidia at scale.
Groq has some unique partnership Nvidia wants.
Maybe Groq has some sales partnership or other business deal that Nvidia really wants access to. Given the “non-exclusive licensing deal” structure, I’m not sure how that would work, but it’s certainly a possibility. For all of its technical weaknesses, Groq’s business development team has proven to be particularly shrewd, turning mediocre and outdated silicon into multiple huge funding rounds. They were the first AI chip startup to pivot to selling inference tokens as their primary product, rather than trying to sell chips to hyperscalers — and most of their competitors have been following suit.
I’m not sure what partnership Groq has that Nvidia might want, though. The only major AI lab Groq has a partnership with is Meta, which offers Groq’s cloud as an optional inference provider in the Llama 4 API. But that partnership isn’t unique: Cerebras is offered as another API inference provider on equal footing with Groq.
A lot of Groq’s other big partnerships are with Middle Eastern countries and governments. They have partnerships with the Saudi Arabian government as well as various Saudi companies to deploy Groq systems at scale. While these partnerships are valuable, they’re certainly not unique. Cerebras also has a large number of Middle Eastern partnerships — in their case, with the UAE.
So Groq doesn’t have any publicly announced partnerships that are so unique that they could justify a $20B price tag. There could be something that isn’t public yet, of course, but going purely based on public info, I think there’s only one thing that could make Groq this valuable: the fact that Groq’s chips could help Nvidia make their supply chain more resilient and less reliant on TSMC.
Supply chain resilience, and reducing reliance on TSMC.
This is my personal pet theory, so I saved it for last. Of all the AI chip startups, Groq is unique in that its product is a straightforward logic die manufactured in a Samsung process node. SambaNova relies on TSMC’s Chip-on-Wafer-on-Substrate (CoWoS) packaging to leverage high-bandwidth memory (HBM), just like Nvidia’s chips. Furiosa, Etched, and MatX all use, or plan to use, HBM as well, so they also need a 2.5D packaging solution to place the HBM next to the logic die. And each 2.5D packaging solution is tied to a specific foundry and packaging provider, which in practice usually means TSMC.
Other architectures without HBM also have foundry-specific features. d-Matrix uses TSMC-specific, custom SRAM bit-cells for their processing-in-memory architecture, as well as an organic interposer for their chiplet-based packaging solution. And Cerebras’s wafer scale engine requires a ton of specialized processing that was developed in collaboration with TSMC.
Nvidia’s Blackwell B200 and upcoming Rubin architectures are already heavily reliant on TSMC’s CoWoS packaging, which is expensive and limited in production volume. If Nvidia wants a more resilient supply chain that is less dependent on TSMC, there are very few acquisitions that would help. SambaNova, Furiosa, Etched, MatX, Cerebras, and d-Matrix all rely on some foundry-specific feature that makes their designs extremely hard to port from one process node to another. So if TSMC suddenly runs out of production capacity, raises CoWoS prices, or, even worse, gets invaded by mainland China, acquiring any of those companies would leave Nvidia just as stuck.
Groq’s chips, on the other hand, are already manufactured in a Samsung process, and could easily be ported to an Intel process. By relying on a simple logic chip that can be manufactured by multiple foundries, the supply chain for Groq’s chips is extremely resilient, even in the case of surging demand for AI chips, HBM, and advanced packaging processes. This idea isn’t new; in the past, Apple has manufactured the same chip on different process nodes to ensure they had sufficient capacity for the immense demand of new iPhone launches.
By acquiring Groq’s assets, Nvidia can now sell more AI chips than TSMC has CoWoS production capacity to make for them. They can sell more AI chips even if they’ve gone through their entire HBM allocation. Groq’s chips are worse than Nvidia’s own Blackwell B200s, but Nvidia is already going to be able to sell every B200 they can possibly produce. The LPUs mean that Nvidia can keep selling chips even after the limited B200 stock runs dry.