LLMs will power next-gen chip IP companies.
Combining new technologies with traditional sales pipelines.
I’ve been notably skeptical of LLMs’ ability to design chips. Verilog, the programming language most commonly used for chip design, is unforgiving: a minor difference in a block of code can result in significantly worse performance. It’s difficult to get your hands on enough training data to fine-tune an LLM to generate Verilog, and even if you do, the resulting models usually produce Verilog that performs much worse than what human engineers write. Ultimately, if you’re targeting an application where you genuinely need custom chips, you often need to deliver extremely high performance or efficiency, and LLM-generated Verilog often falls short on those metrics.
On the other hand, I’m willing to admit that LLMs are pretty good at writing small, non-performance-sensitive, somewhat boilerplate pieces of Verilog. This is genuinely useful, but not so transformative for large chip companies that it will birth an LLM-powered Nvidia competitor. There are a number of companies selling so-called “Verilog copilot” software designed to generate Verilog boilerplate like this, including ChipAgents and PrimisAI.
But these Verilog copilot companies face a go-to-market problem that software copilot companies don’t. To be truly successful, these Verilog copilot companies need to sell their products to large, established, legacy semiconductor companies. And that’s a huge challenge.
Selling AI tools to legacy chip companies is hard.
Most SaaS companies get their start selling to small customers. A significant portion of the growth that AI software development copilots have shown in the past year has been driven by small customers and individual developers, rather than large organizations paying for enterprise licenses.
For Verilog copilots, there are far fewer small organizations to sell to. There are more chip startups than there were in the past, which is helping some AI-powered chip design tools make inroads in the industry, but it’s still a far cry from the world of software, where thousands of individual developers are shelling out $20/mo for Cursor.
Instead, companies selling AI-powered chip design tools need to sell to large, legacy organizations. This is difficult, especially when you’re selling a fundamentally new kind of product. Large chip companies are used to buying certain kinds of software, and AI-powered development tools currently aren’t one of them.
But there may be a way to sneak AI-powered tools into a more conventional sales pipeline that large chip companies are used to. You could use AI tools to build silicon IP cores, and then license those IP cores to legacy semiconductor companies. This could solve the problem of selling AI tools to legacy chip companies, while also solving key challenges in the semiconductor IP business model. But before we can start fixing the IP business model, we have to understand it.
What is an IP core?
IP cores are reusable pieces of logic that chip designers can purchase and place in their design. For those of you with a software background, it’s similar to a third-party software library, but with one crucial difference: software libraries are usually free, while IP cores are often fairly expensive to license. There are all sorts of IP cores available, from ARM’s CPU cores to USB controller IP to cryptography IP.
Selling IP cores is straightforward, because it’s a business model that’s been around for decades. Chip companies are used to buying IP for their systems, and IP vendors have well-established sales strategies and pipelines to sell IP to the companies that need it. Selling IP isn’t like selling AI-powered chip design tools; the buyers are comfortable with the product and are ready to make purchases when it fits their needs.
IP cores may also be easier to design with LLMs than other functional blocks of a chip. My main objection to LLM-powered chip design is the inability of LLMs to write high-quality, performance-sensitive Verilog. The core datapath of a chip often consists of relatively few lines of extremely performance-sensitive Verilog. The same isn’t true for IP cores.
While IP core performance matters, it’s often a secondary concern when it’s factored into the performance of the entire system. If your memory controller requires 10% more power than an optimal memory controller design, it’s not a big deal, because that memory controller may only take up 2% of your total power budget. At a system level, the impact of this inefficiency is only 0.2% – not that bad!
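The back-of-the-envelope math here can be sketched in a few lines. This is a minimal illustration using the hypothetical numbers from above (a 10% power overhead on a block that consumes 2% of the total budget), not data from a real design:

```python
# Estimate the system-level cost of an inefficient IP block.
# Figures are the illustrative numbers from the text, not measurements.

def system_level_overhead(block_share: float, block_overhead: float) -> float:
    """Extra system power, as a fraction of the total budget, when a block
    accounts for `block_share` of the budget and draws `block_overhead`
    more power than an optimal design would."""
    return block_share * block_overhead

# A memory controller burning 10% more power than optimal,
# but accounting for only 2% of the chip's power budget:
impact = system_level_overhead(block_share=0.02, block_overhead=0.10)
print(f"{impact:.1%}")  # prints "0.2%"
```

The same arithmetic explains why the calculus flips for the core datapath: a block that consumes 60% of the budget with the same 10% overhead costs 6% of total system power, which is very much a big deal.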
IP performance may not be a huge concern, but there are other key challenges faced by companies selling IP cores. Often, for an IP core vendor to be successful, they need to offer an extremely broad product portfolio. Synopsys, one of the biggest players in the space with 13.9% market share, offers an incredibly wide range of IP cores. Because companies selling IP cores need to maintain such a broad product portfolio, they often face a common problem: they don’t scale the same way normal startups do.
Traditional IP companies don’t scale well.
If you want to build a company selling IP cores, you need to have a large engineering staff to maintain a large number of products. But often, each individual product doesn’t bring in a ton of revenue. This means that semiconductor IP companies often don’t scale well compared to startups that are driven by a single, high-growth product.
This is the domain where LLMs are most valuable: where quantity matters over quality. If a company needed to maintain a large, diverse portfolio of IP cores, with reasonable but not onerous performance requirements, a small human team augmented by LLM-powered tools could probably do the job. This would mean a hypothetical chip IP company could stay as lean as a startup while also delivering the product breadth of a conventional IP core vendor.
This hypothetical IP core vendor would also be like a startup in another way: it would be much more focused on a single core technology. Sure, it would offer a portfolio of IP core products, but all of those products would be generated and maintained by an internally developed set of AI chip design tools. That toolset would be the company’s core technical innovation.
But, importantly, this hypothetical company wouldn’t sell those tools. As we discussed earlier, selling chip design tools is hard. Instead, by developing AI tools for internal use and selling the IP cores those tools generate, such a company would realize the best of both worlds. It could leverage AI and LLMs to scale well, while still selling its products through tried-and-true sales pipelines for IP cores.
This is an interesting take, thanks for sharing!
While I fundamentally agree, my understanding is that companies like Synopsys primarily bundle their IP offerings along with EDA tools - so a lot of big players already have access to this IP.
I'm curious to hear what specific IPs you think could be useful today that aren't available, and can also take a performance hit?
There is another avenue for AI-based EDA startups: get acquired by Cadence / Synopsys for a sizable few million and call it a day.
Big companies are eager for AI to replace engineers whether it can happen or not. Payroll is $$, and AI that works can minimize headcount.
The AI hype cycle is the best window for startups to make this happen. Hence the startup boom.