8 Comments
Michal:

Thank you for the post! What do you think of other compilers, like Clash? https://clash-lang.org/ Does it make sense to compile high-level code written in languages like Haskell for FPGAs, or are there too many performance 'traps' for it to work well?

zach:

No HLS tool really beats a good human designer on performance for complex circuits right now, but HLS tools may offer other advantages. Clash in particular is great for security applications, because Haskell is easier to formally verify!
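For readers who haven't used Clash: a hardware design is just an ordinary Haskell function over clocked signals. As a minimal sketch (the module name, `accumulator`, and the 8-bit width are all made up for illustration), a running-sum circuit looks like this:

```haskell
{-# LANGUAGE DataKinds #-}
module Accumulator where

import Clash.Prelude

-- A running-sum circuit: plain Haskell that Clash compiles to HDL.
-- `mealy` turns a state-transition function and an initial state
-- into a clocked sequential circuit.
accumulator
  :: HiddenClockResetEnable dom
  => Signal dom (Unsigned 8)
  -> Signal dom (Unsigned 8)
accumulator = mealy step 0
  where
    -- step :: state -> input -> (next state, output)
    step acc x = let acc' = acc + x in (acc', acc')

-- The entity Clash translates into Verilog/VHDL.
topEntity
  :: Clock System -> Reset System -> Enable System
  -> Signal System (Unsigned 8)
  -> Signal System (Unsigned 8)
topEntity = exposeClockResetEnable accumulator
```

Because `accumulator` is just a Haskell function, it can be tested and reasoned about with ordinary Haskell tooling before Clash turns `topEntity` into Verilog or VHDL, which is where the formal-verification advantage comes from.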

Michal:

Makes sense :) What is the performance gap, roughly? Is it order(s) of magnitude, or more like tens of percent? I have very little knowledge about FPGAs, but find it quite interesting.

zach:

It depends on the block you're designing, but it's usually between 1.1x and 2x worse for HLS tools. They're useful for R&D and for rapid prototyping, but when customers are buying based on performance, HLS-designed chips can't keep up.

Nishant:

This is a bit old, but for DSP-like subsystems on modern SoCs, Catapult HLS is used by teams to write higher-quality RTL. HLS works, but the HLS user needs intuition for the underlying RTL.

Dave:

> If a market is growing, low-quality LLM-designed chips may help startups get a foothold affordably. But once the market is large enough to start justifying full chip design teams, human-designed chips will easily beat LLM-designed chips on performance, at least for the foreseeable future.

This whole footnote is fundamentally wrong. Recall that all big digital designs are produced with synthesis tools. Synthesis is very good and gets better every year, but if an experienced team had the money and time (many decades!), they could handcraft an equivalent circuit that was smaller and performed better. A more accurate statement, in the context of this article, would be that the team that chooses and uses the best synthesis tools for each task can get to market faster, and startups have fewer tools to choose between.

A far more interesting opportunity for AI in chip design is to abandon the whole digital compromise altogether. The engineering world adopted digital design for compute and communications 70+ years ago, to achieve a whole basketful of time-to-market and interoperability benefits. The cost was a terrible loss of performance vs. analog compute. But there are no bits on silicon, and no signals that even vaguely resemble zeros and ones; just analog components noisily transitioning across arbitrary voltage thresholds. Within 10-20 years, AI should be able to synthesize custom analog circuits to perform any function we can describe. And just as mere humans cannot trace or understand the operation of an LLM, we will not be able to trace or understand the function of those analog circuits, only that they perform as required. Then Skynet.

Thales Pereira:

Hey Zach, I do see potential in using LLMs as copilots, particularly in areas like verification, where there's a significant talent bottleneck. Automating tedious verification processes or assisting with documentation and debugging could free up engineers to focus on innovation. However, for LLMs to make a substantial impact on chip verification, they would need to move beyond general-purpose reasoning and incorporate formal models to truly "understand" chip specifications. It seems to me that the key for LLMs in this field isn't to replace engineers but to augment their efficiency in specific, well-defined areas.
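To make the "formal models" point concrete, here is a toy sketch in the spirit of the Clash example above (the names are invented, and a real flow would also verify the synthesized netlist): a specification written as an executable Haskell property that pits the circuit's step function against an independent golden model.

```haskell
{-# LANGUAGE DataKinds #-}
module AccumulatorSpec where

import Clash.Prelude
import qualified Data.List as L

-- Golden model: the intended behaviour written with plain list
-- functions (a running sum over the input stream).
golden :: [Unsigned 8] -> [Unsigned 8]
golden = L.scanl1 (+)

-- The same stream computed through the circuit's own step function,
-- i.e. the function the `mealy` accumulator above is built from.
viaStep :: [Unsigned 8] -> [Unsigned 8]
viaStep = snd . L.mapAccumL step 0
  where step acc x = let acc' = acc + x in (acc', acc')

-- Executable spec: implementation and golden model must agree on
-- every input sequence. QuickCheck (or exhaustive enumeration for
-- small bit widths) can then hunt for counterexamples.
prop_accumulator :: [Unsigned 8] -> Bool
prop_accumulator xs = viaStep xs == golden xs
```

A tool that can write and discharge properties like this against a real spec is roughly what "moving beyond general-purpose reasoning" would look like.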

Pamir Sevincel:

Zach - do you see breakthroughs in hardware design once reinforcement learning beyond plain LLMs comes into play? Or multi-agent AI systems?
