Happy Christmas, Merry Hanukkah, Joyous Kwanzaa, and have a decent Festivus -- it’s the holidays! I’m far too busy drinking eggnog, mulled wine, and other wonderful holiday beverages to write a proper blog article analyzing the latest AI chip or security vulnerability, so I wanted to instead put together an end-of-year roundup!
I started this blog in February, and now it’s December, so that’s nearly a year of articles! That first article about Algorand’s security went out to 2 subscribers (me and my mom) and only got 100 reads or so. Now, I have over 600 subscribers, and my most popular article got over twenty-five thousand views. So first off, I want to say…
Thank you all so much for reading!
I never expected to get this much traction, and I really appreciate everybody who reads, comments, likes, and sends me their thoughts on what I’m writing. I’m really lucky to be able to write stuff that people actually read and appreciate. Thank you all.
Now, let’s look through the past year and highlight a few of my favorite articles!
My Favorite Non-Technical Article
Most of what I write here is fairly technical, whether that’s analyzing new chip architectures or writing about startups and their markets. But I occasionally write about non-technical stuff, whether that’s cocktails or life in different cities. Of all of the non-technical articles I put out this year, though, there was one clear standout:
It was a bit controversial, but I do still really believe that most engineers, especially in hardware, are better served by getting actual degrees rather than dropping out. Not only do degrees help give you job security, but even in the world of startups, investors and customers care more about degrees than you probably think.
I’ve actually had a couple kids reach out to me to tell me that they read this article when they were considering dropping out themselves. I’m sure that I wasn’t the only factor in their decision, but it’s cool for my work to actually have a meaningful impact on people’s lives. I’m excited for them to get their degrees, and then to go out into the world and use those degrees to build awesome stuff!
The Article I’m Most Proud Of
A lot of my pieces require a fair amount of work. From interviewing founders to benchmarking algorithms, I like to go above and beyond just writing down my takes from time to time. But one article required by far the most effort, and it’s the one I’m incredibly proud of:
Longtime readers probably saw this one coming -- my overview of my FPGA-based coupled-oscillator architecture was one of the first really popular articles I wrote. I called it Digital Ising Machines from Programmable Logic Easily, or DIMPLE.
DIMPLE is a state-of-the-art oscillator-based Ising machine, outperforming chips developed by well-funded research labs. More importantly, DIMPLE is incredibly accessible. The code is all open source, and can be deployed on AWS F1 FPGAs easily and affordably.
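For readers who missed the original article: an Ising machine searches for spin assignments that minimize an Ising energy function. Here’s a minimal Python sketch of that underlying problem, with made-up couplings and a brute-force search purely for illustration -- not DIMPLE’s actual hardware or code:

```python
import itertools

import numpy as np

def ising_energy(spins, J, h):
    """Ising energy: E = -1/2 * s^T J s - h^T s, with each spin s_i in {-1, +1}."""
    s = np.asarray(spins)
    return -0.5 * s @ J @ s - h @ s

# Toy problem: random symmetric couplings over 8 spins (illustrative values only).
rng = np.random.default_rng(seed=0)
n = 8
J = rng.normal(size=(n, n))
J = (J + J.T) / 2          # symmetric couplings
np.fill_diagonal(J, 0)     # no self-coupling
h = np.zeros(n)            # no external field

# Brute-force minimization -- only feasible for tiny n. An oscillator-based
# Ising machine does this minimization physically: coupled oscillators settle
# into phase patterns that correspond to low-energy spin assignments.
best_energy, best_spins = min(
    (ising_energy(s, J, h), s) for s in itertools.product((-1, 1), repeat=n)
)

print("lowest energy found:", best_energy)
print("spin assignment:", best_spins)
```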
It’s not often that a hobbyist gets a chance to design state-of-the-art computing architectures that are competitive with cutting-edge research. DIMPLE was the culmination of months of work, and I’m super proud that I was able to share it with all of you.
My Most Popular Article
While most of this round-up is focused on the articles I like the best, I want to include one article that you, the readers, clearly really loved. I’ve had a couple articles get thousands of views, from my recent analysis of d-Matrix’s Corsair chips to my old overview of energy-based computing. But my most popular article is pretty undeniable:
LLMs for chip design have been in the spotlight lately, and a lot of chip designers I know are incredibly skeptical. I think this broad skepticism is what helped get this post to the front page of Hacker News! Thanks to whoever submitted the link to HN, by the way.
I decided to approach the problem realistically, by comparing LLMs to the last technology that promised to increase designer productivity: high-level synthesis, or HLS. For those of you who don’t remember, HLS essentially failed to make an impact on performance-sensitive subsystems of production chips. I get the sense that LLMs are going to do the same.
However, there are parts of the chip design process that aren’t performance sensitive. Verification is about quantity of tests as much as it is about quality of tests, and LLMs are great at quantity. So there is hope for LLMs in chip design -- but they certainly won’t be designing processor data-paths anytime soon.
My Favorite Article
Last but not least comes my single favorite article of this whole year. While I write a lot about AI chips, some of my favorite articles this year have actually been about hardware security. From an overview of side-channel analysis to a breakdown of one of the coolest hardware hacks of the year, I’ve really enjoyed writing about hardware security. But my favorite article of all, from this entire year, was actually about the overlap between hardware security and AI chips:
As AI models have become more expensive to train and more valuable, AI companies have started to think much more seriously about making sure they stay secure. OpenAI has even started to propose more secure GPU hardware for protecting model weights for their most cutting-edge models. Unfortunately, secure hardware is usually much more expensive, in terms of silicon die area and power, than its insecure counterparts.
Luckily, there’s an emerging class of techniques for ultra-efficient AI chips that may also make those chips easier to secure: approximate arithmetic. Certain kinds of approximate arithmetic can make a chip much cheaper to implement securely, while still delivering sufficiently accurate model inference. In the new year, I may even try to develop an open-source implementation of one of these secure, approximate AI architectures, and test it using my ChipWhisperer.
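To give a flavor of what “approximate arithmetic” means here, below is a tiny Python sketch of one common variety: truncated multiplication, where the low bits of each operand are dropped before multiplying. The bit widths and the error measurement are illustrative assumptions, not the specific technique from the article:

```python
def exact_mul(a, b):
    """Exact unsigned multiply (reference result)."""
    return a * b

def truncated_mul(a, b, drop_bits=4):
    """Approximate multiply: drop the lowest `drop_bits` bits of each operand
    before multiplying, then shift back. Fewer partial products in hardware,
    at the cost of a small, bounded error."""
    return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

# Measure the error across all 8-bit operand pairs (illustrative only).
max_err = 0
total_err = 0
for a in range(256):
    for b in range(256):
        err = abs(exact_mul(a, b) - truncated_mul(a, b))
        max_err = max(max_err, err)
        total_err += err

print("max absolute error: ", max_err)
print("mean absolute error:", total_err / 256**2)
```

The rough intuition is that smaller, simpler arithmetic leaves less logic to protect, which is part of why these techniques pair nicely with secure implementations.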
Anyways, that’s my year in review! Thank you all again for reading, and I’m very excited to keep sharing fun stuff about tech in the new year.
- Zach from Zach’s Tech Blog