Discussion about this post

Afolabi:

Great write-up! Are you familiar with any of Jennifer Hasler's work at Georgia Tech? You seem to have a very skeptical view of fully analog networks, but I'm from that lab and we've shown programmable analog cells as well as dense single-FET multipliers all the way from 350 nm down to 16 nm FinFET. It uses floating gates and goes against the claim that RRAM or other struggling emerging memories would produce better density. Additionally, we avoid the data-converter trap of losing efficiency at the edge. We also tape out our chips rather than relying on sims!

I can post links to all of these and papers on our custom tooling + demonstrated applications if you’re curious.

Abhineet:

One of my professors who works on neuromorphic computing says that the only place where it will make sense is when you're processing analog signals (edge computing for neural acquisition systems, for example). Thoughts?

