Earlier this week, Google announced Willow, their newest quantum computing chip. People online seem very excited, and as somebody with a background in quantum-safe cryptographic technology, I’ve been getting asked a lot of questions.
First of all, people are curious whether this new chip will break classical cryptographic schemes. While quantum computers will eventually threaten classical cryptography, cracking ECC or RSA encryption will require thousands of logical qubits, and even more physical qubits. Willow, on the other hand, only has 105 physical qubits. Quantum-safe cryptography is still very important to build future-proof systems, but for now, your secrets are safe.
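To put the scale gap in perspective, here is a rough back-of-the-envelope comparison. The 20-million-qubit figure is the widely cited Gidney and Ekerå (2019) estimate for factoring RSA-2048, which itself bakes in assumptions about error rates and error-correction overhead, so treat it as an order of magnitude rather than a prediction:

```python
# Rough scale comparison: Willow's qubit count vs. one published estimate for
# breaking RSA-2048. The ~20 million figure comes from Gidney & Ekera (2019),
# "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits",
# and depends on assumptions about gate error rates and surface-code overhead.
willow_physical_qubits = 105
rsa2048_physical_qubit_estimate = 20_000_000

gap = rsa2048_physical_qubit_estimate / willow_physical_qubits
print(f"That estimate is roughly {gap:,.0f}x more physical qubits than Willow has.")
```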
More importantly, though, people want to know whether Willow is actually useful. Google is touting a result showing that this new quantum chip can run, in about five minutes, a benchmark that would take a normal computer 10 septillion years. It’s a mind-boggling number, but Google is sweeping something important under the rug -- the benchmark they’re running is mind-bogglingly useless. Specifically, they’re running an algorithm called Random Circuit Sampling.
What is Random Circuit Sampling?
Random Circuit Sampling, or RCS, is a benchmark that, as far as I can tell, was created in 2018 by Google and UCSB specifically for the purpose of demonstrating quantum supremacy. That is, it was designed from the ground up to run much faster on a quantum computer than a classical computer.
Notably, this benchmark isn’t based on any realistic workload anybody might actually want to run. Instead, what the RCS benchmark does is sample bitstrings from the output distribution created by a long string of randomly chosen quantum gates. The algorithm is entirely contrived and useless. That also makes it a very bad benchmark for another key reason: there is nobody trying to develop fast classical algorithms for the quantum computer to compete against.
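To make this concrete, here is a toy sketch, in plain NumPy, of what the RCS task actually is for a handful of qubits. This is my own illustration, not Google’s benchmark code, and it stands in a single Haar-random unitary for the layered random circuits used in the real experiment:

```python
# Toy illustration of the Random Circuit Sampling *task* for a handful of qubits,
# simulated classically with NumPy. A real RCS experiment applies many layers of
# randomly chosen gates on hardware and samples output bitstrings; here a single
# Haar-random unitary stands in for that "long string of random gates".
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 4
dim = 2 ** n_qubits

# Haar-random unitary on all n qubits via the QR trick.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(z)
circuit = q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# Apply the "circuit" to |00...0> and sample: RCS just means drawing bitstrings
# from the output distribution given by |amplitude|^2.
state = circuit[:, 0]
probs = np.abs(state) ** 2
probs /= probs.sum()          # guard against floating-point drift
samples = rng.choice(dim, size=10, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])

# The catch: this is easy for 4 qubits, but the statevector needs 2**n complex
# numbers, which is what makes exact simulation of 100+ qubit circuits intractable.
```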
When we try to benchmark quantum computers on workloads like chemistry simulation, we’re comparing them against complex algorithms that have been optimized by experts with a vested interest in running efficient simulations. When Google benchmarks their quantum computers on RCS, they compare the performance against naive classical simulations of those same quantum computers. Because the RCS benchmark is so contrived, that is the only valid comparison: unlike in computational chemistry, there’s no way for a classical supercomputer to run an approximate but practically useful algorithm and outperform the quantum computer at achieving useful results, because the RCS benchmark simply measures a system’s ability to calculate the expected output of a quantum circuit. So when Google says their chips do in five minutes what would take a supercomputer 10 septillion years, they’re not giving the supercomputers a fair shot.
Google themselves note that RCS is fairly useless, and that they have yet to demonstrate a useful algorithm where Willow outperforms a classical computer. Unfortunately, they wrote this at the end of the article, after the point where most people stopped reading:
On the one hand, we’ve run the RCS benchmark, which measures performance against classical computers but has no known real-world applications. On the other hand, we’ve done scientifically interesting simulations of quantum systems, which have led to new scientific discoveries but are still within the reach of classical computers. Our goal is to do both at the same time — to step into the realm of algorithms that are beyond the reach of classical computers and that are useful for real-world, commercially relevant problems.
Also, the fact that this quantum computer can run a random sampling algorithm really fast does not imply that we live in a multiverse. It just implies that we can do really fast random sampling. Frankly, I think it’s irresponsible for Google engineers to make philosophical claims in their press releases just to get buzz on Twitter.
Now, there is one actually exciting part of Google’s announcement: significantly better quantum error correction.
What is quantum error correction?
Quantum error correction was invented in 1995 by Peter Shor. Essentially, it enables multiple noisy physical qubits to be combined to represent a single, more reliable logical qubit. Intuitively, it makes sense that, as you combine more physical qubits together, the reliability of the logical qubit should increase. However, in practice, this isn’t always the case. Often, our physical qubits are so error-prone that, if we add extra physical qubits to a logical qubit, the extra error that those new physical qubits introduce actually makes the logical qubit worse, not better.
The physical error rate below which additional qubits start actually helping, rather than hurting, is called the threshold. Google’s new Willow chip is the first superconducting quantum computer to demonstrate so-called “below-threshold” error correction.
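To see the threshold effect concretely, here is a small Monte Carlo sketch using the simplest error-correcting code there is -- a classical repetition code with majority voting -- rather than a real quantum code like the surface code. The numbers are purely illustrative, but the qualitative behavior is the point: below some physical error rate, redundancy helps; above it, redundancy hurts.

```python
# Monte Carlo sketch of the "threshold" idea using a classical repetition code
# with majority voting. Real quantum codes (like the surface code Willow uses)
# are far more involved, but the qualitative behavior is the same: below some
# physical error rate, adding redundancy helps; above it, redundancy hurts.
import numpy as np

rng = np.random.default_rng(0)
trials = 200_000

def logical_error_rate(physical_error_rate, n_copies):
    """Encode one bit into n_copies, flip each copy independently with the given
    probability, decode by majority vote, and return how often decoding fails."""
    flips = rng.random((trials, n_copies)) < physical_error_rate
    decoded_wrong = flips.sum(axis=1) > n_copies // 2
    return decoded_wrong.mean()

for p in [0.01, 0.1, 0.3, 0.45, 0.55]:
    rates = [logical_error_rate(p, n) for n in (1, 3, 5)]
    trend = "redundancy helps" if rates[2] < rates[0] else "redundancy hurts"
    print(f"p={p:.2f}  1 copy: {rates[0]:.4f}  3 copies: {rates[1]:.4f}  "
          f"5 copies: {rates[2]:.4f}  -> {trend}")
```

For this toy code the crossover sits at a 50% flip probability; real quantum codes have much lower thresholds (on the order of 1% for the surface code), which is why hardware error rates matter so much.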
This is certainly exciting! But it’s still only a small step forward. Even Google themselves admit it:
At current physical error rates, we might need more than a thousand physical qubits per surface code grid to realize relatively modest encoded error rates of 10⁻⁶.
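To get a feel for where that number comes from, here is a back-of-the-envelope estimate using the standard surface-code picture: every increase of the code distance d by 2 divides the logical error rate by a suppression factor Λ, and a distance-d surface code uses roughly 2d² - 1 physical qubits. The starting error rate and Λ below are my assumptions, chosen to be in the ballpark Google describes (roughly halving the error rate with each distance step), not exact Willow numbers:

```python
# Back-of-the-envelope estimate of the physical qubits needed for one logical qubit,
# using the standard surface-code picture: each increase of the code distance d by 2
# divides the logical error rate by a suppression factor Lambda, and a distance-d
# (rotated) surface code uses about 2*d**2 - 1 physical qubits. The starting error
# rate and Lambda here are assumptions, not exact Willow numbers.
lam = 2.0                # assumed suppression factor per distance-2 step
d, eps = 7, 1e-3         # assumed: ~1e-3 logical error rate at distance 7
target = 1e-6            # the "modest" encoded error rate Google mentions

while eps > target:
    d += 2
    eps /= lam

physical_qubits = 2 * d * d - 1
print(f"distance ~{d}, logical error rate ~{eps:.1e}, "
      f"~{physical_qubits} physical qubits per logical qubit")
```

With these assumptions you land at a code distance in the high twenties and roughly 1,500 physical qubits per logical qubit, the same ballpark as Google’s “more than a thousand” figure.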
Error correction has a long way to go to overcome the significant physical error rates present in current quantum devices. That’s why, if you ask me, the most promising area of quantum computing research is materials science and device physics, not algorithms.
My take on quantum computing.
Classical computing was once in a similar spot to where quantum computing is now -- back in the 1940s and 1950s. Computers were unreliable and regularly suffered major errors, and algorithms had to be designed to be robust to component failure and noise. Vacuum tubes didn’t scale well, and vacuum tube computers could fill entire rooms. Over a roughly 20-year period, significant engineering effort was spent achieving relatively modest performance gains for vacuum tube computers.
Right now, I see quantum computing in its vacuum tube era, with noisy, mediocre, error-prone devices that scale poorly. Quantum computing researchers are spending significant time and effort developing better algorithms and methods for suppressing noise in our existing quantum circuits.
But classical computing didn’t take over the world because we designed better vacuum tubes. It did so because we invented the transistor. The transistor was small, reliable, and scalable -- all key features necessary to build practical, large scale systems.
Right now, quantum computing lacks its “transistor”: a device-level breakthrough that dramatically increases the reliability and scalability of quantum systems. Instead, most research effort is going into complex algorithms and super-cooled systems that try to make the quantum equivalent of a vacuum tube behave well. In my opinion, as somebody without a quantum computing background, we need to focus on the core devices to make quantum computing more reliable and scalable.