What’s the difference between all these quantum computers?
Big tech companies are fighting for the quantum crown.
In late February, Amazon announced their Ocelot quantum chip. It was hot on the heels of Microsoft’s announcement of their topological quantum computing chip, Majorana. And that announcement, in turn, came less than two months after Google unveiled its Willow quantum chip. All of these big tech companies are clearly vying to be the big winner in the new world of quantum computing — but each company is approaching the challenge of building a scalable, practical quantum computer in a different way.
Ultimately, all quantum computers rely on constructing quantum bits, or qubits, entangling them, and performing logical operations on those entangled bits. Keeping these qubits coherent and entangled is very difficult — even small amounts of noise can cause qubits to decohere, preventing the quantum computer from finishing whatever computation it started. The defining difference between Google’s, Amazon’s, and Microsoft’s quantum chips is how each tries to address the challenge of noise and quantum decoherence.
Some designs, like Google’s, use well-understood qubits that are highly susceptible to noise, paired with error-correcting codes that can correct for that noise. Other designs, like Microsoft’s, use more experimental qubits that are much less proven but may be inherently more robust to noise. Today, we’ll take a look at all of these designs, their pros and cons, and how likely each is to scale to a practical quantum computer.
Google’s Willow and Quantum Error Correction
Of the three chips, Google’s Willow uses the most mature qubit implementation: charge-based superconducting qubits. Superconducting qubits like the ones Google uses were first demonstrated in the late 1990s. This technical maturity is definitely a benefit for Google, as a lot of the fundamental theory underpinning their qubits is well-established. Unfortunately, these qubits are very sensitive to noise, and small amounts of noise can easily cause them to decohere. To solve this problem, Google’s team is leveraging quantum error correction. Specifically, they use an error correction scheme called a surface code.
Essentially, Google uses multiple physical qubits to represent the value of a single logical qubit. On Willow, they use a distance-7 surface code, in which 97 physical qubits represent a single logical qubit. As long as no more than three of those physical qubits experience errors between correction rounds, the logical qubit’s value is preserved. By scaling up the number of physical qubits for each logical qubit, Google’s future quantum processors could tolerate even more errors.
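The qubit count and error tolerance above follow from the standard surface-code arithmetic. Here’s a minimal sketch using the textbook rotated-surface-code formulas — note that treating Willow as exactly this layout is an assumption on my part:

```python
# Textbook rotated surface code of distance d (an assumed model, not
# Google's published layout): d*d data qubits, d*d - 1 measurement
# qubits, and up to floor((d - 1) / 2) correctable errors per round.
def surface_code_stats(d):
    data_qubits = d * d
    measure_qubits = d * d - 1
    correctable_errors = (d - 1) // 2
    return data_qubits + measure_qubits, correctable_errors

total, correctable = surface_code_stats(7)
print(total, correctable)  # 97 3 -- matching the numbers above
```

Distance 7 gives exactly the 97 physical qubits and three correctable errors described above; bumping the distance to 9 or 11 tolerates more errors at the cost of many more physical qubits.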
This poses a challenge, though. A large-scale quantum computer capable of breaking classical cryptographic algorithms would require thousands of logical qubits. If Google’s future chips need hundreds or even thousands of physical qubits to build each logical qubit, meaningful computation might require millions of physical qubits. This is a major scaling challenge, as building quantum chips with even hundreds of physical qubits is already very difficult.
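To make “millions” concrete, here is a hypothetical back-of-the-envelope estimate — both input numbers are illustrative assumptions for scale, not figures from Google:

```python
# Both numbers below are illustrative assumptions, chosen only to show
# how quickly the physical-qubit bill grows.
logical_qubits_needed = 2000  # assumed rough scale for a cryptographically relevant machine
physical_per_logical = 1000   # assumed surface-code overhead per logical qubit
total_physical = logical_qubits_needed * physical_per_logical
print(total_physical)  # 2000000 -- two million physical qubits
```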
Luckily, there may be ways to build qubits that are inherently more reliable and reduce the need for large error-correcting codes. This is what Amazon is trying to do with its cat qubits.
Amazon’s Cat Qubits
Amazon’s new Ocelot chip leverages a different kind of qubit called a cat qubit. Named after Schrödinger’s cat, these qubits have different error properties than the conventional qubits that Google uses.
When conventional qubits have errors, they can either be bit-flip errors or phase-flip errors. Google’s surface-code-based error correction has to correct for both of these kinds of errors, which is why their error-correction overhead is so high. Cat qubits are unique because they inherently suppress bit-flip errors on the physical level. That means that Amazon’s team only has to worry about phase-flip errors.
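To see the difference between the two error types, here’s a plain-Python sketch with a single-qubit state written as an amplitude pair `(a, b)` meaning a|0⟩ + b|1⟩ — no quantum library involved:

```python
def bit_flip(state):
    # Pauli X error: swaps the |0> and |1> amplitudes
    a, b = state
    return (b, a)

def phase_flip(state):
    # Pauli Z error: negates the |1> amplitude
    a, b = state
    return (a, -b)

plus = (2 ** -0.5, 2 ** -0.5)    # the |+> state, equal parts |0> and |1>
print(bit_flip(plus) == plus)    # True: |+> is unchanged by a bit-flip
print(phase_flip(plus) == plus)  # False: a phase-flip turns |+> into |->
```

A state that is symmetric between |0⟩ and |1⟩ is immune to bit-flips but not phase-flips. Cat qubits achieve their bit-flip suppression through a different physical mechanism, but the asymmetry between the two error types is the same idea.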
Amazon’s Ocelot chip still leverages error-correcting codes, but it avoids the complex and expensive surface codes that Google has to use. Instead, it can use a simple repetition code, which significantly reduces the number of physical qubits needed to represent a logical qubit. Potentially, a cat-qubit-based system could require up to 90% fewer physical qubits than a conventional-qubit-based system.
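Here’s a minimal sketch of why a repetition code is enough once only one error type remains. It’s a classical majority-vote model — my simplification, not Amazon’s actual decoder — where each of n qubits flips independently with probability p:

```python
from itertools import product

def logical_error_rate(n, p):
    """Exact logical error rate of an n-qubit repetition code with
    independent flip probability p, decoded by majority vote."""
    rate = 0.0
    for flips in product([0, 1], repeat=n):
        k = sum(flips)
        if k > n // 2:  # a majority of qubits flipped: decoding fails
            rate += p ** k * (1 - p) ** (n - k)
    return rate

# With a 1% physical flip rate, three qubits already push the logical
# error rate down to roughly 0.03%, and five qubits suppress it further.
print(logical_error_rate(3, 0.01))
print(logical_error_rate(5, 0.01))
```

Growing the code suppresses the logical error rate exponentially whenever p is below 50%. This is why suppressing bit-flips in hardware and leaving only phase-flips to a cheap repetition code can cut the physical-qubit overhead so sharply.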
However, cat qubits are significantly less technically mature than the more conventional qubits Google is using. Ocelot has only 5 data qubits, plus 4 ancilla qubits that enable error correction. Google’s Willow has 105 qubits, making it more than 10x larger than Ocelot — so Google has quite the head start. There could also be unknown challenges with Amazon’s cat qubits that only arise as the systems scale up to larger numbers of qubits.
Ultimately, Amazon is betting on a newer, riskier technology because it may scale better than Google’s more conventional methodology. This makes sense — Google started their quantum computing project first, so its competitors have to look to riskier methods to catch up. And Microsoft is taking that idea even further, with an even more experimental kind of qubit that could be robust to both bit-flip and phase-flip errors: the topological qubit. But they’re not even sure if they’ve constructed a single one yet.
Microsoft’s Topological Qubit Claims
Microsoft is aiming to build an entirely different type of qubit, called a topological qubit. Topological qubits use a special type of quasiparticle called an anyon to realize their state and, because of the unique properties of anyons, are robust to both bit-flip and phase-flip errors.1 Microsoft’s quest to demonstrate a real-world topological qubit hasn’t been straightforward, though; the company has previously had to retract a paper claiming evidence of the Majorana quasiparticles its qubits rely on.
So when Microsoft announced Majorana, their newest quantum chip, many researchers were skeptical. But Microsoft’s narrative was exciting: their CEO Satya Nadella tweeted that Microsoft “created an entirely new state of matter, unlocked by a new class of materials, topoconductors, that enable a fundamental leap in computing.” Microsoft’s own press release claims that their architecture will scale to being “capable of solving meaningful, industrial-scale problems in years, not decades.”
On the other hand, the paper Microsoft published puts forward much humbler claims. They fabricated a new kind of device structure and proposed a corresponding measurement scheme. The results of those measurements suggest, but don’t prove, the existence of a topological qubit in the device:
These measurements do not, by themselves, determine whether the low-energy states detected by interferometry are topological. However, our data tightly constrain the allowable energy splittings in models of trivial Andreev states. [...] In conclusion, our findings represent substantial progress towards the realization of a topological qubit based on measurement-only operations.
Ultimately, it’s unclear whether Microsoft actually demonstrated a topological qubit. However, I think it’s exciting that there are large companies devoting significant resources to more inherently fault-tolerant methods for quantum computation. Google is already doing a good job scaling conventional qubits and leveraging error-correcting codes, but their architecture will eventually run into scaling challenges due to the overhead of surface codes. Maybe topological qubits, with their theorized error robustness, will be able to scale much better and form the foundation for large-scale, practical quantum computers. Or maybe Microsoft will just have to retract another paper.
If you want to know why this is the case, I’d recommend asking somebody who actually understands quantum physics.