The key challenge is that if you want to deliver those well-proven use cases or algorithms where we actually know what the speed-up is going to be, like Shor's algorithm, for example, you need so many qubits, and such a deep circuit, that the required error rate has to be something like 10⁻⁹ or 10⁻¹⁰ errors per qubit per gate. And this is very far from today's best qubits, which are at 10⁻³ at best, or maybe 10⁻⁴ on a single-qubit gate. So there is a huge gap between today's performance and what is required to deliver impact, and here you need something else: you are not going to improve materials or the level of control a millionfold to get there.
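To make the size of that gap concrete, here is a back-of-envelope sketch in Python. The total gate count of ~10⁹ and the target failure probability are illustrative assumptions, not figures from the interview:

```python
# Back-of-envelope: why ~1e-9 per-gate error rates are needed.
# Assumption (illustrative): an algorithm like Shor's on a relevant
# problem size involves on the order of 1e9 gate operations in total
# (number of qubits times circuit depth).
total_gate_ops = 1e9          # assumed total number of gate operations
target_failure = 0.5          # want the whole run to fail at most ~half the time

# If each gate errs independently with probability p, the circuit
# survives with probability (1 - p) ** total_gate_ops ~ exp(-p * total_gate_ops).
# Requiring p * total_gate_ops <= target_failure gives:
required_per_gate_error = target_failure / total_gate_ops
print(f"required per-gate error rate ~ {required_per_gate_error:.0e}")   # ~5e-10

best_today = 1e-3             # today's typical gate error, as quoted in the text
print(f"gap vs. today's hardware: ~{best_today / required_per_gate_error:.0e}x")
```

Running this prints a gap of roughly 2e+06, which is the "millionfold" improvement mentioned above.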
The good news is that there is a way to actively correct for errors, usually through some kind of redundancy. When you do error correction in general, you encode information redundantly: you make several copies of the information, and you take some sort of majority vote. That means measuring those bits, except that if they are quantum bits, you are not allowed to measure them directly. But you can ask them, "Do you agree with each other?" Then you see whether there is some minority that disagrees, and you correct those qubits.
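A minimal sketch of that "do you agree?" idea, using a classical 3-bit repetition code as a stand-in. With real qubits you cannot read the bits directly; instead you measure pairwise parities (the stabilizers), which is exactly the "do you two agree?" question this models:

```python
def encode(bit):
    # Redundant copies of one logical bit.
    return [bit, bit, bit]

def syndrome(codeword):
    a, b, c = codeword
    # Parity checks: 0 means "we agree", 1 means "we disagree".
    # Note we never look at the logical value itself, only agreements.
    return (a ^ b, b ^ c)

def correct(codeword):
    s = syndrome(codeword)
    # Each single-bit error produces a unique syndrome, so the minority
    # can be identified and flipped without reading the logical bit.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}
    if s in flip:
        codeword[flip[s]] ^= 1
    return codeword

word = encode(1)
word[1] ^= 1                  # a bit flip strikes the middle copy
print(syndrome(word))         # (1, 1): the copies disagree
print(correct(word))          # [1, 1, 1]: the minority is corrected
```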
The challenge is that in quantum computing there are not one but two types of errors. You have bit flips, just like in classical computing, switching a zero into a one and vice versa. But you also have a purely quantum error, called the phase flip, that switches the phase of a superposition from |0⟩ + |1⟩ to |0⟩ − |1⟩ and vice versa. If you need one dimension of redundancy to correct for one type of error, you need a second dimension of redundancy to correct for the second type. This gives you the standard approach to quantum error correction, called the surface code, which is the one championed recently by Google, for example.
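Here is a small sketch of the two error types on a single qubit, using plain linear algebra rather than any quantum library. States are vectors in the {|0⟩, |1⟩} basis, and the bit flip and phase flip are the Pauli X and Z matrices:

```python
import numpy as np

ket0 = np.array([1, 0])
ket1 = np.array([0, 1])
plus = (ket0 + ket1) / np.sqrt(2)   # the superposition |0> + |1>

X = np.array([[0, 1], [1, 0]])      # bit flip: swaps |0> and |1>
Z = np.array([[1, 0], [0, -1]])     # phase flip: |0> + |1|  becomes  |0> - |1>

print(X @ ket0)                     # [0 1]: a zero became a one
print(Z @ plus)                     # [ 0.707 -0.707]: the sign of |1> flipped
# Z leaves |0> and |1> individually unchanged, so it is invisible to a
# classical observer; that is why this error has no classical analogue
# and needs its own axis of redundancy.
```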
When it comes to doing error correction, the leading player is not Quantinuum, because their recent experiment, as far as I understand it, was quantum error detection rather than quantum error correction: they were not able to correct errors, only to herald whether errors had occurred.
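A toy illustration of that detection-versus-correction distinction, again with classical bits as a stand-in (this is my own illustration, not a description of Quantinuum's experiment). A single parity check can herald that an error happened but cannot say where, whereas the 3-copy code above can also locate it:

```python
def detect(codeword):
    # Two copies: we can tell that they disagree,
    # but not which one is wrong.
    a, b = codeword
    return "error heralded" if a ^ b else "looks fine"

print(detect([1, 1]))   # looks fine
print(detect([1, 0]))   # error heralded, but which bit flipped?
# Detection lets you discard or retry a run; correction needs enough
# redundancy to pinpoint the error and fix it mid-computation.
```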
The player that is the most advanced is Google, with their Nature paper from earlier this year, where they showed that they were just at threshold. This is something that is not discussed enough: error correction is not only about redundancy. Yes, you are going to need many qubits, but each of those qubits also needs to be good enough, because otherwise, when you add more qubits for redundancy, you add more noise than you gain in error-correcting capability. This creates a threshold, a tipping point, for the exponential curve. Google is just at that threshold, and we aim to be well below it next year.
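The threshold effect can be sketched with the standard heuristic scaling for a distance-d code, p_logical ~ (p / p_th)^((d + 1) / 2). The threshold value of 10⁻² below is an illustrative assumption, not a number from the text:

```python
P_TH = 1e-2                            # assumed threshold error rate

def logical_error(p_physical, d):
    # Standard heuristic: logical error shrinks (or grows) by a factor
    # of (p / p_th) for every two units of code distance.
    return (p_physical / P_TH) ** ((d + 1) / 2)

for p in (5e-3, 2e-2):                 # one qubit below, one above threshold
    trend = [f"{logical_error(p, d):.1e}" for d in (3, 5, 7)]
    side = "below" if p < P_TH else "above"
    print(f"p = {p:.0e} ({side} threshold): d = 3, 5, 7 gives {trend}")
# Below threshold, more redundancy (larger d) suppresses the logical
# error exponentially; above threshold, adding qubits makes things worse.
```

This is the tipping point described above: adding qubits only pays off once each physical qubit is already below the threshold error rate.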