# Transcript | Schrödinger’s Cat Qubits — with Alice & Bob

Quantum computing needs logical, error-corrected qubits to reach the ultimate goal of fault-tolerant systems that can change the world. Without logical qubits, we won’t be able to have production-ready business use cases that are pure quantum. Is it too early to be thinking about creating these “perfect” qubits? One company says it’s already tackling the problem with a new take on the old quantum-classical Schrödinger’s Cat thought experiment. Join Host Konstantinos Karagiannis for a chat about cat qubits with Théau Peronnin from Alice & Bob.

**Guest:** Théau Peronnin from Alice & Bob

**Konstantinos Karagiannis:**

Quantum computing needs logical error-corrected qubits to reach the ultimate goal of fault-tolerant systems that can change the world. Is it too early to be thinking about creating these perfect qubits? One company says it’s already tackling the problem with a new take on an old thought experiment. Find out how in this episode of *The Post-Quantum World*. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is the CEO of Alice & Bob, Théau Peronnin. Welcome to the show.

**Théau Peronnin:**

Thanks for having me today.

**Konstantinos Karagiannis:**

Tell us about how you found your way to quantum before cofounding Alice & Bob.

**Théau Peronnin:**

I started in quantum because I wanted to do physics, but I was not a genius at math. I was good enough, but I wanted to do foundational physics. I started physics back in 2012, just at the time of the Nobel Prize awarded to Serge Haroche and David Wineland for manipulating individual photons and ions. It felt incredibly fun to realise those experiments, playing with individual photons experimentally, and that’s what I wanted to do. I did my undergraduate degree at École Polytechnique, then specialised in quantum physics at École Normale Supérieure, and did my Ph.D. from 2016 to 2020 at École Normale Supérieure de Lyon in France, specialising in superconducting qubits.

**Konstantinos Karagiannis:**

I remember that paper well. I was using it in my slides way back then when I was presenting on quantum at conferences and things. It was pretty exciting. That was the first time that anyone paid attention to this field.

That brings us to Alice & Bob. Anyone with a background in networking or security will probably recognise the reference, of course, but tell our listeners about the name, the company and its goals.

**Théau Peronnin:**

We started the company and called it Alice & Bob because we wanted to refer to those textbook-exercise placeholders, Alice & Bob, referring to point A and point B in textbooks. I also wanted to avoid at all cost the word *quantum*, which is sometimes polluted by pop culture to mean a magical technology. There is absolutely nothing magical about it. Quantum mechanics is our best description of nature, and building a quantum computer is a well-defined textbook exercise in some sense — a very challenging one, though.

We started the company with the goal to build the first fault-tolerant quantum computer. And at the time, it was just after Google’s quantum supremacy experiment. At the time, there was a booming era for noisy intermediate scale quantum machines, or NISQ devices. And we were somewhat like PsiQuantum, an outlier, saying there is no way to cheat nature. If you want that exponential speed-up promised by quantum computers, you have to go all the way through fault tolerance.

But contrary to PsiQuantum, we decided to take another route — a route that aims at simplifying what quantum error correction means and designing a better qubit, a better architecture, to make it easier and more scalable, with fewer qubits involved. And that was all thanks to the technology we developed in academia, the so-called cat qubit, which is a special kind of superconducting qubit that is able to autonomously correct some errors — namely, bit flips. By doing so natively, it dramatically reduces the overhead of quantum error correction. That was the angle we took. And a bit more than three years later, the company is nearly 80 people strong, and we’re on a good path to deliver the world’s first logical qubit.

**Konstantinos Karagiannis:**

In all my predictions, I didn’t think, in 2023, we’d be talking about a logical qubit just yet. I thought maybe next year — probably 2025. They’re starting to make headlines, though. There was a Quantinuum experiment recently where they did a logical qubit in that simulation. Can you give a quick overview for those new to the field — some of our listeners are new — of error correction and logical qubits and how it’s traditionally seen? I know you guys take a slightly different approach. Traditionally, what does that mean when you error correct?

**Théau Peronnin:**

The key challenge is that if you want to deliver those well-proven use cases or algorithms where we actually know what the speed-up is going to be — Shor’s algorithm, for example — you need so many qubits, and such a depth of the algorithm, that the required error rate has to be something like 10⁻⁹ or 10⁻¹⁰ errors per qubit per gate. And this is very far from today’s best qubits, which are 10⁻³ at best, or maybe 10⁻⁴ on a single-qubit gate. There is a huge gap between current performance and what is required to deliver impact, so here, you need something else. You’re not going to improve materials or the level of control a millionfold to get there.

The good news is, there is a way to correct errors actively, usually through some kind of redundancy. When you do error correction in general, you encode information redundantly: you make several copies of the information and do some sort of majority vote. You’re going to measure those bits — or actually, if they are quantum bits, you’re not allowed to measure them directly, but you can ask them, “Do you agree with each other?” Then you see whether some minority disagrees, and you correct those qubits.
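The majority-vote idea is easiest to see in its classical form. Below is a minimal Python sketch of a three-bit repetition code — a classical analogue only, since, as noted above, quantum bits cannot simply be copied and measured this way:

```python
# Classical analogue of redundancy-based error correction: encode one bit
# as three copies, flip each copy independently with probability p, then
# decode by majority vote.
import random

def encode(bit):
    return [bit, bit, bit]

def add_noise(bits, p):
    # XOR each bit with 1 whenever a random draw falls below p
    return [b ^ (random.random() < p) for b in bits]

def decode(bits):
    return 1 if sum(bits) >= 2 else 0

random.seed(0)
p = 0.1
trials = 100_000
errors = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
# The logical error rate is ~3p^2 - 2p^3 ≈ 0.028 — already well below
# the raw physical error rate of 0.1.
print(errors / trials)
```

The decoded error rate beats the physical one only because two simultaneous flips are needed to fool the vote — the same intuition carries over to quantum codes, where the “vote” is taken via parity checks rather than direct measurement.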

The challenge is that in quantum computing, there are not one but two types of errors: You have bit flips, just like in classical computing, such as switching a zero into a one, and vice versa. But you also have a purely quantum error, called the phase flip, that switches the phase of a superposition from zero plus one to zero minus one, and vice versa. If you need one dimension of redundancy to correct for one type of error, you need a second dimension of redundancy to correct for the second type of error. This will give you the standard approach to quantum error correction, called the surface code, which is the one championed, for example, by Google recently.

When it comes to doing error correction, the leading player is not Quantinuum, because their recent experiment, as far as I understand it, was more quantum error detection. It was not yet quantum error correction, because they were not able to correct, but only to herald whether there have been errors or not.

The player that is the most advanced is Google, with their Nature paper from earlier this year, where they showed that they were just at threshold. This is something that is not discussed enough. Error correction is not only about redundancy. Yes, you’re going to need many qubits, but you also need each of those qubits to be good enough because otherwise, when you add more qubits for redundancy, you add more noise than you improve your capabilities to correct for errors. And this creates a threshold, a tipping point, for the exponential curve. And Google is just at that threshold, and we aim to be much below that threshold next year.
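That tipping-point behaviour can be sketched with the textbook scaling heuristic for a distance-d code, where the logical error rate goes roughly as (p/p_th) raised to the power (d+1)/2. The threshold value below is an assumed order of magnitude for illustration, not Google’s or Alice & Bob’s actual figure:

```python
# Heuristic: logical error rate ~ (p / p_th) ** ((d + 1) // 2).
# Below threshold, growing the code distance d (more qubits) suppresses
# errors exponentially; above it, redundancy makes things worse.
p_th = 0.01  # assumed threshold, order of magnitude only

def logical_error(p, d):
    return (p / p_th) ** ((d + 1) // 2)

for p in (0.005, 0.02):  # one below threshold, one above
    print(p, [logical_error(p, d) for d in (3, 5, 7)])
# p = 0.005: the rates shrink as d grows.
# p = 0.02:  the rates grow as d grows — more qubits, more noise.
```

Exactly at p = p_th, adding qubits neither helps nor hurts — which is why being “just at threshold” is the hinge point Peronnin describes.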

**Konstantinos Karagiannis:**

That’s a great explanation. And I agree — when you add too many qubits to do error correction, you’re introducing all sorts of problems. That was always the allure of the topological qubit that we’re all waiting for too — that low error rate.

What makes your qubit different? There’s the expression “building a better mousetrap.” It sounds like you’re building a cat instead.

**Théau Peronnin:**

The idea, when you think of it abstractly, is that doing error correction in general, or doing quantum error correction, is all about evacuating, extracting entropy. Noise is introducing entropy in your device, and you need a way to stabilise the system and to dissipate this entropy back into the environment.

What we do very differently from everyone else is this: everyone else is trying to isolate their qubits from the environment as much as possible. We try to couple them to the environment as much as possible, but in a very controlled way. That way is through so-called two-photon exchange. It manages to exchange energy with the environment to replenish the qubits’ energy and dissipate entropy without extracting information about the state of the device. And by engineering this autonomous feedback loop — it’s hardwired directly into the superconducting circuit — it suppresses exponentially the probability of having a bit flip at only a linear cost on the probability of having a phase flip. You gain exponentially and you pay linearly, without any prefactors. You can see how this can be very interesting.
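A toy model makes the exponential-versus-linear trade-off concrete. The commonly cited cat-qubit scalings are used here — bit flips suppressed exponentially in the mean photon number of the oscillator, phase flips growing only linearly — but the prefactors are assumptions chosen purely for illustration:

```python
# Toy model of the cat-qubit trade-off. n is the mean photon number
# |alpha|^2 of the cat state; the `base` rates are illustrative, not
# measured values.
import math

def bit_flip_rate(n, base=1e-2):
    return base * math.exp(-2 * n)  # exponential suppression in n

def phase_flip_rate(n, base=1e-2):
    return base * n                 # linear cost in n

for n in (1, 4, 9):
    print(n, bit_flip_rate(n), phase_flip_rate(n))
# Growing the cat from n = 1 to n = 9 multiplies the phase-flip rate
# by 9 but divides the bit-flip rate by e^16 ≈ 9 million.
```

This is the “gain exponentially, pay linearly” bargain: making the cat modestly bigger buys orders of magnitude of bit-flip protection.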

What is a cat qubit? A cat qubit is a state of a harmonic oscillator — in our case, a superconducting resonator — where we encode information into two possible coherent states: the two almost-classical states of this oscillator that have the same energy but opposite phases. And what this two-photon exchange, or autonomous dissipation, does is stabilise those two coherent states at the same time without knowing which one you are in.

**Konstantinos Karagiannis:**

The cat, I’m assuming, is Schrödinger’s cat.

**Théau Peronnin:**

It’s properly a Schrödinger’s cat because it’s a superposition of two quasi-classical states. And this is the idea of a Schrödinger’s cat state.

**Konstantinos Karagiannis:**

That’s clever because it’s not as simple as a particle when you’re talking about the cat anymore. It’s a bigger, more complex system — quasi-classical.

**Théau Peronnin:**

The whole point of quantum error correction is that at some point, you need more room to do error correction. When you provide redundancy, you’re massively increasing the size of the Hilbert space: each time you add a qubit, you double the size of the Hilbert space you’re manipulating. And at the end of the day, quantum error correction burns most of this space to stabilise only a two-level submanifold — a subspace of that massive tensor product of all those qubits.

What we do with cat qubits — and it’s not specific to cat qubits; it’s true of all hardware-efficient bosonic encodings — is start with a system that already has a somewhat large Hilbert space, so we can already do a first level of error correction in the hardware, within the qubit. By doing so, we simplify the whole quantum error-correction scheme, because then you’re only left with one type of error to correct for — in our case, the phase flips, because we’ve already completely suppressed bit flips — and we only need a somewhat classical correction code: a 1D repetition code.

We published this recently — it came out a few weeks ago in *Physical Review Letters* — an architecture benchmark of this approach showing that it’s 60 times more efficient than your usual surface-code approach. If you want to break RSA-2048, instead of requiring about 22 million standard transmon superconducting qubits, you would only need 350,000 cat qubits. It’s a massive decrease in overhead, thanks to this first level of autonomous error correction.

**Konstantinos Karagiannis:**

To be clear, this is not error suppression or mitigation. There’s a difference.

**Théau Peronnin:**

Error mitigation is all about how you can cleverly average out errors so that the average result of your algorithm is correct. But it’s somewhat of a linear answer to an exponential problem, so it can only improve your device so much. As you increase the depth, you reduce the probability of success exponentially: you’re just rolling more and more dice and hoping to get a chain of sixes. The more dice you roll, the lower the probability of getting only sixes — it falls exponentially.
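The dice analogy translates into a two-line calculation. The per-gate error rate below is an assumed round number in line with the figures quoted earlier in the conversation:

```python
# Probability that a depth-D circuit sees no error at all, assuming
# independent errors with per-gate probability p.
def success_probability(p, depth):
    return (1 - p) ** depth

p = 1e-3  # assumed optimistic per-gate error rate for today's hardware
for depth in (10**3, 10**6, 10**9):
    print(depth, success_probability(p, depth))
# At depth ~10^3 you still succeed roughly a third of the time; at
# Shor-scale depths (~10^9 operations) the success probability is
# effectively zero unless error correction pushes p toward ~10^-9.
```

A prefactor improvement from mitigation shifts these numbers slightly; only exponentially suppressing p itself, as error correction does, changes the outcome.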

Even if you do error mitigation or error suppression, you’re just gaining a prefactor in some sense — you’re not fighting this exponential decay. Quantum error correction, by contrast, is an exponential solution to an exponential problem. This is why it matters that the way you do quantum error correction scales exponentially. All the serious approaches — the surface code, or the one we’re championing at Alice & Bob with its two levels, autonomous correction and then a repetition code — are exponential solutions in the sense that they improve the fidelity of your gates exponentially in knobs you can control: either the size of the cat or the level of redundancy, the number of qubits involved in your logical qubit.

**Konstantinos Karagiannis:**

A cat qubit — is it just an extra layer of the quantum hardware stack? Is this pure hardware?

**Théau Peronnin:**

It is just another layout of the chip, another way to encode information on your chip. We’re doing nothing fancy in terms of materials. It’s not a software trick, because, again, you can’t cheat nature: decoherence happens in hardware, and there is no way to recover the lost coherence after the fact. It’s about making sure that not a single particle in the universe is aware of, or encodes information about, the state of your device.

The cat qubit is just another layout of the chip, and it has somewhat the same footprint as your usual superconducting qubit and somewhat the same number of inputs and outputs — slightly more in our case. The main burden of cat qubits is that, from a math point of view, they are more complex to engineer, to design and to control.

**Konstantinos Karagiannis:**

It’s still basically a chip hanging off a dilution fridge.

**Théau Peronnin:**

If you came to our lab, there is no way you would know we’re doing cat qubits rather than transmons, fluxoniums — whatever your favorite superconducting qubit is. We still have a dilution refrigerator operating at 10 millikelvin, and it’s still controlled at microwave frequencies — gigahertz frequencies — through a rack of classical electronics, of FPGA boards. It’s very standard in that sense. We’re only doing fancy things in the design of the qubit and the architecture of the chip.

**Konstantinos Karagiannis:**

How does it end up being gateable, or an addressable qubit for a circuit? Is there, at that point, some kind of software stack that’s needed to turn this into a more standard qubit to be read out?

**Théau Peronnin:**

It’s just like all your favorite superconducting qubits — it’s a gate-based approach to quantum computing. But since we have this very peculiar noise — almost no bit flips and a few more phase flips than usual — you need all your gates to preserve that natively. For example, you’re not allowed to do a Hadamard gate, because by definition the Hadamard transforms a bit flip into a phase flip, and vice versa, and our whole architecture assumes that you have no bit flips.

How do you recover a universal set of gates if you’re not allowed to do a Hadamard gate at the physical level? The first thing you do with cat qubits is chain them into a repetition code, and then you obtain a logical qubit on which you can do a universal set of gates. Error correction comes before everything else, and then you get a universal set of gates.

In our case, we published the native set of gates we’re planning to use, but it’s not that different. What is fun with our device is that because we’re stabilising coherent states, we have to engineer Hamiltonians — the ways the qubit behaves — that are slightly more exotic than your usual transmon’s. But from a user perspective, this is completely transparent. The burden is on our side, in the way we design and the way we control; what we expose to the software programmers out there is just a universal set of gates.

**Konstantinos Karagiannis:**

To get to that universal gate level, is there a reduction from the number of cat qubits to logical qubits?

**Théau Peronnin:**

Yeah. The usual order of magnitude — it depends on the hypotheses and the targeted level of performance — is that you need about 30 cats to get a logical qubit: a 1D chain of about 30 instead of about 1,000, because you get this square-root factor in the physical-to-logical overhead ratio. That’s the whole selling point of this approach.
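A rough back-of-the-envelope shows where that square-root factor comes from. The code distance of 30 is an assumed illustrative value, not a published specification:

```python
# Counting behind "about 30 instead of about 1,000": a surface code
# correcting both error types uses ~d*d physical qubits per logical
# qubit, while a 1D repetition code correcting only phase flips uses ~d.
d = 30  # assumed code distance for the target logical error rate

surface_code_qubits = d * d   # ~900 — order of 1,000
repetition_code_qubits = d    # ~30
print(surface_code_qubits, repetition_code_qubits)
```

Collapsing a 2D code into a 1D chain is exactly the saving that suppressing one error type in hardware makes possible.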

**Konstantinos Karagiannis:**

Thirty — that’s impressive. That’s a small number to get to logical.

**Théau Peronnin:**

But what we’re working on at Alice & Bob is not being satisfied with that number. There is still a lot of room for improvement here. We believe we can do more aggressive things in the way we do error correction, because a repetition code is about the most naive way to correct only one type of error. You could go for LDPC codes or fancier things, but then it requires a different type of connectivity and comes with its own challenges. We’re working on making those numbers ever better.

**Konstantinos Karagiannis:**

And then, when you get the logical qubit, is it going to have limitations on how many qubits it can be entangled with?

**Théau Peronnin:**

The whole point of having a logical qubit is to be good enough so that you’re not limited in the depth of the algorithm you can run.

It’s the ideal, or the theoretician’s, approach to quantum computing — the one that forgets that there is noise happening at some point — and this is where we know what we’re doing. NISQ is very exciting for many reasons, but the challenge is that we’re lacking formal proofs of what speed-up to expect depending on the level of noise, whereas in the realm of fault-tolerant quantum computing, you have algorithms where you can prove what type of speed-up to expect and what level of noise you can sustain. It’s the realm of certainties, but it comes with greater challenges in terms of engineering, for sure.

**Konstantinos Karagiannis:**

I saw that cat qubits are already available in emulated form, in Eviden’s Qaptiva dev platform. How many physical cat qubits, though, do you have running in the lab right now?

**Théau Peronnin:**

We recently published a single-cat-qubit experiment, but we’re working internally on a small error-detection code — the smallest code you can do. To do error detection, what you need is just two cats plus an ancilla. In our case, we’re adding two ancillas for technical reasons. We’re working on a four-cat-qubit device; there was a press release on it in French, and we presented it at the last APS March Meeting.

With that device, we’re testing all the building blocks of our logical qubit: the way we’re going to prepare the states, the way we’re going to do the entangling gates and how we’re going to extract the error syndromes. And we’ll then be scaling to six to 14 cats. We’re planning to release our 14-cat-qubit device next year. This should allow us to demonstrate a proper logical qubit based on this two-level error-correction scheme — first, autonomous bit-flip suppression, then a 1D chain to correct for phase flips — and show this nice beginning of exponential reduction in phase flips.

**Konstantinos Karagiannis:**

Right now, you were thinking 30 was required for logical, but you’re saying next year, if you put out a 14-cat system, it’ll probably be one logical qubit with 14 cat qubits creating it, give or take.

**Théau Peronnin:**

What is the definition of a logical qubit? There is no clear reference there. The goal for the company, and the way we view logical qubits, is a qubit that is good enough for almost every industrial application — something that gets to 10⁻⁹. But the way to claim that you’re well on the way to getting there — to prove you have a very strong prototype of a logical qubit — is to show that as you increase the parameters you’re controlling, the number of qubits and the level of redundancy, you suppress the remaining errors exponentially. What Google showed recently is that they’re just at the threshold: as they increase, the remaining errors stay constant. What we want to prove is that as we increase the number of cats, we decrease the remaining errors exponentially.

**Konstantinos Karagiannis:**

That makes sense. You’ll need, obviously, multiple logical qubits to have gates and prove everything out. What are the biggest challenges you see to scaling?

**Théau Peronnin:**

Here, you have to know what we’re good at and what we’re not the best at. We think we’re good at designing our qubit. Not that many groups out there do so. We’re good at designing architecture, and we’re somewhat close to state of the art in scaling and wiring and increasing the size of the device, but we’re not the best out there. Our goal for the next two to three years is to scale up to about 40 cat qubits. And this will allow us to prove all the hypotheses of our architecture. With 40 cats, you can either operate it as a very long-lived logical qubit, or several short-lived logical-ish qubits, and demonstrate how you’re going to do routing between them, how you’re going to do logical gates between them, and prove all the architecture.

Now, some players are better than us at scaling superconducting qubits. And the challenge here is that, as you scale and move towards industrial-scale nanofabrication processes, you remain at the same level of quality. The fewer errors you begin with, the fewer you have to correct for — you still need good qubits. And then, as you scale, you’ll have yield issues. You might want to start thinking about how you’re going to connect chiplets, and then how you’re going to connect different fridges together to operate all of them as a single large-scale quantum computer.

**Konstantinos Karagiannis:**

An interconnect solution.

**Théau Peronnin:**

Yeah. There are plenty of ideas out there on how to do that. From an industrial point of view, what might surprise you is that for me, the biggest challenge is not about the physics, about the nanofabrication or the fridges. It’s about the control electronics. At the moment, the main bottleneck in terms of the price of a quantum computer, at least a superconducting one, is the cost of the control electronics to control all those qubits. And here we need to gain at least a factor of 10 — more likely, a factor of 50 — on the price per qubit of control electronics. Some challenges there, maybe moving from FPGA to ASIC. There is room for improvement.

**Konstantinos Karagiannis:**

To key off something interesting you said: it sounds like you might be able to have a system where, depending on the use case, you shift how many physical qubits are needed to make a logical qubit. There could be some use cases where you can get several logical qubits because they don’t have to be that long-lived, but for something more challenging, you get fewer. Is that true? Do you think you might be able to have that variability depending on use case?

**Théau Peronnin:**

Actually, many architectures out there will have these capabilities, and this will come with some very exciting developments in how the compilers work, how the software stack leverages that reprogrammability. There are some very clever ideas on how to best exploit all that flexibility.

**Konstantinos Karagiannis:**

That’s rarely put forward — that idea. When people guess the ratios they need for their companies, you don’t hear IBM say, “We need 1,000 for a logical qubit, but if you’re running fraud detection, you only need 500.” You never hear that. I found that interesting.

**Théau Peronnin:**

It makes sense. When you’re racing to get to the Moon, you start by thinking about how you’re going to get there, not how you’re going to decorate your house on the Moon — first things first. And there are still some major challenges in delivering these first logical qubits. There is a lot of underappreciation of how profound a scientific milestone a logical qubit is. It’s going to be the first device to escape decoherence, somewhat similar to Sputnik escaping gravity and being in orbit. A logical qubit is an abstract device that manages to be decoupled from the rest of your universe, to have no causal link, no information leaking out of it to the rest of the world. Philosophically, it’s going to be a very profound milestone. Technologically, there will still be a lot of work in scaling the devices.

**Konstantinos Karagiannis:**

It will be the first time that a qubit starts to feel a little spooky. It’ll start to feel a little more like what David Deutsch was thinking about, I believe, when he was first talking about quantum computing.

That’s a pretty great note to end on. Are there any use cases you imagine this machine being ideal for out of the gate, even though it’s a terrible term to use?

**Théau Peronnin:**

The fun thing about cat qubits is — and this is why we put it on Qaptiva: As a physical qubit, it’s not a very good one for NISQ. It’s terrible because you have too many phase flips and no bit flips. But as you get to a logical qubit, it becomes the ideal platform.

I believe, as we’re helping more and more industrial players identify their use cases, that when you actually put numbers on those use cases, most of the time you end up requiring logical qubits. When you look at Monte Carlo for finance, or battery design, or even Shor’s algorithm — wherever you look, you end up requiring levels of performance of 10⁻⁷, 10⁻⁹ or 10⁻¹² errors per qubit per gate, with a few thousand of those qubits — maybe a few hundred for chemistry. Cat qubits have the potential to become the universal fault-tolerant quantum computing basis for all those use cases, but they will definitely not be champions in the NISQ era.

**Konstantinos Karagiannis:**

Thanks so much for your clear answers. It’ll be pretty exciting to see what comes of this. I’ll definitely be keeping an eye on it. Thanks so much for sharing this with our listeners.

**Théau Peronnin:**

Thanks a lot for having me.

**Konstantinos Karagiannis:**

Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.

Alice & Bob are well-known placeholder names for senders and receivers in cryptographic setups and thought experiments. The company of the same name is focused on building fault-tolerant quantum computers. Taking inspiration from another thought experiment, that of Schrödinger’s cat, Alice & Bob has designed what it calls cat qubits. Just as Schrödinger’s cat is a superposition of two classical states, cat qubits are superpositions of two quasi-classical states. The hope is that these qubits will pave the way toward error correction and consequently provide a means to exit the NISQ era.

Cat qubits are created by encoding information in two quasi-classical states of a harmonic oscillator. Interestingly, while other modalities try to isolate qubits from the environment, Alice & Bob strongly couples its cat qubits to their environment in a controlled way. The two-photon exchange used to do so dissipates entropy without revealing the state of the device. This way, the chance of a bit-flip error is exponentially suppressed, at only a linear cost in phase-flip errors.

Cat qubits are expected to have lower required overhead to create logical qubits. Depending on the type of problem being solved, the number of cat qubits needed to create logical qubits can even change. Expect early logical qubits based on this technology to have physical-to-logical ratios that range from 14:1 to 30:1. The resulting logical qubits will be able to run universal gates. It will be fascinating to see these running in a production machine.

That does it for this episode. Thanks to Théau Peronnin for joining to discuss Alice & Bob, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s *The Post-Quantum World*, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow #ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.