Transcript | Using AI to Improve Quantum Computing with Q-CTRL

AI is disrupting almost every industry right now, so it’s not surprising that machine learning techniques are being used to improve quantum computers and accelerate the coming of fault-tolerant systems. Applications may deliver real business impact sooner than we thought. Join Protiviti’s Konstantinos Karagiannis for a chat with Michael Hush from Q-CTRL and learn how they’re squeezing a whole lot extra out of the quantum software stack.

Guest: Michael Hush from Q-CTRL

K. Karagiannis:

AI is disrupting almost every industry right now, so it’s not surprising that machine learning techniques are being used to improve quantum computers and accelerate the coming of fault-tolerant systems. Find out how in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.

Our guest today is the chief scientific officer of Q-CTRL, Michael Hush. Welcome to the show.

 

Michael Hush:

Thank you for having me.

 

K. Karagiannis:

I love that title. It’s almost like Star Trek. A little bit off, but almost.

 

Michael Hush:

I started with the company five years ago, but then got promoted two years ago, and when I first heard the title, I was, like, “That sounds so fancy.” I love it. I was very happy.

 

K. Karagiannis:

Tell us about Q-CTRL for those who haven’t heard our episode with Michael Biercuk.

 

Michael Hush:

We create software infrastructure for quantum technology companies, and our software improves the performance of those devices using quantum control. Our software brings forward the ability to make these quantum devices practical and useful in the real world today. As a more concrete example, we have used our software to improve the performance of commercial quantum computers by a factor of 9,000 versus what is available today: We improve the probability of the quantum computer returning the correct answer. We also build quantum sensors. On the quantum-sensing side, we’ve just done a demonstration boosting the performance of a quantum sensor so it gives a factor-of-23 improvement in precision when operated under environmental and platform noise.

We’ve been around for five years now. We’ve got 90 team members around the world, in Berlin, Sydney and LA. We just opened an office in London. We also just closed our latest Series B round, which happens to be the largest Series B for a quantum software company in the world. We’re very happy about that, and we’re using that opportunity to continue to do good work and improve the performance of the quantum devices available today.

 

K. Karagiannis:

That’s the only letter that hasn’t been replaced with Q so far. It’s a Series Q round.

 

Michael Hush:

Hopefully, we’re around long enough.

 

K. Karagiannis:

How did you get interested in, and find your way to, quantum computing?

 

Michael Hush:

I started studying quantum mechanics in undergrad; I entered right out of high school. Quantum computers had already started to be talked about, and there was this talk about quantum teleportation from a professor at ANU, where I actually ended up doing my undergrad, and I got hooked. I don’t know how to describe it. Quantum mechanics was so interesting to me, so I studied it from undergrad on. When I finished my Ph.D., which was around quantum control, it was the beginning of people talking about how you control these devices. And I was one of the early Ph.D. students who got to do an entire thesis on controlling quantum systems, not just understanding the science around them.

At that point, I just wanted to keep working on quantum mechanics. In particular, I wanted to work on how to make these devices real, how to make them available, how to get other people to experience what I had been studying and understand how they could change our lives — these tiny things with fascinating physics that, unexpectedly, end up being used to change how we do cryptography, how we optimise problems, how we sense the world around us.

After my Ph.D., I hopped between postdocs. Eventually, I was a lecturer, mostly because at that time, that was the only way you could stay in the quantum field — by being an academic, studying and researching it. And then, very fortunately, I started working on some software myself, using machine learning to improve these quantum devices. At that point, fortuitously, a person I knew, Michael Biercuk, a professor at the University of Sydney, started Q-CTRL. I was there on the first or second day that the company started. He contacted me, and the timing was just right. It was an opportunity to move into industry and make quantum computing real — I just had to take it. And ever since then, for the last five years, I’ve been running the research program at Q-CTRL and proving out these control techniques. It’s been a very good journey so far.

 

K. Karagiannis:

Yes, and it culminates in that title. Like you said, it’s awesome. You mentioned AI and machine learning, and obviously, right now, this is the big buzzword — everyone’s interested all of a sudden. AI has had so many resurgences over the years.

You have some synergistic use cases. There are quantum ML and quantum-inspired tensor networks and things like that to train LLMs. Before all of that, about a year and a half ago, your team published a paper on using deep reinforcement learning for designing error-robust logic gates. Here’s your platform to tell us about that and its implications today.

 

Michael Hush:

There were multiple waves with machine learning. Before I did this reinforcement-learning project at Q-CTRL, there was an initial wave seven years ago, when TensorFlow became available and the democratisation of these machine learning tools started. That was when I started engaging with the field, because suddenly a physicist could do certain machine learning tasks that previously only a real expert in machine learning could do. We started looking at how we could use these machine learning techniques to improve automated closed-loop optimisation.

You’ve got a quantum science experiment. The first one I worked with was something called a Bose-Einstein condensate: Take a gas of atoms and cool it down to microkelvin temperatures, which is much colder than even the vacuum of space. Then all the atoms suddenly condense into this mega super atom called a Bose-Einstein condensate, which is a new phase of matter. Making these Bose-Einstein condensates involves careful manipulation of how you cool the atoms. And we thought, “This is an annoying task that people in the lab have to do all the time, and we don’t even know if we’re doing it as well as we can.” We tried to get a machine learning agent to take over and automate that optimisation and see if it could create a Bose-Einstein condensate. I did this with a research team at ANU.
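To make the closed-loop idea concrete, here is a minimal sketch in Python. The run_experiment function is a hypothetical stand-in for a real lab measurement (this is not Q-CTRL’s or ANU’s actual code), and an off-the-shelf optimiser plays the role of the agent proposing new control parameters:

```python
# Minimal closed-loop optimisation sketch: propose parameters, "measure",
# repeat. All numbers below are invented for illustration.
import numpy as np
from scipy.optimize import minimize

def run_experiment(ramp_params):
    """Hypothetical lab run: returns a loss to minimise, e.g. the negative
    condensate atom number produced by a given cooling-ramp setting."""
    optimum = np.array([0.3, -1.2, 0.8])  # unknown to the optimiser
    return float(-np.exp(-np.sum((ramp_params - optimum) ** 2)))

result = minimize(run_experiment, x0=np.zeros(3), method="Nelder-Mead")
print("best ramp parameters found:", np.round(result.x, 3))
```

In the real setting each evaluation is a slow, noisy physical run, which is exactly why learning-based optimisers that need fewer evaluations pay off over the simple simplex method used here.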

And the other thing that was interesting about the project is that I wrote software that was executed in the lab, and it was the fastest project I’d ever completed. The AI agent successfully figured out how to make a Bose-Einstein condensate, and we managed to do it faster than any other automated optimisation loop had done beforehand. And importantly, the whole collaboration between theorists and experimentalists — where, normally, theorists come up with ideas and then have to figure out how to run the experiment — we did that in, like, a month, because I wrote the code and they executed it.

 

K. Karagiannis:

No middleman.

 

Michael Hush:

No middleman — exactly. That taught me a few things. It taught me the power of software — the ability to make something easy to use, to make very complex, very tricky machine learning techniques usable out of the box with one line of code. That taught me that that is the way to get value across to these experiments.

Then that project went well, and we did an extension of it. We moved to a deep neural net, and we did that with a quantum memory. It was a research project with a different group at ANU. Quantum memories are used to store quantum information for long times. In this case, the deep neural net found a way of storing quantum information by manipulating some laser pulses in this gas of rubidium atoms, and it did it in a way no one could explain — but it did it better than anything anyone in the lab had tried before.

We began to see this potential, and then we were trying to improve the actual fidelity of two-qubit gates on superconducting devices, and at that point I’d learned a bit more about the framework of machine learning. Previously, we’d been focusing on closed-loop optimisation, where you look at the quality of your outcome with just one number.

When we learned more about reinforcement learning, the big opportunity we saw was that if you put a reinforcement-learning algorithm in charge of your quantum device, it has a chance to look at what’s going on in the device before the final point, when it gets the final quality measure of that gate — the fidelity. That was interesting because it showed that the machine learning technique could begin to understand, in some sense, the physics of the system in order to achieve the gate it was trying to achieve.

But we humans were hitting a limit: We couldn’t use our models and understanding of the device to improve these two-qubit gates much beyond what was already on them. We took a reinforcement-learning algorithm, put it on a superconducting device and put it in charge of directly interacting with the experiment, figuring out the state of that experiment over time, until finally it improved the fidelity of the gate. After a lot of work figuring out the best reinforcement-learning algorithm for that context, we found we were able to get improvements to the quality of the gates. It was a 30% boost in the quality of our two-qubit gates on these devices.
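As a toy illustration of the pattern (not the algorithm Q-CTRL used), here is a REINFORCE-style loop in Python where a Gaussian policy over a single pulse amplitude is nudged toward higher measured fidelity; measure_fidelity is a hypothetical stand-in for running the gate on hardware:

```python
# Toy policy-gradient calibration loop. The device model and all constants
# are invented; a real agent would control many pulse parameters at once.
import numpy as np

rng = np.random.default_rng(0)

def measure_fidelity(amplitude):
    """Pretend device: fidelity peaks at an unknown amplitude, with shot noise."""
    return float(np.exp(-(amplitude - 0.75) ** 2) + rng.normal(0, 0.01))

mean, std, lr, baseline = 0.0, 0.2, 0.02, 0.0
for _ in range(2000):
    amp = rng.normal(mean, std)            # sample an action from the policy
    reward = measure_fidelity(amp)         # run the "experiment"
    baseline += 0.1 * (reward - baseline)  # running-average baseline
    mean += lr * (reward - baseline) * (amp - mean) / std**2  # REINFORCE step

print("learned amplitude:", round(mean, 3))  # drifts toward the 0.75 optimum
```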

And at the time, we were looking at a swap operation. Swap operations involve three CNOT gates. They are very common across most algorithm embeddings, because on a real device, not all the qubits are connected, so you have to swap information around. Using this reinforcement-learning technique, we managed to achieve the biggest improvement to a swap gate out there at the time.
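The three-CNOT identity is easy to check; here is a short Qiskit snippet (assuming Qiskit is installed) verifying that three alternating CNOTs implement a SWAP exactly:

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Operator

three_cnots = QuantumCircuit(2)
three_cnots.cx(0, 1)
three_cnots.cx(1, 0)
three_cnots.cx(0, 1)

swap = QuantumCircuit(2)
swap.swap(0, 1)

# The two unitaries are equivalent, so every routed SWAP costs three CNOTs.
print(Operator(three_cnots).equiv(Operator(swap)))  # True
```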

That was what started Q-CTRL applying these techniques on real devices, and now that reinforcement-learning technique forms part of our automated tool chain, which we use in products like Fire Opal to improve the performance of devices. Today, when you use Fire Opal to improve whatever platform you happen to be targeting, whether it’s IBM or Rigetti or another provider, in the background, we’re using those reinforcement-learning techniques to calibrate the device.

 

K. Karagiannis:

That was a remarkably clear answer. That was terrific. Anyone listening should be able to visualise that. Has that evolved? Machine learning has obviously evolved in a year and a half. Have you found any more gains that you’ve achieved in recent times?

 

Michael Hush:

If we’re talking about big waves of movement and change, there’s GPT-3 — that was another moment in time which was incredibly exciting. We have these AI algorithms related to transformers, large language models and generative AI. First, we started using ChatGPT, so we all started experiencing it and getting very excited about it. It has been a topic of discussion internally. We have been thinking, where would this affect things? How can we plug it in?

I’ll start with the negatives and then the positives. The big thing that’s happening right now in AI is, the data is massive. These models are being trained on huge amounts of data. And what we’re seeing in quantum devices — which was a challenge for the reinforcement learning, and where we got stuck to the point that we couldn’t push further — is that if you’re trying to learn directly from these quantum machines, there’s not enough data. They are noisy, they are very hard to access — there are not a lot of them in the world. And the amount of information you can get off them is relatively limited, because you’re typically competing with all the other people who want to use these day to day. That, at the moment, is the big gap.

If you look at where AI can help in the quantum world right now, the question is, how do we get as much data as is being used to train these other models, in a place where it can meaningfully and usefully improve an actual quantum computer? For us, at the moment, that means the compilation stage. The compilation stage is a significant challenge. At the moment, compilers are pretty good up to about 100 qubits. Beyond that, you begin to see that the algorithms we use today are not going to scale appropriately in terms of their speed.

There is a big opportunity for AI and machine learning techniques to improve the speed of that compilation process. And it’s a place where you can generate a lot of appropriate data: You can do a lot of compilation on a classical computer and produce massive data sets that are appropriate for training these models. And there’s an opportunity for those models to simplify those optimisation algorithms. That’s where we’re putting the most effort, and we’re excited about it in the near term.
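The compilation step in question can be seen in miniature with Qiskit’s transpiler: routing a circuit onto hardware with limited connectivity inserts SWAPs and remaps qubits, and the classical search behind that is what gets slow at scale. A small sketch (assuming Qiskit is installed):

```python
from qiskit import QuantumCircuit, transpile
from qiskit.transpiler import CouplingMap

qc = QuantumCircuit(4)
qc.h(0)
for target in (1, 2, 3):
    qc.cx(0, target)  # written as if qubit 0 touched every other qubit

line = CouplingMap([[0, 1], [1, 2], [2, 3]])  # but the hardware is a 1-D line
routed = transpile(qc, coupling_map=line, optimization_level=1)
print("depth before:", qc.depth(), "after routing:", routed.depth())
```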

Longer-term, the other thing that’s exciting at the moment is transfer learning. When we initially did this reinforcement-learning work, transfer-learning techniques were being thought about, but there wasn’t a clear winner. Now we’re seeing impressive results. Another aspect of ChatGPT is that it’s trained on a massive amount of data, but the safety mechanisms are put in through humans interacting with and retraining the learned model to behave well. That’s done with reinforcement-learning techniques — a kind of transfer learning that takes an AI that will do anything, unconstrained, and makes it behave well.

Similarly, on the quantum side, we’re now thinking about whether we can train with simulation. Simulation data for quantum computers is, at the moment, much easier to generate than data from the devices themselves. If you can do some training of your AI agent with all of these simulations, then eventually you could use transfer learning to reduce the amount of actual online training you need to do and still get good performance.
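A minimal sketch of that sim-then-device idea, with plain NumPy and invented data: pretrain a tiny model on plentiful simulated measurements, then fine-tune it on a handful of scarce “device” measurements whose true response differs slightly from the simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit(X, y, w, lr, steps):
    for _ in range(steps):                     # plain gradient descent on MSE
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

# Plentiful simulated data: roughly the right physics, slightly off.
X_sim = rng.normal(size=(5000, 4))
y_sim = X_sim @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(0, 0.05, 5000)

# Scarce device data: the true response differs from the simulation.
X_dev = rng.normal(size=(40, 4))
y_dev = X_dev @ np.array([1.1, -0.4, 0.25, 0.1]) + rng.normal(0, 0.05, 40)

w = fit(X_sim, y_sim, np.zeros(4), lr=0.1, steps=500)  # pretrain on simulation
w = fit(X_dev, y_dev, w, lr=0.05, steps=100)           # fine-tune on "device"
print("fine-tuned weights:", np.round(w, 2))
```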

 

K. Karagiannis:

I was going to ask, so I’m glad you gave a quick explanation of transfer learning for those who don’t geek out on machine learning. Would you say, then, that for training on the simulated data, you would introduce noise to the simulation so that the model understands how the physical computer would work too?

 

Michael Hush:

You need to introduce noise. The other fun thing about quantum computers is that their noise is unbelievably difficult to simulate. This was very important in our process of discovering how to improve devices: A lot of the noise models proposed in the simulation packages, like in Qiskit, use noise that is easy to simulate. The noise they generate is very simple. It’s the type of noise you would get classically — the type of noise that looks incoherent.

But what we’ve discovered consistently on quantum devices is that there’s a huge amount of noise on these machines which is coherent. Coherent noise has this fantastic property that it can be suppressed and removed, and that’s something we do. But in terms of simulating the realistic dynamics of your system, coherent errors are very difficult to simulate. That, again, is the challenge when we’re trying to train these AI agents — getting it just right, where you train them as much as you can with simulations that are meaningful and then recorrect them on the device against the reality of these quantum machines. Also, as the devices get larger, they just can’t be simulated efficiently either. It’s a tough problem.
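The distinction can be sketched with Qiskit Aer’s noise models (assuming qiskit and qiskit-aer are installed): an incoherent error is something like a depolarizing channel, while a coherent error is a small systematic over-rotation applied as a unitary:

```python
import numpy as np
from qiskit_aer.noise import (NoiseModel, coherent_unitary_error,
                              depolarizing_error)

theta = 0.05  # small systematic over-rotation angle (a coherent error)
over_rotation = np.array([
    [np.cos(theta / 2), -1j * np.sin(theta / 2)],
    [-1j * np.sin(theta / 2), np.cos(theta / 2)],
])

noise = NoiseModel()
# Coherent error on x gates; incoherent (depolarizing) error on sx gates.
noise.add_all_qubit_quantum_error(coherent_unitary_error(over_rotation), ["x"])
noise.add_all_qubit_quantum_error(depolarizing_error(0.001, 1), ["sx"])
print(noise)
```

The coherent piece adds up systematically shot after shot, which is why it can also be systematically cancelled, while the depolarizing piece cannot.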

 

K. Karagiannis:

And machine learning has a way of surprising us too — sometimes it finds things we didn’t think it would. For all I know, there’s some database of actual runs versus expectations out there, and machine learning will figure it out.

 

Michael Hush:

I agree with that. This is more with our initial work with the quantum memories, but I have worked on teams where the machine learner is able to, if you give it direct access to a machine, find something that you weren’t expecting. And I remember us having many lunchtime debates, looking at pulses that came off this machine learner, being, like, “Why on earth is it quenching it and releasing?” None of us could figure out how it was achieving this particular physical dynamic. There is a lot of opportunity there. The other way to put it is, as there are more machines, as there’s more time available, there’s more opportunity for machine learning to get the data required to make these discoveries.

 

K. Karagiannis:

It sounds like you’ve considered lots of layers of the software stack where machine learning can help, from preprocessing the data to better simulations and all these things. Do you think there’s any room for machine learning to actually optimise the running — like some control circuitry that it’ll change?

 

Michael Hush:

We use AI and machine learning techniques throughout our stack — when we calibrate and tune up the machine, when we’re running it, in how we encode our control techniques and in how we mitigate noise at the end of the process. We use neural nets for a lot of that work as well. We are already using machine learning techniques to improve the performance of the quantum devices run today. And it’s only the beginning. There’s certainly a lot more we could potentially learn from. I’m interested in learning directly from the machines. They’re already very difficult to understand because there’s so much complex physics — especially at scale, things get very complicated. I certainly see opportunities to discover potentially better optimisations or better embeddings of particular algorithms. We’re excited to explore that.

 

K. Karagiannis:

You didn’t do yourself any favors. Your paper was, like, “We’re going to prove it — 9,000x.” Where do you go from there? If you come up with 1,000x, they’ll be, like, “A thousand — good job.”

 

Michael Hush:

Our big challenge is, we want to keep the 9,000. Sometimes we’ve had to say 1,000 because 9,000 seems too big. But the key thing is device size. That’s our push: We’re pushing to make sure that with our control techniques, you see these results scale as you go up. We’re happy to see that typically, the benefit gets bigger and bigger as you use more qubits. But we’re looking at 100, we’re looking at 200. We’re going for those big numbers. That’s where we want our control techniques to be efficient, so you get an answer very quickly.

 

K. Karagiannis:

For people who dig deeper, they’re going to come across ideas of error correction and error suppression. Can you explain the difference between those?

 

Michael Hush:

There’s quantum error correction, and there are error suppression and error mitigation, which are becoming common now. Quantum error correction involves encoding with multiple qubits: A single piece of information gets encoded across multiple qubits, and then you use that redundancy to detect whether an error has occurred on one of those qubits.

And then, once you have detected it, the encodings are designed so that you can apply a pulse and correct it. The important thing about that redundancy and the quantum error-correction process is the fault-tolerance theorem: If you are able to get your gates to work well enough and you combine them with quantum error correction, you’re able to run a quantum computer scalably for an indefinitely long time. Basically, you can run long programs. This is the aim. We’re all heading there. This is where we’re heading as well at Q-CTRL — we’re already working with quantum error-correction algorithms, and we’re ensuring that everything we do is compatible with them. I’ll get back to that.
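The simplest concrete example of that encoding is the three-qubit bit-flip repetition code, sketched here in Qiskit (assuming it is installed): one logical bit is spread across three physical qubits, and two parity checks reveal which qubit flipped without measuring the data directly:

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(5, 2)        # 3 data qubits + 2 syndrome ancillas
qc.cx(0, 1); qc.cx(0, 2)         # encode one logical bit across three qubits
qc.x(1)                          # inject a bit-flip error on qubit 1
qc.cx(0, 3); qc.cx(1, 3)         # ancilla 3: parity of qubits 0 and 1
qc.cx(1, 4); qc.cx(2, 4)         # ancilla 4: parity of qubits 1 and 2
qc.measure([3, 4], [0, 1])       # syndrome 11 points at the middle qubit
print(qc.draw())
```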

The next one is quantum error suppression, and this is what we focus on at Q-CTRL. We look at an individual qubit and try to actually understand the physics of the device. If your qubit is experiencing an error, it might be because of an incorrect detuning, or because the qubit is coupled to some other two-level system nearby — some type of coherent interaction. That qubit will then accumulate an error because the particular microwave pulse was wrong, or because it’s connected to another system and some of the energy gets mixed up with it.

The cool thing about error suppression is that you can use your understanding of the physics of the device to cancel that out. If you happen to have a detuning on your qubit, you can flip the qubit halfway through. And as the error was accumulating in one direction, if you flip the qubit, that error will actually undo itself. And in a sense, you get this interference or this error-suppression technique where, using your understanding of the physics of the device and the way you implement your algorithms, the way you implement your gates, you can get errors on the system to cancel themselves out.
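That flip-halfway trick is a spin echo, and it can be verified with a few lines of linear algebra. In this NumPy sketch, an unwanted detuning accumulates phase during free evolution, and inserting X flips around the midpoint makes the second half unwind the first:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)

def drift(delta, t):
    """Free evolution under an unwanted detuning delta for time t."""
    return np.diag([np.exp(-1j * delta * t / 2), np.exp(1j * delta * t / 2)])

delta, t = 0.3, 2.0
plain = drift(delta, t)                                     # error accumulates
echoed = X @ drift(delta, t / 2) @ X @ drift(delta, t / 2)  # flip at midpoint

print(np.round(plain, 3))   # unwanted phases on the diagonal
print(np.round(echoed, 3))  # the identity: the error undid itself
```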

The great thing about error suppression is that it runs within a single execution on the device: It doesn’t require any additional executions, and it doesn’t require any extra qubits. It lets you get rid of errors in a way that doesn’t require additional resources. That’s one of the key things all of our products at the moment are built around, and it’s why, if you run it on a commercial device, the actual execution cost is one execution of the machine. It doesn’t cost you extra to run these things.

The challenge is that error suppression can’t get rid of everything. There is always going to be some noise left. But the good news is that error suppression is compatible with, and enhances, quantum error correction. If you add quantum error suppression, you can eliminate quite a significant amount of the errors before you add your quantum error-correction code. And the cool thing is that quantum error correction has this thing called the threshold theorem: Your error rate needs to be below a certain level before error correction kicks in and gives you a benefit.

Quantum error suppression can get you there more quickly, because it removes many of the errors across the device, and it does so deterministically, within a single shot. You can combine that with quantum error correction and get a quantum error-correction protocol that is more accurate and runs on the same device — but earlier, because you’ve eliminated many of the errors. As an example, we’ve demonstrated that our error-suppression techniques can boost the performance of quantum error correction in the sense that they improve the probability of detecting the correct error — and we improved that by a factor of 2.7. The quantum error-correction protocol is able to detect errors better.

A lot of people talk about correlated errors being a big problem for quantum error-correction techniques, because if your correlation is bigger than your code size, your error correction can’t overcome it. The great thing about suppression techniques is that they can decorrelate your noise — make it uncorrelated. That’s another way they improve quantum error correction.

Finally, there’s another significant movement going on now around quantum error mitigation. Mitigation is about running extra executions: You don’t increase the number of qubits — you use the same number of qubits, but you do more executions on your quantum computer. And you may use a technique like zero-noise extrapolation or probabilistic error cancellation (PEC), where, in some cases, you’re executing a circuit multiple times with different lengths of gates or some other modification. That modification gives you insight into the noise, which you can then try to remove to get a better picture of what’s going on. Error-mitigation techniques are another way to suppress errors, but they do so at the cost of executing more times.
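Zero-noise extrapolation in miniature: run the same circuit at deliberately amplified noise levels (for example, by folding gates), then extrapolate the measured expectation value back to zero noise. The measurement numbers below are invented stand-ins for real device runs:

```python
import numpy as np

scale_factors = np.array([1.0, 2.0, 3.0])  # 1x, 2x, 3x amplified noise
measured = np.array([0.82, 0.67, 0.55])    # hypothetical <Z> at each level

slope, intercept = np.polyfit(scale_factors, measured, 1)
print("zero-noise estimate:", round(intercept, 3))  # value at scale factor 0
```

Each extra scale factor costs more executions, which is exactly the trade-off described here.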

Again, we’re in a fortunate position, because quantum error suppression boosts the performance of mitigation. You can use suppression underneath a mitigation protocol: Suppression gets rid of most of your errors, and then the mitigation technique takes care of the few that remain. But the challenge going forward, for people working on mitigation techniques, is that it’s not trivial to connect them to quantum error correction. Because they require multiple executions, it’s not obvious how you would combine them with a quantum error-correction protocol. With quantum error correction, you get one go: You’re going to detect the error and fix it. You don’t get a chance to reset your computer and start again.

Those are the three error techniques out there today. We’re fortunate — we work on error suppression, which happens to boost the performance of any other technique you want to use. Quantum error mitigation is definitely what’s happening now, because it’s the only tool we have. Unfortunately, the devices are not big enough for us to use quantum error correction today, but mitigation is a good tool to get us closer to early demonstrations of NISQ applications. Getting those computers to go a bit longer, a bit deeper — it’s another tool on our belt to help us get there.

 

K. Karagiannis:

And of course, we’re already used to running multiple shots anyway, so mitigation isn’t that unusual a concept. No one runs once and is done. It’s all there.

Do you see any future improvements to suppression yielding a lower number of physical qubits required to get a logical qubit? Do you think that would have that kind of effect?

 

Michael Hush:

That is the aim and the hope. It should do so by reducing the number of errors. By improving the quality of the qubits, we should be able to get demonstrations of quantum error correction giving a significant benefit sooner rather than later. And you do see variations of the techniques we use being applied in the state-of-the-art quantum error-correction protocols done today. Quantum error suppression is already a necessary element of state-of-the-art quantum error-correction protocols. We’re confident we’ve got some of the best stuff out there, and we’re looking to make sure our techniques can also help with that demonstration. It’s an important research goal for us.

 

K. Karagiannis:

Anything to get away from a 1,000:1 physical-to-logical qubit ratio. We don’t want that. That’s too high. We’ll never have fault-tolerant machines that way. Until all this hits — until we’ve got it down to, I don’t know, 16:1 — it’s going to be the NISQ era, like you said. And what do you think these early machines are most useful for now?

 

Michael Hush:

I’m quite fortunate in that I don’t have to make that call myself. The good thing about being at Q-CTRL is that our aim is to help everybody else get there, and we’ll support you no matter what you’re trying. So I’ve refrained from trying to look into the future myself and pick a winner. But we’re seeing a significant trend where people are looking to use hybrid quantum algorithms, which means you have a regular, traditional, standard computer, and you’re combining it with a quantum computer. You let the standard computer optimise a circuit on the quantum computer to solve some type of optimisation problem or some quantum chemistry problem. Those are the kinds of techniques we’re seeing that are popular.

On that front, I’m expecting that for the initial demonstrations, hardware providers will also be improving the speed at which they can run these hybrid algorithms. We’ve been working on further control techniques targeting hybrid algorithms, in terms of getting that performance to go faster. And in terms of business value, these optimisation problems align best with business cases as well. Those are most likely to be the first application.

That was very abstract, so let me give you a more concrete one. We also work on applications, because we find that having one part of our company look at the full stack, from the application down to the bottom, is good hands-on experience for understanding the challenges with the machines and how to push forward. The work we do there is with Transport for New South Wales. They run the train lines and all the public transport in my home state, where I live in Sydney, and we’re working with them to optimise the transport network — your bus schedule, your train schedule.

What we know is that the current state-of-the-art optimisation techniques used to design these systems are suboptimal by about 1–2%. And when you bring that up to the scale of an entire city, that lack of optimality can be billions of dollars per day. We’re experimenting with how a quantum computer can accelerate and solve that problem — how you can get from A to B in your daily transport life, how you get from work to home, faster — at a bigger scale, the designing-the-network scale.

Specifically, we’re looking at the capacitated vehicle-routing problem and trying to improve its performance on a quantum computer. That’s the kind of application where you’re most likely to see these benefits first: You’ve got a clear connection to a business case, you’re doing some type of optimisation and it’s targeted at a hybrid algorithm — a classical computer and a quantum computer running together to get the best of both worlds.
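To show the flavour of the formulation (a deliberately tiny, invented instance, not the Transport for New South Wales problem): a routing-style assignment can be cast as a QUBO, the form that hybrid algorithms such as QAOA consume. Here we brute-force the candidates purely to expose the objective’s structure; the quantum computer’s job would be to search this landscape faster at sizes where brute force fails:

```python
import itertools
import numpy as np

# x_i = 1 if stop i is served by vehicle A, 0 if by vehicle B (4 stops).
# Q encodes invented pairwise travel costs plus load-balance penalties.
Q = np.array([[-2.0,  1.5,  0.5,  1.0],
              [ 0.0, -2.0,  1.0,  0.5],
              [ 0.0,  0.0, -2.0,  1.5],
              [ 0.0,  0.0,  0.0, -2.0]])

best = min(itertools.product([0, 1], repeat=4),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print("best assignment:", best)
```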

I’m trying to make some estimates around when that’s going to happen. I know that’s a very risky thing to do, but at the very least, it’s looking like we’re going to have commercially available quantum computers that can do something you cannot do with a simulation on a classical computer. That’s most likely to happen next year. That’s the first threshold. After that, it’s a matter of figuring out where we get this actual advantage. That’s the second significant hoop to jump through, and that is harder to predict.

 

K. Karagiannis:

You don’t have some use case in mind for the future that you think will be the first big advantage. You’re hedging your bets here with something you can see practically right now — this optimisation next year. I agree.

 

Michael Hush:

Exactly. Do you have any opinions around this? I’m curious.

 

K. Karagiannis:

Once we have strong error correction, something amazing in simulation might happen too. That’s when we might see these chemical-engineering feats — protein folding, things like that — that would just blow someone away in a lab.

 

Michael Hush:

That’s the key. You want error correction. This is the culture we have at Q-CTRL as well. It’s simple: We’re doing two things. We’re keen on error correction, and we’re keen on applications, and we’re definitely not committing to just one, because there are two good bets there. You need to be doing both right now. We need to be thinking about error correction as well as these near-term applications. If you’re not thinking about fault tolerance in the long term, you’re not positioning yourself for the long-term journey you need to be on.

 

K. Karagiannis:

And a good way to wind this down now is to think back to how much AI and ML we’ve discussed. Are any of those use cases exciting to you? Do you see any way that quantum ML will help AI along in general, or something like that? So far, it’s been about AI helping you get a processor working right.

 

Michael Hush:

First, there’s the more academic, theory side: We know that quantum machine learning techniques can have an advantage in learning quantum information. That’s much more on the academic side, but it’s a known advantage that we can prove out. And there’s interest on the quantum-sensing side: There’s an idea that if you have a quantum device sensing something and getting entangled with it in a complex way, you might be able to use quantum machine learning to extract information from there. That’s interesting but very forward-thinking. I still think it’s very good research, and it’s something that should be thought about.

On the more practical side, what you’re seeing in industry is people taking some type of deep neural net and replacing a layer with a quantum computer. We know that more expressive layers in a machine learning model can give a big advantage. And there is something very interesting and exciting about putting a quantum machine in there that can create correlations which just would not be efficiently creatable with a classical device. It’s an expressive layer in that deep net that would be doing something very interesting.
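A sketch of that hybrid idea, using nothing but NumPy: classical features enter as rotation angles, trainable weights are further rotations, an entangling gate creates the correlations, and the layer’s output is an expectation value. This is a generic illustration of the pattern, not any particular product:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
CNOT = np.eye(4)[[0, 1, 3, 2]]  # swaps |10> and |11>: control on qubit 0

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def quantum_layer(features, weights):
    """Two classical features in, one expectation value out."""
    state = np.zeros(4); state[0] = 1.0                        # start in |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state  # encode inputs
    state = CNOT @ state                                       # entangle
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state    # trainable part
    return float(state @ np.kron(Z, I2) @ state)               # <Z> on qubit 0

print(quantum_layer([0.4, 1.1], [0.2, -0.7]))
```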

It’s an exciting thing to be looking into, but from the people I’ve talked to, the challenge is mostly benchmarking. People are getting exciting results, but then it’s difficult to benchmark those against an actual classical machine learning technique, because the quantum layer is doing something new and interesting and very different from a regular neural net.

But what is your classical benchmark? That’s the big challenge they’re trying to figure out now, because the other thing quantum machine learning techniques involve is stochastic layers. Ultimately, a quantum computer is an inherently probabilistic element in your system. And I’d say stochastic or nondeterministic machine learning layers are not a deeply understood field either, but one with a lot of promise. That’s the interesting challenge on the machine learning side. There are people who are seeing advantages, but they can’t quantify how real that is right now compared to a classical benchmark. It’s definitely an area to continue to look at and improve.

In terms of our interest specifically: When you train these neural nets on a quantum computer, you want to make sure that when you go and use them later, the quantum computer reproduces that particular neural net the next day. The challenge is, these devices are shifting and changing. They’re not a fixed computer every day — they’re not necessarily reliable in that sense. But we use control techniques, and that’s the number one thing control techniques give you: We can take something that is shifting and changing and make it behave the same way every day, more consistently. That’s my interest right now — figuring out how we can look at these quantum machine learning algorithms and use control to make sure that you train once, and then you’re able to use that net reliably the next day and it gives you the same result you trained it on.

 

K. Karagiannis:

That’s a terrific point. There’s nothing more frustrating than adding a hidden layer that may or may not even be in our universe tomorrow. That’ll introduce some nondeterministic issues there.

Thank you so much for joining. You gave such terrific answers. A lot of listeners are going to love to dig into everything you said. And I look forward to seeing your 90,000x-improvement paper, which is coming.

 

Michael Hush:

That’s the next step. Thank you very much.

 

K. Karagiannis:

Now it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.

Q-CTRL uses software to improve the performance of quantum devices. Michael started experimenting with TensorFlow before the company was formed and saw the possibilities of machine learning. This carried over into his work at Q-CTRL, starting with his team improving the fidelity of two-qubit gates by 30% using machine learning. The technology eventually became Fire Opal and is used to improve numerous commercial quantum computers.

Applying ML across the software stack, the company is working on improving compilation and the calibration and tuning of systems. AI might even be able to better simulate the noise in quantum computers for more realistic simulations while algorithms are being developed and tweaked. While we’re all waiting for error correction, Q-CTRL is working on error suppression, which aids the former. Error-suppression techniques use the physics of a device to help detect and reverse certain types of errors introduced during computation. For example, something as simple as flipping a qubit mid-calculation can undo an error that was accumulating. This produces better results without extra qubits or repeated runs. The ultimate goal is, of course, practical advantage in real business applications as soon as possible.

That does it for this episode. Thanks to Michael Hush for joining to discuss Q-CTRL’s performance-improving approaches, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com or follow @ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.
