Transcript | Neutral-Atom Quantum Computing with QuEra

We’ve covered several types of quantum computing hardware on this show, some available, some still forthcoming. Neutral atom is a newer addition to the ranks of systems accessible on the cloud. How does it work? What kinds of real-world business use cases is it excelling at already? Join Host Konstantinos Karagiannis for a chat with Alex Keesling from QuEra about this new approach to high qubit counts. 

Guest: Alex Keesling from QuEra

K. Karagiannis:

We’ve covered several types of quantum computing hardware on this show — some available, some still forthcoming. Neutral-atom is a newer addition to the ranks of systems on the cloud. How does it work? What kinds of use cases is it excelling at already? Learn about this new approach to high qubit counts in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is the CEO of QuEra Computing, Alex Keesling. Welcome to the show.

 

Alex Keesling:

Thanks, Konstantinos. I’m very happy to be here.

 

K. Karagiannis:

Let’s start out with an easy one: Tell us how you found your way to quantum computing and to creating QuEra.

 

Alex Keesling:

Let me start with telling you what QuEra is. We’re a leading provider of quantum computing hardware, software and application development, and we are based in Boston, so that’s part of the story. I grew up in Mexico, and when I was in high school, it was the first time that I heard about the concepts behind quantum computing. It sounded very sci-fi, very intriguing, and I decided that this was something that I had to get involved with. So I applied for undergrads all over the place, and I ended up coming to the Boston area to do my undergrad at MIT in physics. And once I was there, I was on a mission to find quantum computers and start using them. And little did I know that back then, that was not something that one could do.

 

K. Karagiannis:

Two qubits in a lab, maybe.

 

Alex Keesling:

Exactly. That was about state of the art, so I started getting involved in research and, particularly, learning about these beautiful experiments that were happening using what we call neutral atoms to do all sorts of fun and exciting things. After that, I spent a little bit of time in Germany doing more research in understanding how to better use these complex setups that were being built around the world to study quantum matter and use what we call quantum simulators to solve problems in quantum mechanics using a controlled quantum system.

When I started my graduate studies at Harvard — this was with Professor Mikhail Lukin — and got involved with Markus Greiner, another professor at Harvard, and Vladan Vuletić from MIT, we started an entirely new project that progressed very quickly. We were able to go from an empty lab in 2015 to controlling 51 neutral-atom qubits by 2017. And the progress just kept coming. And we decided that the right thing was to take this outside of the lab and put it into people’s hands and continue supporting the development of the technology and matching it to applications, and that’s how QuEra was born. Since then, there’s been a lot of progress on going through this commercialisation effort, and we recently put our first device, Aquila, on the cloud. It’s been a journey, and a very exciting one.

 

K. Karagiannis:

And Aquila is impressive. But before we talk about it, I wanted to give our listeners a little background on what neutral-atom quantum computing is — how it differs from other types that they might be more familiar with.

 

Alex Keesling:

The biggest difference is, what are we using as our qubits? There are other companies, research groups, that are attempting all sorts of different approaches to how to build a quantum computer. For us, we decided that the technology we’re most comfortable with — and that we most strongly believe can keep scaling and becoming more powerful — is one that uses qubits we take from nature. There’s no manufacturing behind the chip. What we use are individual atoms that we literally snatch in a vacuum and hold onto with laser beams.

This allows us to control many of these qubits all at once, and that allows us to continue building larger and larger systems. To give you a sense of this, I was talking about starting, in a research lab, with an empty space in 2015 that quickly moved to 51 by 2017. Now, we have systems working with 256 of these qubits, and we see this continuing to expand. We take these identical qubits to one another — neutral atoms — we isolate them inside a vacuum chamber, and, using all optical control lasers, we are able to create very large and very clean processors that we can reprogram every time that a user is running a circuit.

 

K. Karagiannis:

Can you give a high-level for people familiar with, let’s say, trapped-ion? How does this greatly differ?

 

Alex Keesling:

Many of the technologies are similar, but the difference is precisely in that neutral versus ion. We are using atoms that are, in a sense, complete — how you would find them, mostly in nature. And there’s a big advantage to this in that these neutral atoms, they don’t necessarily interact with one another. When you have ions, because they are charged particles, it’s like the classic experiment of rub a balloon in your hair, and you see how it gets all frizzy. It’s because things are constantly pushing against one another. With neutral atoms, you don’t get that, so wherever you want the atoms to be, that’s where they are, and they’re not fighting against this. Having this optical control and the ability to put them wherever we want allows us to have very large, very dense systems of neutral atoms.

 

K. Karagiannis:

That’s a great way to keep it simple. Ionisation obviously implies charge, so I wanted people to understand that difference. Now it all comes to life in a 256-qubit machine. Tell us a little bit about that machine.

 

Alex Keesling:

Yeah, this is our first machine. We’re very happy about having connected it to the cloud recently and made it available to customers. Aquila is the first and so far only neutral-atom quantum computer available to users everywhere on the cloud. It is a machine that allows users to very effectively use the quantum resources available to them by being able to effectively program the geometry of the chip.

For example, there are these FPGAs that are very efficient for classical computation. We’re doing something similar with what we call an FPQA, or field-programmable qubit array, mode. This gives users the ability to directly decide where atoms are going to be placed relative to one another. And then, in real time, we’ll take these instructions and Aquila will position all of our qubits in the places our customers are asking for. This is important because if you think about things like optimisation of a graph problem, this has implications for logistics, for finance. You want to be able to use the few qubits that you have as effectively as possible, and by being able to move the qubits around and to map whatever problem, whatever graph, you have in mind, directly onto the geometry of those qubits, you can cut down a lot on the overhead.
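To make the geometry-as-program idea concrete, here is a minimal sketch in plain Python (not QuEra’s actual API) of the mapping described above: atoms placed within a "blockade radius" of one another behave as connected nodes, so the layout itself encodes the problem graph.

```python
import math

def edges_from_geometry(coords, blockade_radius):
    """Derive a graph from atom positions: any two atoms closer than
    the blockade radius count as connected (a unit-disk graph)."""
    edges = []
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            dx = coords[i][0] - coords[j][0]
            dy = coords[i][1] - coords[j][1]
            if math.hypot(dx, dy) <= blockade_radius:
                edges.append((i, j))
    return edges

# Place four atoms (coordinates in micrometres, illustrative values) so
# the layout itself encodes the problem graph, with no routing overhead.
layout = [(0.0, 0.0), (0.0, 5.0), (5.0, 0.0), (9.0, 9.0)]
print(edges_from_geometry(layout, blockade_radius=7.0))  # → [(0, 1), (0, 2)]
```

On the real hardware the graph is never computed classically like this; the point is that choosing the coordinates *is* the problem encoding.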

This is the first unique feature of Aquila, and we’ve seen it get a very warm and positive reception by customers that have started accessing it on the cloud and exploring problems in very different verticals — both with commercial partners, from automotive to finance to logistics, but also with academics who are accessing it to further their own research and get access to resources that up to now have been locked away in just a few labs around the world.

 

K. Karagiannis:

You started answering this, but I was going to ask you if there are any aspects of the machine that make it better at certain types of use cases. We started with optimisation. How would this compare then to, let’s say, an annealer in terms of performance?

 

Alex Keesling:

It’s sometimes hard to make comparisons with other platforms, especially because there aren’t a lot of other available machines that we can use to benchmark out there at this scale. What I can tell you is, for example, for optimisation, we work together with our partners in the research world at Harvard, MIT and other places to look at, how can we use these machines to develop heuristic quantum algorithms that solve optimisation problems and, particularly, very hard optimisation problems? And what we found was that as we make the problems bigger and bigger and harder and harder, the scaling for the quantum device is better than the classical algorithm that we were using, which is what’s behind annealing — this algorithm is even known as simulated annealing. It’s using a digital classical computer to try to emulate how these processes happen in nature. That’s a very important point of comparison for us.
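For reference, the classical baseline Alex mentions, simulated annealing, can be sketched in a few lines. This is a toy illustration for a maximum-independent-set problem, one of the hard graph-optimisation problems neutral-atom machines are often benchmarked against; the cost function and cooling schedule here are illustrative choices, not a production solver.

```python
import math
import random

def simulated_annealing_mis(n, edges, steps=5000, t0=2.0, seed=1):
    """Toy simulated annealing for maximum independent set: flip one
    vertex at a time, penalising edges whose endpoints are both chosen."""
    rng = random.Random(seed)
    state = [0] * n

    def cost(s):
        # Negative set size, plus a penalty of 2 per violated edge,
        # so dropping a conflicting vertex always improves the cost.
        conflicts = sum(1 for a, b in edges if s[a] and s[b])
        return -sum(s) + 2 * conflicts

    best, best_c = state[:], cost(state)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-3  # linear cooling schedule
        cand = state[:]
        cand[rng.randrange(n)] ^= 1  # flip one random vertex in/out
        dc = cost(cand) - cost(state)
        if dc <= 0 or rng.random() < math.exp(-dc / t):
            state = cand
            if cost(state) < best_c:
                best, best_c = state[:], cost(state)
    return sorted(v for v in range(n) if best[v])

# A 5-cycle: the largest independent set has size 2.
cycle = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(simulated_annealing_mis(5, cycle))
```

The scaling comparison Alex describes is between heuristics like this, run on classical hardware, and the native quantum evolution of the atom array as the instances grow.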

But there are other areas we’ve been looking at where, beyond optimisation, two classes of problems are interesting to discuss. One is, of course, there’s a lot of interest nowadays in machine learning. And we found a few particular ways of doing machine learning that we can do on this type of machine very directly. And we’re looking at developing heuristic ways of training these models and comparing them to classical alternatives. This is, for example, something where making a direct comparison with other quantum devices is a little hard because we don’t have devices to compare with, or at least not that are easily accessible on the cloud.

There’s been a lot of interest in seeing how these quantum machine learning applications can give us some advantage, especially in classification problems. And beyond that, even before quantum computers were really a thing, there was this idea that some of the problems we’re trying to solve, predicting the behavior of natural systems, materials, chemicals and so on, called for quantum simulation: “In reality, what we would like to do is to efficiently simulate nature.”

But we know that nature is quantum mechanical, so we need to build computers or systems that are quantum mechanical in nature. This was a key idea by a famous American physicist, Richard Feynman. And we see that for these kinds of applications, which is where a lot of our users and the research community are accessing the device, what we can do with these very large systems with hundreds of qubits is unique. And it’s already adding insights to how we think about these problems and how we think about designing new materials, for example.

 

K. Karagiannis:

With this system, you introduced a concept people don’t often hear — this idea of analog mode and digital mode. I want to spend a few minutes going into that, because it’s not something everyone’s come across. The idea is, it’s in analog mode now, and later it’s going to be in digital mode. Some clarification is definitely needed.

 

Alex Keesling:

In some ways, we’re following in the footsteps of classical computing, and it seems to be a cyclical thing. Digital computers are what we know about today. But back in the day, there was also the concept of doing analog computing. And even in electronics today, there are some things that can be done better with analog systems. When you look around, there’s now actually a renewed interest, for example, for machine learning and AI applications to use analog systems because they can have some built-in tolerance to errors and perturbations.

And what we’re doing with Aquila as of right now, the way to program it is not by breaking down an algorithm into very small components like single- and two-qubit gates, but by encoding the way that all the qubits are interacting with one another and then changing that over time. The way that users are programming Aquila is, it’s two phases: The first is defining the problem, and how are the qubits connected to one another, as I was explaining earlier. The second is, how do we make the qubits talk to one another to solve the problem, or even to do this in a chaotic way — for example, when we’re looking at some of this machine learning?

To do this, we use a real-time signal and an analog signal that controls the whole system. This is helpful for certain problems today, because it makes the performance better. So if you have, as we do today, only a finite number of qubits and a finite amount of time over which the quantum program can run, you want to use it as effectively as possible. And what we found is that for many of these problems in optimisation and machine learning, you’re better off encoding the problem directly in the connectivity of the qubits and then using this analog mode to effectively control the way in which imperfections can affect the outcome of your solutions.
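The "few control knobs changing over time" can be pictured as piecewise-linear waveforms. The sketch below is plain Python; the specific times and frequencies are made-up illustrative values, not Aquila’s actual hardware limits. It shows the typical shape of such an analog schedule: a drive amplitude ramped up and back down while a detuning is swept from negative to positive.

```python
def piecewise_linear(times, values):
    """Return a function t -> value that linearly interpolates the points."""
    def f(t):
        for (t0, v0), (t1, v1) in zip(zip(times, values),
                                      zip(times[1:], values[1:])):
            if t0 <= t <= t1:
                return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        raise ValueError("t outside schedule")
    return f

T = 4.0e-6  # total evolution time in seconds (illustrative)
# Ramp the Rabi drive up, hold, then ramp down (values in rad/s, illustrative).
rabi = piecewise_linear([0.0, 0.5e-6, 3.5e-6, T], [0.0, 15.7e6, 15.7e6, 0.0])
# Sweep the detuning linearly from negative to positive across the run.
detuning = piecewise_linear([0.0, T], [-8.0e6, 8.0e6])

print(rabi(2.0e-6), detuning(2.0e-6))
```

The entire "program" is just these global waveforms plus the atom geometry, rather than a long list of discrete gates.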

This is only the beginning, and as you said, we’re also looking into digital. Digital is the kind of operation you see for most quantum computers nowadays, where everything is broken down into single- and two-qubit gates. And it’s a very powerful level of abstraction that you can add. It turns out that with a neutral-atom platform and with a system like Aquila, you can use the same platform to run both kinds — analog and digital. So we’re working on enabling the tools for digital operation. And you can even mix and match and do a hybrid mode of operation.

In quantum computing, there’s a lot of talk about hybrid classical and quantum. Here, I’m talking hybrid quantum-quantum, using tools both from the analog side and the digital side. What we see here is an ability, for example, to prepare resource states using the analog evolution and then use the digital operations to extract much more out of them. This is going back to machine learning applications, for example, where we see expanding the use of it.

 

K. Karagiannis:

You know how when people pitch a movie, they say it’s something-meets-something — it’s a haunted-house story in outer space, and you get Alien. When I first heard about this, I was, like, “It’s D-Wave meets IonQ.” I don’t know how to wrap my head around what the whole process feels like, but it sounds like in a moment, when we talk about how to code this, it’ll make a little more sense. But before we leave the pure hardware realm, do you have any thoughts about scalability? Is there anything about this type of architecture that will help you achieve high qubit counts — like, going back to the D-Wave example, because it’s an annealer, it has, like, 6,000 qubits. It’s just the number that seems to be skewed, compared to gate-based. So, how does this progress? What does it look like going down the pike?

 

Alex Keesling:

Scalability is crucial. One of the big reasons we started this work is because of the inherent ability for this platform to continue scaling. As I was mentioning, we started with basically nothing a few years ago, and we were able to very quickly, with a small team, take this to tens and now hundreds of qubits. And one of the great advantages of using neutral atoms is that because we can pack them very tightly — because we can very efficiently use our controls — we start with a single laser beam that is split into thousands of spots. We can take the devices like the one we have right now on the cloud and, with minor modifications, go from the 256 that we have right now to 1,000, 10,000, and we continue going up in numbers.

That’s the first part of the equation — can you put in more qubits together? — and that’s a nontrivial ask for a lot of the platforms. If we had to use cryogenic systems, we would have a much harder time doing this. The footprint would get very large very quickly, and we would have to think about interconnects and so on. But we can avoid all of that because of the properties of neutral atoms.

 The second part for scaling is, how do you control this? The analog mode allows us to have few control lines, effectively. But even in the digital operation mode, we’re working on cool tech that allows us to implement quantum logic gates in parallel across the system very efficiently, and do this without needing to have a lot of cables going in, because we do everything optically.

This is another area where the hardware has a big advantage. If we have to put in a cable for every qubit that we have to control, this very quickly gets out of control — pun intended. But by being able to address things in groups and implement gates using just lasers, it’s a lot easier to control these systems at scale. That’s where we’re going, and that’s going to be one of the things that is going to set our platform apart.

 

K. Karagiannis:

Is it fair to say, then, that with this control model of the parallel systems, is it like interconnect is built in, because you’ll be able to control so many? It’s just built into your architecture, in a sense, instead of coming up with some new connection mode?

 

Alex Keesling:

There are going to be a lot of new ways of thinking about the OS and the programming to move the quantum information much more efficiently within the processor without having to incur very costly, for example, swap gates that are necessary in a lot of the fixed architectures. You can think about having different zones for this kind of processor and having, in some ways, like, intraconnects. And by the time you need to think about interconnects between different devices, you could be already at the 100,000, million, physical qubit scale.

 

K. Karagiannis:

At that point, why even bother? That’s pretty impressive. Any thoughts on error correction, and what that would look like in the future? It might be early —

 

Alex Keesling:

It’s never too early to think about error correction. We take this very seriously, and we’re looking at, how do we enable error correction in a way that is using the hardware to its best ability? There are a lot of cool ideas that we’re developing for using the ability to reprogram effectively the system and the geometry in real time to implement error correction in a more efficient way.

I’d like to also call out some very cool results that came out as a preprint — not yet published — from work done primarily at Harvard, but even with some people from QuEra on demonstrating very high performance in entangling gates, getting past some of the thresholds necessary to take error correction seriously. We expect that there’s more to come in the next few years, where you’re going to see quantum error correction become a reality not just as a onetime demonstration, but also with an approach that can scale to these very large numbers of physical qubits.

 

K. Karagiannis:

And for now, the way to interact with this machine — there are a couple, so I’d like to talk about them. First, there’s Bloqade.

 

Alex Keesling:

Bloqade is a wonderful open source package that we created. It allows users to understand how to program Aquila and to then use it to program Aquila directly. It is a numerical emulator of quantum systems, like our processor. It’s a great way for users to start getting acquainted with how to write algorithms, to test their algorithms with the kinds of sizes that a classical computer can still handle, and then transfer those developments directly to the hardware and test it on the hardware directly.

 

K. Karagiannis:

You’re able to select Aquila as a back-end target.

 

Alex Keesling:

Yeah — it’s one of the programming modes.

 

K. Karagiannis:

So people can visualise until they go play with it themselves. How would this compare to, let’s say, interfaces they’ve seen in the past — either, for example, like Composer for IBM or something — how does it look different?

 

Alex Keesling:

The interface feels, in many ways, similar to what you would use with other providers. We’re trying to make it as user-friendly as possible. The thing that’s different is the way of programming itself. The way you send instructions, as we were talking about earlier, is different, because what you’re going to be doing with Bloqade is, first, define, what is the geometry of the processor that you want? That’s going to be the first set of instructions you want — this is a set of coordinates for where, eventually, the atoms are going to be in some plane floating inside a vacuum chamber. And the second part is, how are the few control knobs going to change over time?

And that is how you are programming your algorithm. It’s these two conceptually simple pieces of code you need to put together, and then you can see the outcomes, build your own analysis pipelines, and start integrating this into a hybrid quantum-classical optimiser.
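Put together, the two-phase program Alex describes might look something like the following hypothetical job payload. The field names here are invented for illustration and are not Bloqade’s or Braket’s actual schema; the point is the shape of the program: a register geometry plus a handful of time-dependent control values.

```python
# Hypothetical job payload illustrating the two-phase structure described
# above: (1) register geometry, (2) time-dependent control "knobs".
job = {
    "register": {
        # Atom coordinates in micrometres -- this encodes the problem graph.
        "sites_um": [[0.0, 0.0], [0.0, 6.0], [6.0, 0.0]],
    },
    "schedule": {
        # Piecewise-linear control waveforms sampled at the same time points.
        "times_us": [0.0, 1.0, 3.0, 4.0],
        "rabi_mhz": [0.0, 2.5, 2.5, 0.0],
        "detuning_mhz": [-5.0, -5.0, 5.0, 5.0],
    },
    "shots": 100,
}

# Basic sanity checks a client library might perform before submission.
assert len(job["schedule"]["times_us"]) == len(job["schedule"]["rabi_mhz"])
assert len(job["schedule"]["times_us"]) == len(job["schedule"]["detuning_mhz"])
assert job["schedule"]["rabi_mhz"][0] == 0.0 and job["schedule"]["rabi_mhz"][-1] == 0.0
print("job validated:", len(job["register"]["sites_um"]), "atoms")
```

After submission, the returned shot data would feed the user’s own analysis pipeline or a classical optimiser loop, as described above.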

 

K. Karagiannis:

And will the software be changing for digital mode? Will there be new features being added?

 

Alex Keesling:

We’re going to be expanding the tools we have available through Bloqade to incorporate new ways of programming. And as we add more and more controls and we enable them to users, we’re going to have Bloqade track the development of the hardware, and in some ways even lead the development of the hardware, so users can get ready to take the best advantage of the tools we’re going to be providing.

 

K. Karagiannis:

Are there other ways to submit tasks to Aquila? I know it’s available on Braket.

 

Alex Keesling:

That’s right. This is a partnership we’re incredibly happy with. We connected Aquila to Amazon Braket in November of last year, and since then, we’ve seen very heavy usage. We’re seeing lots of customers come through and engage with the device. There’s the Braket API, which users can access to write their own algorithms and submit their jobs.

It’s been great to see how quickly people have adopted it and how far-ranging the set of users that we’ve seen come through has been. As I was saying, this goes everywhere — the automotive industry, the finance industry, logistics. We’re seeing academic customers engage with it. And one of the great things is seeing how all these sets of users are using the device. And when they reach out to us and we set out partnerships or we support their developments, it also helps us see, where in the near term can we have the most value with the hardware, and how should we redirect our tech development efforts to support more and more customers?

 

K. Karagiannis:

You bring up a good point about near-term value there.

Before we close, I want to get a sense — this year, the economy is a little wonky. Some customers are trying to figure out how they can get the best bang for their buck when experimenting with quantum. And some of them choose quantum-inspired instead. What do you see as a way to return real value — get some ROI — on experimentation? Are there particular use cases you think will show promise sooner than later on your particular hardware?

 

Alex Keesling:

Our users, for the most part, understand that quantum is an incredibly promising technology — that it’s a question of when it gets there, not if — and that what we’re forming are long-term partnerships that allow us to shape the capabilities to enable users’ applications much more. There is an understanding that more is yet to come, and that the quantum computing space is evolving rapidly, and that capabilities are advancing rapidly. But in the near term, what we’re seeing is users excited about testing, for example, machine learning, where, because of the larger expressivity of the quantum systems and quantum processors, we expect to see some interesting results with quantum machine learning.

At the same time, it’s important to recognise that the best code is written by trial and error. And what users are asking us for is a different way to approach quantum that is hands-on, that is getting access to large processors so they can write their own algorithms, they can test them, they can see what works, what doesn’t.

And in the areas I was mentioning — in machine learning and optimisation and simulation of other systems — this current technology, this analog operation and the expanded capabilities for a hybrid analog/digital are exactly what we’re hearing from our customers they want. And they want to see not just, what is the performance now, but by being able to access a system with hundreds of qubits, they can see the progression of, how well does my algorithm work in small problems — 10 qubits or so? How does it work with 50? How does it work with 100, with 200? That allows them also to extrapolate and work with us to tweak things so we can bring the best possible capabilities to them as quickly as possible.

 

K. Karagiannis:

That’s a great point. Extrapolation is a lot easier when you have a lot of points to map and draw that line through.

It definitely is a matter of when, not if. I’m looking forward to seeing more from this machine, and I hope listeners will visit Braket and play with Bloqade and see if they can write their own algorithms on here too. Thanks a lot for joining — I enjoyed this.

 

Alex Keesling:

Thank you, Konstantinos. This was great.

 

K. Karagiannis:

Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap: QuEra uses a different approach to quantum computing: building systems based on neutral-atom technology. The company has a 256-qubit machine, Aquila, publicly accessible on Amazon Braket. You can also pay for direct access with higher availability.

The system uses a field-programmable qubit array, or FPQA, which allows the qubits to be mapped to problems in the analog mode the system runs in.

Since recording the interview portion of this episode, I was able to visit the company’s headquarters and see both Aquila running and future systems being built. I could see the qubits represented as laser dots on a display. A great example of how Aquila works was an optimisation problem with my own home borough of Manhattan. How would you optimise the location of a shop by considering other sites on the island? A map was created with nodes representing locations. The nodes were then transferred to the atoms, and you could see glowing dots representing points on the familiar shape of Manhattan, almost like atomic ASCII art. The excited atoms could then find and reveal the answer points. I hope that mental image helps you get a sense of how Aquila can work.

The system is being used for other types of optimisation, along with ML tasks like classification, regression and prediction. It’s also used for simulation tasks like working with exotic states of matter. QuEra will offer a digital mode in the future where the qubits will be used in gate operations.

In addition to using Braket to access Aquila, you can experiment with this type of algorithm coding via the Bloqade test-bed software.

That does it for this episode. Thanks to Alex Keesling for joining to discuss QuEra’s quantum computer, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow @ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.
