Transcript | Solving Energy Distribution Challenges — with Atom Computing

We can easily extrapolate that quantum computing will excel at large optimisations that challenge classical systems. One such mammoth problem is power-grid energy distribution. How has one neutral-atom system started to tackle this already? Join host Konstantinos Karagiannis for a chat with Rob Hays from Atom Computing, where they discuss a partnership with the U.S. Department of Energy.

Guest: Robert Hays, Atom Computing

Konstantinos Karagiannis:

Quantum computing will excel at large optimisations that challenge classical systems. One such mammoth problem is power-grid energy distribution. How has one neutral-atom system started to tackle this already? Find out in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is the CEO of Atom Computing, Rob Hays. Welcome to the show.

 

Rob Hays:

Thanks for having me.

 

Konstantinos Karagiannis:

Tell us about how you found your way to quantum computing. I know there’s definitely some chip history in your past.

 

Rob Hays:

My background in computer engineering is a little different from that of a lot of the founders and early pioneers in quantum computing. I got my degree at Georgia Tech and went to Intel and worked there for many years doing microprocessor product line management. Eventually, I was the executive in charge of the data center silicon portfolio for Intel’s data center business, basically leading the roadmaps for the server processors.

I had the benefit of working with a lot of the large cloud service providers around the world, and server OEMs, as well as high-performance compute centers, and saw the world of computing from the classical computing side. It was clear that over the last 10 years or so, with the advent of GPUs and AI accelerators and other things like that, the world had moved to a hybrid compute–type architecture. Quantum computing was on the horizon and seemed like a clear next wave of computing that could fit into that existing hybrid compute cloud infrastructure that’s already been built out.

After Intel, I went to Lenovo as chief strategy officer for their data center business for a few years and got involved with Atom Computing as part of Lenovo. I joined the board of directors of Atom Computing, and then, a few months later, I was invited to join full time as CEO, and I jumped at the chance to do that. That’s how I ended up at Atom. Luckily for us, we’ve got a team of folks who are world-class quantum physicists and engineers and know how to build quantum systems. I know how to run a computing business and plug those into the infrastructure that exists. My skills are complementary, but quantum is something I’m learning on a daily basis. It’s been a lot of fun.

 

Konstantinos Karagiannis:

Of course, the whole classical world isn’t going anywhere. The two are going to work hand in hand forever, pretty much.

Tell us a little bit about Atom Computing — what you guys do there, what your goals are.

 

Rob Hays:

Atom Computing is a quantum computing hardware platform company. We’ve been around for over five years now. We were founded in Berkeley, California. We’ve since expanded our operations into Boulder, Colorado. The team is split about 50/50 between those two locations. We’re building our quantum computers out of optically trapped neutral atoms, what a lot of people consider a newer modality. The technology has been around for a couple of decades in academia, and we and a few other companies have started to commercialise that in the last few years in various applications.

We’re focused on universal gate-based quantum computers. We chose neutral atoms not only because the team has expertise in this technology but also because we think it’s quite scalable relative to some of the earlier modalities like superconductors and trapped ions. What we’re trying to do as a company is simply take that technology, scale it up, and make high-quality qubits with coherence, fidelity, gate speed, all the performance metrics you expect to see in a competitive quantum computer. And that’s what gets us up in the morning — driving toward that mission every day.

 

Konstantinos Karagiannis:

We’ll touch on scalability. But first, can you explain for our listeners the difference between neutral atom and trapped ion? They might sound similar to people who aren’t used to it.

 

Rob Hays:

An ion is an atom. It’s just a charged atom. They’re close cousins. What’s different is how you capture the atom or the ion — how you trap it and how you manipulate it. The energy levels and the spins and things like that are fairly similar. But the biggest difference is, in a trapped-ion approach, you’re actually building a trap. It’s like a chip that generates electromagnetic fields, and the ion is suspended in free space within those fields.

In a neutral-atom system, you’re trapping the atoms. They’re not charged. They’re neutral atoms, meaning they’ve got equal numbers of protons and electrons, so no net charge. They’re trapped with optical tweezers — basically, tightly focused beams of light. They’re also in free space, but we’re using light instead of electromagnetic fields from a chip, and we’re doing this in a vacuum chamber. There is no chip. There’s no trap device.

That creates another key difference: with ion traps, there’s a finite number of ions you can load into each trap, and once you hit the limit of how many ions you can fit in that trap, you have to connect multiple traps together with some kind of a bridge or an interconnect — typically a photonics-type interconnect. Neutral-atom systems have the potential to scale up to hundreds of thousands and even millions of atoms or qubits in one single vacuum cell. The need for these interconnects and bridges between modules is much further out in time for neutral-atom systems than it is for trapped ions, and that’s part of what gives neutral atoms their scaling potential.

 

Konstantinos Karagiannis:

It’s like a three-dimensional array you’re able to create in there.

 

Rob Hays:

You can create two-dimensional or three-dimensional arrays. Today, our systems are two-dimensional, but we’re already working on prototypes for three dimensions.

 

Konstantinos Karagiannis:

Would you say the optical tweezer inherently leads to a higher-fidelity qubit, with less noise?

 

Rob Hays:

I don’t know that the tweezer is necessarily what’s going to lead to higher fidelities. You need to get noise out of the system. Neutral atoms are good here because they’re relatively immune to outside noise — they’re not charged particles. But there’s a lot of engineering work that goes into getting fidelities higher because there are lots of sources of noise. You want to get all of those out of the system as much as you can and have very high-precision control of the light going into the platform — precise amplitude, frequency, phase, and all that. It’s all about precise control of light in order to get fidelities up. That also helps you scale up to larger numbers of qubits. The more precision you have, the more control and the more opportunity you have to scale up with high fidelities.

 

Konstantinos Karagiannis:

The reason I wanted to have you guys come on now is some news you’ve been making recently: a collaboration with the U.S. Department of Energy’s National Renewable Energy Laboratory, or NREL. Can you tell us about this collaboration, how it got started and what it entails?

 

Rob Hays:

It’s an exciting collaboration for us for a number of reasons. NREL has a high-performance compute platform they call ARIES where they’ve created a digital twin of the U.S. energy grid, and they’re able to use this digital twin to simulate scenarios or choices that could be made in optimising the grid. They can bring on new sources of energy. They can bring on new demands or loads of energy. They can bring in new pathways and lines and things like that in this digital twin. They can test out scenarios and see how the grid might respond to those scenarios or choices that get made.

With NREL, we’ve integrated our quantum computing technology stack with this ARIES platform. They, for the first time, have this hybrid HPC ARIES platform with quantum offload capabilities, so they can make calls to our quantum system and explore computations that just wouldn’t be tractable on the current platform. The other thing that is exciting about it is, this is a good example of a public-private partnership where we’re bringing expertise to the table in quantum computers and how to programme them. They’re bringing expertise to the table in how to manage energy grids and the complexity that entails. By bringing both teams together, we’re able to collaborate and try to move the ball forward on solving very tough, complex problems that just weren’t solved before. It’s a good model.

The other thing that’s exciting is, network-optimisation problems in general, which this energy-grid use case is an example of, are a good fit for early adopters of quantum computing. And there are lots of network-optimisation problems across lots of different sectors in government and industry that could take advantage of similar results and algorithms — telecommunications networks, transportation networks, supply and logistics networks. There are networks across many sectors, and a lot of the learnings we get out of this energy-grid simulation and modeling could apply to other sectors as well. For all those reasons, we’re pretty excited about this partnership we have with NREL.

 

Konstantinos Karagiannis:

Some of our listeners might not know what a digital twin is. What’s the scale of a typical digital twin? You hear a lot about toy problems in quantum computing use cases. Could you frame that for the listeners?

 

Rob Hays:

In general, a digital twin is a simulated copy in the virtual realm of some physical system. In the case of NREL, they’ve basically taken the energy-grid network, which has lines and nodes and switches and transformers and all these things, and they’ve put that into a simulated environment on a high-performance compute cluster. Other examples would be like if you’re Lockheed Martin or Boeing and you’re building an airplane, you would build a digital twin of the airplane that you could use to simulate how it might fly in different scenarios, or even for maintenance or other types of records, so you could know how it might be aging over time or respond to upgrades and things like that. Digital twins have been around for a while.

The scale of the one NREL has is the scale of the U.S. energy grid. That’s their mission: to make sure they’re understanding how to best optimise energy production and transmission in the United States from a research perspective — they’re not the operators — and then take those learnings and that research and transfer that to the power companies and the power-grid operators so they can better improve the way they manage their infrastructure, upgrade the infrastructure, produce energy and those kinds of things. The scale of ARIES is trying to simulate the overall U.S. energy grid and looking at different aspects of that.

What they do on the quantum platform will evolve over time. It’ll start probably small, asking some straightforward questions around, like, power-line failure and how they might reroute power in a more optimal way and things like that. But they aspire to ask bigger and harder questions over the long haul as these quantum systems become more capable and they learn how to programme them and get results out of them.
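To make the rerouting question concrete, here is a minimal sketch of how a problem like that can be written as an optimisation over binary choices. Everything in it is invented for illustration (the backup lines, capacities, loss costs, stranded demand and penalty weight), and the brute-force search simply stands in for the spot where a hybrid workflow would offload the problem to a quantum solver such as QAOA. It is not NREL’s or Atom Computing’s actual formulation.

```python
# Toy "reroute power after a line failure" problem, written as a QUBO-style
# objective over binary choices and solved by brute force. All numbers are
# invented; the brute-force search is a placeholder for a quantum offload
# call (e.g., QAOA) in a real HPC-plus-quantum workflow.
from itertools import product

capacity = [40, 35, 25, 20]   # MW each backup line could carry (hypothetical)
loss_cost = [3, 2, 4, 1]      # relative cost of energising each line (hypothetical)
demand = 60                   # MW stranded by the failed line (hypothetical)
penalty = 10                  # weight on the "serve the demand" constraint

def objective(x):
    """Loss cost plus a quadratic penalty for mismatching the stranded demand."""
    supplied = sum(c * xi for c, xi in zip(capacity, x))
    cost = sum(w * xi for w, xi in zip(loss_cost, x))
    return cost + penalty * (demand - supplied) ** 2

best = min(product([0, 1], repeat=len(capacity)), key=objective)
print("energise lines:", [i for i, xi in enumerate(best) if xi],
      "objective:", objective(best))
```

With four binary variables this is trivial classically; the interest is when a contingency couples thousands of such decisions, which is the regime the quantum offload described above is aimed at.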

 

Konstantinos Karagiannis:

So folks can visualise it: the twin does have the number of parameters to simulate the actual grid, let’s say, but you might be zeroing in on some aspect of it as part of an experiment.

 

Rob Hays:

That’s right.

 

Konstantinos Karagiannis:

That makes sense. This is an interesting use case because you want to be able to extrapolate to other, similar types of use cases.

 

Rob Hays:

Exactly. If we can optimise how energy flows through a grid, why would we not be able to optimise how bits flow through the internet or telecommunications networks, or how cars flow through roads and things like that?

 

Konstantinos Karagiannis:

What would make this collaboration unique? Has anything like this ever been done before?

 

Rob Hays:

A couple of things make it unique. To our understanding, it’s the first time anyone, at least in this energy use case, has done an integration between an HPC system and a quantum system. From a technical hardware-platform and software-API perspective, we’ve blazed some new ground there. We also think it’s a good public-private partnership.

Certainly, this is not the first public-private partnership in the world, but it’s early days in quantum computing, and there are not a lot of examples of national labs doing research projects at the application level like this with quantum computing companies. It’s early days on that, with the industry being in its infancy, but we think it’s a good model. We’d like to replicate that in other areas over time, and hopefully, we’ll have a lot of success in this one, and that success will inspire the next wave of programmes.

 

Konstantinos Karagiannis:

Is there anything you could tell us about what this collaboration looks like, such as what type of hardware you’re using on your end to run some of these simulations?

 

Rob Hays:

The group we’re working with at NREL is at their Flatirons campus, which is just down the road from our Boulder office. We happen to be neighbors, which is great. That allows for easy collaboration when we’re physically close to each other. The first level of integration is at the software level: our APIs and emulators and things like that have been integrated with their platform. That will give them access to our hardware platforms. The platform we have today is a prototype platform in California, and then, when they become available, we will give them access to the next-generation systems we’re building in Boulder as well. But that’s a future statement.

 

Konstantinos Karagiannis:

For folks who are used to talking in qubits, how many neutral-atom qubits would be used in something like this?

 

Rob Hays:

That’s a great question for NREL. How many do they need for their use cases? More is always better, and the more you have, the more complex a system you can build. The current platform we have is 100 qubits. The next generations are quite a bit larger than that. We haven’t announced publicly yet the specifications for those systems. We will be doing that in the coming weeks and months.

But long-term, everyone’s got their eye on fault-tolerant quantum computing. As your listeners are aware, we are in the NISQ era — the noisy intermediate-scale quantum computing era — and optimisation problems like this are good candidates to explore in that NISQ era. But ultimately, we want to get to large-scale fault-tolerant quantum computing. That’s the long-term mission of our company. That’s one of the reasons partners like NREL, but also others, are very interested in working with us because they can see our roadmap, they can see the progress we’re making and how fast we’re able to scale up these qubits. And that gets people very excited about working with Atom Computing. That’s the plan moving forward.

 

Konstantinos Karagiannis:

Would you say running this kind of experimentation has a bidirectional flow of information when it comes to improvements and things? Are there benefits they’re getting, benefits you’re getting, from doing something like this?

 

Rob Hays:

Absolutely. As I said, one of the reasons a public-private partnership like this makes a lot of sense is that we bring our expertise, they bring their expertise, and together we can do things we couldn’t do separately. The learning we get from that is just as important, because we learn from working with customers and partners like NREL: How do they use the system? How do they programme it? What results do they get? Where do they see performance bottlenecks or performance gains relative to competitive platforms?

That learning helps us improve our products and make different design choices as we move forward because, just like any product company, you want to make the best products you can, products that delight customers. So getting that learning from them is invaluable to us. And the learning they get from working with our quantum applications experts and software teams gives them a jump-start in figuring out how they can take advantage of quantum computing to solve some of these complex network problems they’d like to solve. The bilateral learning is an important part of this.

 

Konstantinos Karagiannis:

You see a potential for improvements all up and down the stack as a result?

 

Rob Hays:

That’s the goal. Most of our customer collaborations have been private collaborations like this, and the benefit we get is very much what I just described — learning, making improvements up and down the stack. At some point in the near future, we’ll be opening up our platforms for more general-purpose public access, and by having done these private partnerships, we should be able to offer a much better product and service than we otherwise would have been able to if we had not gotten that learning and made those improvements.

 

Konstantinos Karagiannis:

When you’re trying to go more public, are you envisioning a cloud interface where people come straight to you or one of the providers?

 

Rob Hays:

That’s how we’ve seen not just quantum computing but computing in general migrate over the last decade from on-premises, proprietary infrastructure toward public cloud services, where people can spin up and spin down instances on demand and pay for what they need as they go. We’ll just fit right into that existing infrastructure and business model. We’ll be able to offer our platforms directly to end customers who want to engage with us in a direct relationship. We’ll also be announcing partners in the near future — major name-brand cloud service providers that will have our systems available through them as well.

 

Konstantinos Karagiannis:

Do you have emulators that run that you envision people learning on first before they actually touch hardware?

 

Rob Hays:

We do. That’s where we got started with our customers early on — an emulator, so they can take an algorithm. They can write the code or just take code they’ve already written for someone else’s platform. They can compile it to our system. They can run it on the emulator, and it can tell them: Is it compiling correctly? Have you written the algorithm in a way that it’ll fit in our system or the specs of our system? Are there optimisations you could do to improve the way you’ve written the circuit? That’s the benefit of the emulator — it gives you some hints as to how you can write a better programme that will run best on the hardware.
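To make that compile-and-emulate loop concrete, here is a minimal sketch using Qiskit purely as a stand-in, since Atom Computing’s own SDK isn’t detailed in the episode. The basis-gate list is a placeholder target, not their real native gate set.

```python
# Generic "write a circuit, compile it for a target, run it on an emulator"
# loop. Qiskit is used only as a stand-in framework; the basis gates below
# are a placeholder target, not Atom Computing's actual native gate set.
from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

# An algorithm written for "someone else's platform": a simple Bell state.
qc = QuantumCircuit(2)
qc.h(0)
qc.cx(0, 1)
qc.measure_all()

# Compile to the placeholder gate set and inspect the result. Depth and gate
# counts are the kind of hints an emulator pass gives about whether a circuit
# fits a machine's specs or could be written more efficiently.
compiled = transpile(qc, basis_gates=["rz", "sx", "cx"], optimization_level=2)
print("depth:", compiled.depth(), "ops:", compiled.count_ops())

# Run on a local emulator before ever touching hardware.
counts = AerSimulator().run(compiled, shots=1000).result().get_counts()
print(counts)
```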

 

Konstantinos Karagiannis:

With a project like you have going right now, I’d imagine you’re getting a lot of real-world torture testing — real results.

 

Rob Hays:

The objective is to get some real-world testing, and things don’t always have to go well to get learning out of it. You learn from the failures as well.

 

Konstantinos Karagiannis:

What other collaborations does Atom Computing have going on right now?

 

Rob Hays:

We’re not announcing any other collaborations today, but there is a mix of collaborations going on with government entities in the U.S. and around the world as well as enterprises. What we see today is governments leaning in through the Department of Energy, the Department of Defense. There are a number of very capable teams that have been spun up that are either already producing or exploring quantum applications or getting started to go do that. A lot of those entities have been very eager to work with us, and we’ve been eager to work with them, so that’s a good source of collaborations.

We’ve announced a DARPA collaboration we have in the Underexplored Systems for Utility-Scale Quantum Computing (US2QC) programme, so that’s progressing well. But we also see a number of enterprise customers who are wanting to be early adopters of quantum computing and get an early-mover advantage or first-mover advantage in that space.

These are banks and automotive companies and transportation and aerospace companies — large, name-brand companies — that typically, in their CTO office, have one, two, three, four people on a team who are QIS professionals looking at algorithms and applications that are pertinent to the mission of the company they’re working for. And they’re already using time on a variety of quantum platforms that are available publicly today and starting to figure out: How can they get benefit? When can they get benefit? How big do the systems need to be? They’re providing feedback and requirements to us and our competitors to help us make sure we’re hitting the right design targets and that kind of stuff.

We’re encouraged by the mix of enterprise and government customers that are out there. We’re definitely supply-limited and not demand-limited right now in quantum computing. There are a lot of people that want to programme quantum computers. They want bigger ones and more capable ones, and we and others are out on a mission to try to make sure that happens.

 

Konstantinos Karagiannis:

You hinted earlier that while you have 100 qubits now, you’re not quite ready to publicly state what the next system will have. But could you give some sense of what a roadmap looks like for the months or years ahead?

 

Rob Hays:

If we want to get to fault-tolerant computing in a time frame that’s reasonable and that’s going to keep people’s attention, then we need to be scaling by more than a few dozen qubits each generation. If we scaled by a few dozen qubits each generation, or even just linearly, it would take us 20 years or something like that. I don’t think that’s what people have in mind. We would love to see us get to fault-tolerant quantum computing within this decade, before 2030. We think that’s definitely feasible. But if you do the math, you’re going to have to add hundreds or thousands or tens of thousands of qubits each generation.

Our roadmap looks a little bit more like exponential scaling than linear scaling, so we’re trying to make sure we’re multiplying the number of qubits by a substantial factor each generation. With the architecture we’ve chosen, with the neutral atoms in a single module within a vacuum cell, controlled by pulses of light that can be easily divided using optical devices, we think we have the technical roadmap to get there. We have a lot of hard engineering work ahead of us to make it happen and improve it. But the science risk has largely been retired over the last couple of decades, and now it’s just the hard engineering work on the software and the hardware to make that happen.

That’s what people should expect from us — something more like an exponential increase every generation. We’ll see what our pace is between generations. We have our objectives, but until we get there, we won’t know whether we’re on pace for what we want to hit.
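As a rough illustration of the point, here is some back-of-the-envelope arithmetic under assumed numbers: a 100-qubit starting point, a target on the order of the millions of atoms per vacuum cell mentioned earlier, and made-up per-generation growth rates. None of these figures are Atom Computing’s actual roadmap.

```python
# Back-of-the-envelope: generations needed to go from ~100 qubits to ~1,000,000
# under linear vs. exponential scaling. Start, target and growth rates are
# illustrative assumptions only, not Atom Computing's roadmap.
import math

start, target = 100, 1_000_000

per_gen = 50  # "a few dozen" qubits added per generation
linear_gens = math.ceil((target - start) / per_gen)
print(f"+{per_gen} qubits per generation: {linear_gens:,} generations")

for factor in (2, 5, 10):  # multiply the qubit count by this each generation
    gens = math.ceil(math.log(target / start, factor))
    print(f"{factor}x per generation: {gens} generations")
```

Even with generous assumptions, additive scaling never reaches fault-tolerant scale on a useful timeline, while a multiplicative factor gets there in a handful of generations, which is the shape of roadmap being described.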

 

Konstantinos Karagiannis:

As with anything else, if you’re cramming more qubits into one space, all the control circuitry goes up in one form or another.

 

Rob Hays:

Unlike superconductors or something where you have one RF channel per qubit, we’re able to address our qubits in parallel. So we can address rows or columns of qubits in parallel, and over time we might be able to do full planes at a time and things like that. The RF channels we need are tied more to the parallelism we get out of the platform. We’re talking dozens of RF channels for a system, not hundreds or thousands or tens of thousands.
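A quick, illustrative count shows why that matters for control hardware. The assumption of one channel per row and per column is a simplification for the sake of the arithmetic, not a description of Atom Computing’s actual control scheme.

```python
# Control-channel scaling for an n x n atom array: per-qubit control needs
# n*n channels, while row/column addressing needs on the order of 2*n.
# The "one channel per row or column" accounting is a simplification, not
# Atom Computing's actual control architecture.
for n in (10, 32, 100):
    qubits = n * n
    per_qubit_channels = qubits   # one dedicated channel per qubit
    row_col_channels = 2 * n      # one per row plus one per column
    print(f"{qubits:>6} qubits: {per_qubit_channels:>6} per-qubit channels "
          f"vs ~{row_col_channels} row/column channels")
```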

 

Konstantinos Karagiannis:

That definitely helps with scaling. And what would you see as an obstacle to reaching fault tolerance? Is it error correction?

 

Rob Hays:

There are a lot of things. First, you need a lot of physical qubits, because there’s this ratio of physical qubits that get mapped down to each logical qubit, and every architecture will be a little different — is it 100:1 or 1,000:1?

 

Konstantinos Karagiannis:

That’s what I was hoping to get an opinion on.

 

Rob Hays:

You need a lot of physical qubits. You need mid-circuit measurement in order to be able to take readings as you’re progressing through a circuit and detect errors and retry or redo things or correct things as you go. Then you’re going to need the error-correction algorithm itself: what do you do if you do detect an error or a potential error? You’re going to need long coherence times so that as you’re progressing through a deep circuit and performing error correction, which is a feedback loop to a classical system, you’re able to hold that quantum information for a very long time.

You’re going to want high fidelity so you’re not getting a lot of errors in the first place. You’re going to want great gate speeds and readout speeds so that as you’re doing large, deep circuits, they get performed in a reasonable amount of time. You want parallelism for the same reason, so you get more done in one time step.

There are lots of elements of performance that will go into fault-tolerant computing, and we’ll have to not just check the boxes on all of them but also improve all of them as we go every generation to make sure that we’re continuing to deliver the performance out of the systems that customers are expecting — not just accurate results, but accurate results in a small amount of time.
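For a sense of scale, here is the arithmetic implied by the physical-to-logical ratios floated above; the logical-qubit counts are arbitrary examples of machine sizes, not figures from the episode.

```python
# Error-correction overhead using the 100:1 and 1,000:1 physical-to-logical
# ratios mentioned above. The logical-qubit counts are arbitrary examples.
for logical in (100, 1_000):
    for ratio in (100, 1_000):
        print(f"{logical:>5} logical qubits at {ratio:>5}:1 -> "
              f"{logical * ratio:,} physical qubits")
```

That range, tens of thousands to a million physical qubits, is why the potential for hundreds of thousands or millions of atoms in a single vacuum cell, mentioned earlier, matters for the fault-tolerance roadmap.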

 

Konstantinos Karagiannis:

That makes sense. And of course, these are the challenges everyone’s facing. I was hoping to hear if there’s any particular edge that neutral-atom has for any of those areas in particular.

 

Rob Hays:

We’ve demonstrated record coherence time — tens of seconds on every qubit. That’s different from other modalities. That’s a real distinguishing characteristic for neutral atoms. At tens of seconds of coherence, it’s just so ridiculously long, it’s a don’t-care. Other modalities are trying to improve that and catch up on that, but we’ve already got that one. I can tell you our second-generation systems are even better. Scaling up the number of qubits, we’ll have to show the world we can do it, but I can tell you we’re doing it, and we’re excited about that. That’s an advantage — the scalability.

Fidelity is an area where the neutral-atom community is catching up because we haven’t been at it as long. Superconductors and trapped ions have developed pretty good qubits from a fidelity perspective. We’re playing a little bit of catch-up there, but I don’t think there’s any architectural disadvantage or anything other than just cleaning up the noise and making sure these systems are high-quality and reliable in order to go do that. That’s just work that needs to be done.

Error-correction algorithms, mid-circuit measurement, those kinds of things, everyone’s on equal footing. We’ll all have a different way we go about it, given the architecture and the sources of the noise, but I’m not sure anyone has an advantage or disadvantage there. That’s just work ahead of us that needs to be done.

 

Konstantinos Karagiannis:

So far, it sounds like the next-generation machine is going to be quite an announcement.

 

Rob Hays:

We’re excited about it. It’s not too far off on the horizon now, so we’ll definitely give you a call back and see if we can update your audience once we are ready to do that.

 

Konstantinos Karagiannis:

You all heard it here first: There’s something coming.

Rob, thanks so much. I appreciate your coming by to tell us about all this.

 

Rob Hays:

It was a pleasure. Thanks for having me.

 

Konstantinos Karagiannis:

Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap. Neutral-atom quantum computers differ from trapped-ion systems in that their atoms carry no charge and are trapped with optical tweezers in free space in a vacuum. Without a trap device like the ones used in trapped-ion systems, neutral-atom quantum computers should be able to fit many qubits in one system. How many? Maybe millions. And they’re relatively noise-free qubits too.

The Atom Computing system available now has 100 qubits, with a next-generation device on the horizon. Atom Computing has been working on a collaboration with the U.S. Department of Energy’s National Renewable Energy Laboratory, or NREL. The lab has a digital twin of the U.S. energy grid, which simulates nodes, switches, transformers and so on using an HPC cluster. Atom Computing has added its quantum system to the mix. The hope is to come up with ways to optimise the grid and energy delivery. It’s a complex problem, well suited to quantum computing, and the learnings from the project should be applicable to other types of networks such as telecom, transportation, supply and logistics.

The problems solved with the digital twin are still only subsets of a large beast, but the hope is to expand the types of solutions as more qubits become available. Atom Computing also hopes to provide access to its systems via the cloud soon so you’ll be able to take a crack at them without wearing an NREL lab coat.

That does it for this episode. Thanks to Rob Hays for joining to discuss Atom Computing, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.
