Classical computing cannot simulate more than about 50 qubits. What does it mean that we now have a quantum computer with, gasp, 100 qubits? ColdQuanta found a way to beat giants like IBM to this amazing feat, and they did it with a new approach that may lead to smaller quantum computing systems that could be rack-mountable one day. Like a reverse microwave, the new Hilbert computer uses lasers to slow down particle vibrations to make them “cold” and able to act as qubits. Hilbert even touts low errors and high connectivity.
Guest Speaker: Paul Lipman, President of Quantum Computing at ColdQuanta
The Post-Quantum World on Apple Podcasts.
Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organisations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.
Classical computers struggle to simulate more than 50 qubits. This means that a quantum computer with more qubits than that could potentially bring us quantum advantage. How about a machine with twice as many qubits?
A hundred qubits are here, thanks to ColdQuanta. Find out more about their amazing computer and how soon you can get a crack at running applications on it in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you will join each episode as we explore the technology and business impacts of this post-quantum era.
Our guest today is the president of quantum computing at ColdQuanta, and this company made some splashes in the news recently, so I wanted to have him come on. Welcome, Paul Lipman.
It’s great to be here.
ColdQuanta is a quantum technology company. We’ve been in business since 2007. The company was actually founded out of work done at CU Boulder, where our founder was part of the team that created the world’s first Bose-Einstein condensate. So, we’ve been developing and manufacturing quantum technology since then. That applies both to quantum supply chain and quantum research, and now we have a quantum computing business, and that, I think, is the latest news that you were referring to in your opening comments.
It seemed like two companies appeared — not out of nowhere, obviously; you’ve been around for a long time — but all of a sudden, it was two companies claiming 100 qubits — Atom Computing and you guys. Do you want to talk about how your approach to 100 is a little different from theirs?
We are planning to release our first quantum computer — named Hilbert, after the famous David Hilbert — at the end of this year, and that will be a 100-qubit, cold atom-based, gate-based quantum computer. We’ll make that available initially through our own cloud platform, and then, later on in 2022, through some of the public cloud platforms as well, and we’re seeing a lot of excitement and engagement from potential customers and partners around that platform.
I think one of the interesting aspects of cold atom as a modality is, in some ways, it’s the new kid on the block. There’s been a lot of activity in the superconducting space and the trapped-ion space, and cold atom is the latest one to take the stage, but for a whole variety of reasons that we can get into, it is the most promising modality to truly, over time, achieve large-scale, error-corrected fault tolerance, so that’s why you’re seeing so much interest and activity in the area.
As to the specific differences between our approach and Atom Computing’s approach, we’re using different atoms — I think that’s at the core of the difference — but ultimately, you’ll have to wait for those devices to come online and run circuits and prove themselves in the market to really dig into the specific differences.
Yes. They’re working with the optical-tweezer approach, which is starting to appear in certain types of systems. I love the name of your machine, Hilbert. That just comes up all the time in physics — using “Hilbert space,” of course, to describe that giant space where other-dimensional things can happen. It’s a pretty great, awe-inspiring name, but hopefully, your machine is actually really here in this world and not lost somewhere in Hilbert space. It will do a lot better that way.
That’s great, yes.
Many a joke to be made about an infinite amount of space.
Yes. I don’t want to get too lost in the physics jokes, but, with your approach, if I understand, it involves using lasers to slow down the atoms to make them cold — basically to get below five kelvin.
Yes, that’s right. It’s actually to get below five microkelvin — five millionths of a kelvin. We’re six orders of magnitude colder than the liquid helium-based approaches. One of the real core areas of expertise at ColdQuanta is the manufacture of ultra-high-vacuum cells. We’re one of the leaders in the world at doing that — we provide these cells to other quantum companies and research labs around the globe — and the core of Hilbert is one of these cells. You could literally fit it in the palm of your hand. We use various techniques for putting cesium atoms in that cell, and then a grid of lasers traps the atoms in a 2D array. You’ll have a 10 x 10 array of 100 qubits. You could have a 20 x 20 array, or 400 qubits, and so on and so forth. It’s the lasers that are trapping the atoms in place and reducing their momentum, and thus reducing their temperature.
It’s sort of like a reverse microwave, right? Instead of exciting particles to make them hotter, you’re slowing them down and chilling them quickly. I assume that means a lot less machinery involved. This machine might not end up being in this giant, cold environment — the IBM machines, their giant cylinders, the chandeliers inside of them, etc. Are you imagining this being something that could fit in a rack one day?
Yes, that’s exactly right. I think one of the terrific advantages of the cold-atom method — and we can talk about the advantages around scalability and connectivity — is the fact that we don’t need dilution refrigerators. Interestingly, Google shared, in their recent quantum symposium, this graphic of a million-qubit device, and it was essentially the size of a basketball court, all of which has to be frozen to 3 kelvin, 4 kelvin using techniques that don’t even exist today. Whereas, with the cold-atom approach, we could scale our array to hundreds of thousands — potentially even millions — of qubits in an area the size of your fingernail, inside a single glass cell that would fit in your hand.
Ultimately, the goal in the longer term is to reduce the form factor down to the size where it would fit into a couple of 19-inch rack-mountable units. If you think about that, it opens up a terrific array of potential use cases: Think of a quantum computer at the edge of a network, widely distributed, or quantum computers on satellites as part of a quantum communication network, or on airplanes or vehicles. The ability to actually shrink these things down and deliver very high-scale quantum computers — very high qubit counts — in a highly portable, small form factor would open the industry’s eyes to use cases that are probably not even in the consideration set today.
You mentioned Google, and they were also talking recently about that ratio of physical to logical, and their latest guess — and it has to be a guess, because Sycamore isn’t that big, but they’re guessing about 1,000 physical to get one good-quality logical. Do you guys have any way of even guessing how many physical you would need to get an error-free or very low-error logical qubit?
One of the other advantages here of cold atom: the way that we do entanglement with neutral atoms is through highly excited Rydberg states. Without getting into the detailed physics behind it, what that essentially enables us to do is have a high degree of entanglement at quite large separations within the array. In our array, the atoms are separated by on the order of three microns, and the Rydberg interactions can happen over much broader distances. We’ll eventually be able to get to 50:1, 60:1 — potentially, even greater connectivity than that. Then, if you think about error correction, error correction is a function of the qubit architecture and configuration and of this degree of connectivity.
We believe that the ratio of physical to logical qubits in our cold-atom system could conceivably be an order of magnitude lower than that being posited for superconducting. So, to get to those 1,000 logical qubits — to start to do really useful work — we may conceivably only need 100,000 physical qubits, and again, we can do that in an extremely small form factor with extremely high stability.
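To make the overhead concrete, here is a quick back-of-the-envelope sketch of the two overhead ratios discussed above. The figures are the illustrative ones from the conversation, not vendor specifications:

```python
# Back-of-the-envelope error-correction overhead, using the illustrative
# ratios from the conversation (not vendor specifications).
logical_target = 1_000            # logical qubits for "really useful work"

superconducting_ratio = 1_000     # ~1,000 physical qubits per logical qubit
cold_atom_ratio = 100             # an order of magnitude lower, as posited here

print(logical_target * superconducting_ratio)  # 1000000 physical qubits
print(logical_target * cold_atom_ratio)        # 100000 physical qubits
```

The order-of-magnitude difference in the ratio translates directly into an order-of-magnitude difference in the machine size needed for the same logical capacity.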
Yes, that’s incredibly exciting. Do you have an approximate timeline for, let’s say, that 100,000-qubit machine with the lower ratio?
Yes. From a time-frame perspective, I’ll say we are laser-focused right now on the launch of Hilbert, which will be toward the end of this year. Looking further out, the five- to 10-year time frame is really when the whole industry is talking about reaching that 1,000-logical-qubit milestone and the quantum-advantage milestone, and we’ll be right in that same time frame.
You think you’re on track with, let’s say, what Google is projecting — it’s just that you’re going to need way fewer —
Yes. I think it’s a combination of what has to happen to get there. If you think about scaling up from 100 qubits to 1,000 to 100,000, you think about scaling up connectivity. There’s certainly engineering work that has to happen to get there. We don’t need breakthrough scientific discoveries on that part. We have a high degree of confidence in our ability to continue to scale to drive that growth in qubit count, to drive that growth in connectivity, to start to do some really interesting things with the form factor over the next several years.
You’re going to have this first machine, Hilbert. You’re going to have it available initially with some kind of private cloud before it’s available with one of the bigger environments?
Yes, that’s right. That’s just from a timing perspective. That cloud will be a ColdQuanta cloud that will enable customers to connect, to run circuits. We’ll be supporting Qiskit and Cirq from day one. The customers will be able to submit circuits, run those circuits on the quantum computer, and we’re in conversation with most of the major public cloud providers about integrating it into their platforms as well.
That’s pretty exciting, because up until now, using Cirq hasn’t really gotten you hardware access on the back end. It’s been very much “Look how pretty it is in my simulator” — that’s about it.
There are languages that people will already know. How do you envision access working? Will it be some kind of per-shot system — if you have to run it, let’s say, for 1,000 shots or something like that?
Yes. We’re talking to customers right now about preferred business models, and what we’re hearing from customers that want to do real work is that access either on a dedicated basis for some period of time or via block-based scheduling seems to be the preferred method. But we’ve built the system in such a way that we can be agnostic to the overall business model and billing structure.
You might be able to set up like an hour if someone needed it.
You can imagine, because of the sizing, that one day, people could have one in their own environment and then do something dedicated, because I think real time is a big issue. When we’re talking to customers about use cases, all these real-time ones keep coming up. We just don’t see a path for it right now. For example, fraud detection — we’re working on better analysis of what might be credit card fraud, but if you don’t have 24/7 access to a machine, you’re not going to be able to implement something like this. Do you see a time where you’re able to mass-produce and get these machines in the hands of large customers?
The question of moving from today’s model in quantum computing — which tends to be, irrespective of the industry vertical, very much a project-based model — to one where the systems are really used in production is one of the really pivotal transition points that’s going to happen here. As soon as next year, we’re going to start to see this happen, initially in some limited ways, and then, once these have been demonstrated, really scaling up.
As you think about that, then, the model that will come to dominate in the short term is one where quantum is a piece of a classical production workflow and where elements of a problem are handed off to the QPU as part of that broader production system, but ultimately, that QPU needs to be available, as you say, on a 24/7 basis. That might be dedicated capacity running in the data center of the quantum hardware provider, assigned to that particular production process; it might be a machine built and running on-site within the data center of a more traditional hardware provider; or, ultimately, over time, these machines may be mass-produced, and you’ll see them running at the network edge. Certainly, in the short term, some very interesting approaches will emerge as these machines find their way into true production systems. We’re engaged in some of those conversations right now.
If you roll the clock forward and say, “How does this play out over the remainder of the decade, maybe in the five- to 10-year horizon?” then, absolutely, you’ll start to see quantum computers take their place in the data center and within the network, because the amount of processing that gets handed to the QPU will be an increasing part of the solution set.
I’ve become quite fond of this analogy in the AI space — what we saw happen over the last decade in the development of AI was that there was a confluence of factors: There was the development of the hardware — the emergence of GPUs and, ultimately, of TPUs that were capable of handling the complex linear algebra calculations at the heart of AI. There was the emergence of algorithms — tremendous development of AI algorithms — and an explosion of data for training. It was those three factors that led to the growth and development and, ultimately, the business impact of AI and machine learning.
In the quantum computing area, there’s a similar trifecta: There’s, one, the development of the hardware and ColdQuanta is absolutely a part of that new modality — higher qubit counts and higher connectivity — and the whole industry is pushing in every part of the parameter space on the development of the hardware. The second is absolutely the development of algorithms — some tremendous work happening on the quantum algorithms — and then the third important one is the development of use cases, and that’s the area where we’re just starting to scratch the surface of what the use cases are not just on these stand-alone problems but also, ultimately, in production.
Where can these devices add value as part of a real business-production workflow in the near term? I think that’s where you’re going to start to see the innovation happen over the next year to a couple of years in concert with the innovation and hardware and the innovation of algorithms that will start to deliver some real business value.
For now, it still feels very much like quantum computers are just for daily runs. You can have a use case where it’s your end-of-the-day settlement or this one optimisation of your portfolio: You’re setting it up, you’re doing it, and then you walk away, because you don’t have access all the time anyway. The use cases will help determine where that goes. Have you been able to measure how the performance compares in any way? We could talk about quantum volume, we could talk about algorithmic qubits — things like that — or maybe even benchmarking. Did you guys use any kind of real-world analysis like that?
The benchmarking question is a really interesting one. We thought a lot about “What is the right benchmark?” There’s quantum volume; Atos has some interesting work with Q-score and others. Ultimately, what you will see happen is, it will be a suite of benchmarks, because different benchmarks are measuring different things to a certain extent. Some of it is self-interested — choose the benchmark that puts your particular device in the best light, even if it may not actually represent anything that has real meaning from an end-user perspective — so, ultimately, we’ll see. We’re part of a number of these consortia that are working to figure out the right set of benchmarks so that we’re comparing these devices in an apples-to-apples way. We’ll certainly be running benchmarks and issuing results once Hilbert is up and operational.
There haven’t been any initial guesses as to how it compares with volume or anything like that?
If anything, they will be guesses at this point, and so we’d rather share real data than supposition.
For me, it’s benchmarking all the way. I would like to see, going forward, qubit counts alongside some kind of benchmark, or something like that.
But importantly, it has to be the right benchmarks, and it has to be multiple benchmarks, because you can always choose — you can create a benchmark, or you can choose a benchmark, but the question is, “Does that really mean anything?” The same thing is true in classical computing. It’s not just one benchmark that the industry has gravitated around, but it’s a whole set that ultimately give you a full sense of the value and the utility of that particular device.
Let’s talk a little bit about the software interface. You said we could do, let’s say, Qiskit. Do you have any kind of simulator that you’re going to create so people could test their circuit and then apply it directly to the machine?
We’ll be releasing a simulator in advance of Hilbert’s release, initially making that simulator available to customers. That simulator will have the same set of qubits. It will have the connectivity parameters, the gate set and the noise characteristics, to enable customers to run their circuits locally prior to running them on the hardware.
You’ll be able to run the simulator — you’ll be able to download it and play around with it?
Will you be able to simulate all 100 qubits? Usually, that seems to be a problem — going above a certain amount on a laptop, maybe 20 or whatever.
It will be obviously limited by the processing power of the device that you’re running on — simulating 100 qubits would be challenging computationally.
I don’t think it’s possible.
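The exchange above comes down to simple arithmetic: a full statevector simulation has to store 2^n complex amplitudes, so memory doubles with every qubit added. A quick sketch:

```python
def statevector_bytes(n_qubits: int) -> int:
    # A full statevector holds 2**n complex amplitudes;
    # at complex128 precision each amplitude takes 16 bytes.
    return (2 ** n_qubits) * 16

# 20 qubits is ~17 MB; 50 is already ~18 petabytes; 100 is astronomical.
for n in (20, 30, 50, 100):
    print(f"{n} qubits -> {statevector_bytes(n):.3e} bytes")
```

This is why classical simulation tops out around 50 qubits even on supercomputers, and around 20 to 30 on a laptop.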
You let me know if you have a laptop that’s that powerful. I’d like to get one as well. Also, one of the interesting aspects of what we’re doing with the cold-atom approach is that we can do certain kinds of gates that are just not available in other modalities — for example, global gates. If you think of an algorithm like QAOA that starts with a Hadamard across all of the qubits: rather than having to apply those Hadamards qubit by qubit, with ColdQuanta, you’ll be able to apply a single global Hadamard gate across all of the qubits in the circuit simultaneously. Those are the kinds of things as well that we’re building into the simulator to enable customers to really test out “How would my circuit, how would my problem, actually be applied with Hilbert once it’s up and available?”
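As an illustration of that opening QAOA layer, here is a toy statevector sketch in plain NumPy (not ColdQuanta’s API): applying a Hadamard to every qubit of |0…0⟩ produces the uniform superposition that QAOA starts from, whether the Hadamards are applied one by one or as a single global gate:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard

def hadamard_on_all(n_qubits: int) -> np.ndarray:
    # Tensor the Hadamard with itself n times to form the "global" H gate.
    gate = np.array([[1.0]])
    for _ in range(n_qubits):
        gate = np.kron(gate, H)
    return gate

n = 3
state = np.zeros(2 ** n)
state[0] = 1.0                      # start in |000>
state = hadamard_on_all(n) @ state

# Every one of the 2**n basis states now has amplitude 1/sqrt(2**n).
print(state)
```

On hardware with a native global gate, this whole layer is one pulse instead of n sequential single-qubit operations, which shortens the circuit and reduces its exposure to decoherence.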
Will this mean that you’ll have some kind of customised tutorials? Because right now, if you go to Qiskit, the experience is set up for the machines you might be running on, and the simulators might burn out. Will there be some different ones that you’ll introduce to the community?
Yes, we’re in the process of developing the programming guide right now.
It sounds like it’s going to be just a little different. With Qiskit, for example, IBM does a whole bunch of reorganising, and sometimes rewriting, of the code before it gets down to that pulse level. Is there a similar layer in your stack? Is that completely rewritten, completely new? I’d imagine it has to be, to optimise this — to actually control these qubits in the end.
That’s right. The pulse-level controls — obviously, we’re using lasers, we’re using Rydberg excitation. This is not microwave pulses issued to transmon qubits. Yes, under the hood, it’s operating in a very different way.
Then, thinking about that, with IBM, you can go in and manipulate that if you’re more of an advanced developer. Will there be levels of that kind of extreme customisation — like going down and tweaking — or is that in the future?
That will be something that we will deliver as part of the road map. It won’t be a day-one capability, but certainly something that we’re looking at for the near term.
They did a lot of work recently on getting all their software to run faster. They claimed 100X speedup recently in the running time.
Yes. What’s interesting, though, is that when we talk to customers, what we’re hearing is, “What we’re looking for is greater abstraction, not greater individual pulse-level control.” I think there certainly is a market for that — some of the more forward-leaning and advanced organisations that have teams of quantum experts in-house want to get down to that level of control, and there’s certainly a tremendous amount of interest in the research community — but as you think about how this scales up more generally, and you’re talking to companies that are looking for applications of quantum in their business, in fact, they’re moving in the other direction.
We’re seeing tools now that are starting up — Strangeworks, Classiq and some others — that are hoping to speed things up and make it easier for developers to write and forget.
Are you talking to any of those guys about tweaking the back-end targets on their tools or anything like that?
We’re talking to all of the major software providers and platform providers. We’ll be making some announcements there in the near future.
That’s great. That’s important. If everyone gets too lost in the weeds, this industry will just stagnate.
Absolutely. It will take some time before quantum computing is just another tool for the average programmer, for the simple reason that even for folks who are fairly well-versed in quantum computing, questions like “What is a quantum computer good for?” “How is it different from a classical computer?” “What algorithms can it run?” “Where are the best applications?” are still challenging to answer. There is a real place for those intermediaries to provide an abstraction layer that enables algorithm developers and programmers to incorporate quantum into the workflow in the right way and in the right place, and to be, to a certain extent, agnostic about the underlying complexity.
There’s still tremendous work to be done there, and value to be created. We have a team in-house of algorithm experts that works with customers to implement and develop circuits, but there’s a tremendous amount of value that comes from the interface with these software providers, because there’s such a tremendous demand for that capability that it vastly outstrips supply at the current time.
By the time this airs, the simulator might be imminently available or launching soon. When the actual hardware launches, will you be going public with — you don’t have to say the name now, but would you be going public with some company that’s already tried some use case secretly? You hear D-Wave like, “Oh, Volkswagen — cans of paint.” Is there some out-of-the-gate thing? Of course, it’s not a requirement for hardware. I was just curious if there was something like that that would be appearing.
That’s certainly the plan. We have a lot of interest in using Hilbert, and testing circuits on Hilbert. As I say, our principal focus right now is standing the service up, making it available, but at the same time, engaging with customers to start to look at potential applications.
As soon as we hit Stop on the recording, you’re going to tell me the customer. I’m kidding, but I’m sure there’s a couple already.
Maybe I’ll come back in another episode and talk to you with the customer about the actual results.
Yes. That’s, of course, of interest. You’ve been doing some internal playing around. Do you feel like there’s any one use case right now that you can just abstractly talk about that shows promise, that maybe, even in this very iteration of Hilbert, might show some kind of advantage soon?
Well, I’ll separate the question of approaches that show promise from the question of advantage, because advantage is a much deeper and more nuanced question. In terms of approaches showing promise, the two that we’re most excited about in the near term — and this is not massively dissimilar from others in the quantum computing arena — are optimisation problems, by virtue of the large qubit count, and some very interesting problems in quantum chemistry that we’re looking at right now.
Are you going to have anything — let’s say, like D-Wave has their hybrid approach, which has classical computing built in that parses some of the jobs over to the machine. Are you going to have something dedicated and built like that, or will it all be up to the customer to develop that side of it — whatever has to be done classically as part of a workflow?
Yes. Our principal focus — or, I would say, maniacal focus at this point — is building world-class quantum computers and driving as hard as we possibly can to scale, and to scale in the dimensions that I’ve talked about. In terms of the classical components of the workflow, we’ll do that either in partnership with customers that have preexisting classical systems and infrastructure, or in partnership with providers who will take on the classical part and look to ColdQuanta to provide the quantum part of the solution. We won’t be building our own classical environment, but rather will partner with others to do that — for the simple and important reason that there is so much work to do in building and developing and scaling quantum computers that we need to remain focused on that enormous task in this really exciting field.
It sounds like just because of these different approaches we talked about, you might already have some optimisation to the optimisation problems in the form of how your Qiskit setup will be, the types of gates you can do, things that are unique to your machine. I feel like with the software stack, there’s always so much more you can get out of the new piece of hardware. I always like to hear what folks are doing in that respect. This sounds like an amazing machine, and I can’t wait to get my hands on it. I want to try and get access to it as soon as possible — I know that. At Protiviti, I know we’d love to try some experimentation on this thing, definitely.
I would love to get you on it, and I’ll extend an invitation if you’re ever out in Boulder, Colorado, or want to make the trip out there. We’d love to have you, show you the lab and show you the computer in operation in the flesh, so to speak. It’s a tremendously impressive feat of science and engineering to actually see. It’s one thing to connect online through a web interface and submit jobs; it’s quite another to actually see the machine in operation.
I’m going to take you up on that because it is a physics nerd’s dream come true to be able to say, “He moved through Hilbert space.”
With that little silliness, I’ll let you go. Thanks again, Paul, for coming on, and talk to you soon.
It’s very nice talking to you. Thanks.
Thanks to Paul Lipman for joining today to discuss ColdQuanta. Thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You’ll find links there to what we’re doing in quantum computing services at Protiviti. You can also find information on our quantum services at www.protiviti.com, or follow ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum curious.