Transcript | Trapping Ions for Powerful Quantum Computing
It’s hard to improve on the purity of an atom. Identical and easy to find, atoms of an element such as ytterbium can make flawless qubits. We only need to be able to trap and control them. Can using trapped ions as qubits therefore yield the most powerful quantum computers on the planet? How scalable is this approach on the road to quantum advantage? Join host Konstantinos Karagiannis for a chat about trapped ions and the creation of IonQ with industry pioneer Chris Monroe.
Guest Speaker: Chris Monroe from IonQ
It’s hard to improve on the purity of an atom. Can using trapped ions as qubits therefore yield the most powerful quantum computers on the planet? We take a look at this approach and how it led to the creation of IonQ in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.
If you’re a nerd like me in the physics world, our guest today needs no introduction, but I’ll do it anyway. He’s the chief scientist and cofounder of IonQ, Chris Monroe. Thanks so much for being here.
It’s a pleasure to be here. I look forward to our discussion.
Please tell our listeners a bit about how you first got into quantum. You’ve had such a long career in this. I’d love to hear the story.
I have to say I backed into the field about 30 years ago with Dave Wineland at NIST in Boulder, Colorado. We were a research group in the government’s atomic-clock division, playing around with individual atoms and making precision measurements, and it turned out that to make better clocks, it would help to entangle the atoms so that the clock ran with better signal-to-noise — a little esoteric and very recherché.
As it turned out, we were building the first quantum logic gate and what we would later call a baby quantum computer, but we didn’t know those words at the time, and the field just exploded in the mid-’90s when Peter Shor’s algorithm hit, showing that a quantum computer can do something interesting. Up until ’95, the field had been on the fringe. A few notable folks like Richard Feynman and many other physicists played in this field, but it was on the fringe — very few publications. With Shor’s algorithm, suddenly there was a killer app right there, and Dave Wineland and I totally rededicated the team and moved in this direction fast.
It’s been a long ride. In the last 10 years, it’s been super interesting, because industry is now playing a huge role in the field — rightfully so. After a couple of decades of fundamental research, it’s time to build a real device — not just something that hangs together with glue and tape and has to be run by a team of graduate students; a computer has to be usable by a third party to be useful. It’s an exciting time with all the companies building different types of hardware, and people are still looking for more applications. I got started early, more on the academic side of things, but with IonQ, my cofounder Jungsang Kim and I have been having a great time translating laboratory research into real products.
And of course, working with Wineland must have been great — he ended up getting the Nobel Prize in Physics in 2012, and that triggered this last decade of everyone being so super interested in this area. I was doing quantum computing in 2012, so I remember how exciting that time was. You also got to work on teleportation.
Yes. Teleportation, actually, in practice, is the way we move quantum information around, like it or not. It’s a bad word, because when you think of teleportation, we’re thinking of Captain Kirk and his 10²⁶ atoms all being teleported, and it’s fairly safe to say that we’ll never be doing anything that complex. But when you’re playing around with just a few atoms and moving the information around, that’s the trick. Teleportation is not moving matter. It’s not moving mass. It’s moving the information encoded in that mass, so it gets rapidly philosophical, because what makes Captain Kirk what he is is not all that carbon and calcium and iron and oxygen and so forth; it’s the particular interactions and the information encoded in them.
And the problem, of course, is that if we knew how to deal with encoding a human in terms of zeroes and ones, we’d be a lot further along in lots of different avenues, but it’s much too complex a system to think about. But teleportation is a valid term for moving around information, and indeed, about 10–15 years ago we, for the first time — I say it with a straight face — teleported a single atom over a distance of about one meter, and of course we had two atoms at both places, and we moved the information encoded in that atom from A to B. I am saying it with a straight face because it’s called teleportation, but we never moved any matter. It was all the information. I feel comfortable now with that definition of teleportation, and it actually fits well.
And, of course, if you had a Star Trek teleporter, the easiest way to kill Captain Kirk would be to change the color of his shirt to red before he arrived at the planet, and that would take care of him.
So, we have the basis for gates and teleportation, and this leads us to the development of IonQ. Do you want to tell about how you got that started?
Trapped atomic ions are one of the leading platforms to build a quantum computer. You mentioned Dave Wineland. When we were thinking about the end game — this is the 1990s — we were thinking long-term: This is going to be a computer. It’s got to be solid state.
We know that Moore’s law gave us billions of transistors on a little chip and all the wonderful things we’ve learned about how to engineer that type of a system. We needed to figure out how to translate this to the solid state, and I have to say, it’s been 20–25 years, and the solid-state systems, I’m not sure they’ll ever work, to be honest. Part of the problem is, you need nearly perfect surfaces and materials. You have to have zero defects to keep the noise levels down, and there’s a lot of wonderful research that’s pushing the limits on that, but in terms of building a system right now that’s usable, say, in the next five years, I just don’t see it.
We’re playing around with these individual atoms, and if you saw the lab, you would be horrified, just because it’s huge — optical tables, lasers everywhere. Like I said, duct tape and glue. Things are just barely working, with an army of grad students and postdocs, but a lot has happened in 20–25 years.
And again, I’ll pay homage to Jungsang Kim, who’s an engineer and the cofounder of IonQ, who looked at this technology and said, “You know, they could use a dose of engineering. They could use lasers that are tiny and work all the time, so you just plug them in and they work like black boxes.” After these intervening 20 or 30 years, the main challenge to scaling ion traps right now is entirely engineering. It’s not about breakthroughs in research.
When we started IonQ in 2016, we certainly had that vision. In fact, we had a very concrete and specific road map, an architecture to scale to arbitrarily big quantum computers: It would require a lot of money. It would require a lot of engineers, not so many physicists, to build systems smaller and cheaper. That journey at IonQ started six years ago. We’ve built six generations of systems, and three more are on the way. Every one of them is getting better and more powerful. It’s great because even though I’m a physicist by trade, I wouldn’t say that we’re doing all that much physics in the building of these systems.
We’re using them for wonderful science — not just physics, but other areas: chemistry and even logistics, finance, many, many different areas — so, there’s always going to be science to be had, but not at the qubit level. We’re done with the qubit. We’re never going to improve on a single atom. We don’t have to manufacture it. It’s given to us. You can replicate it with absolute perfection. That’s no surprise: if you have the same isotope of the same element, this is the recipe for scale. So, what we learned from the ’90s until now is that this is where companies’ sweet spot is: doing the engineering, going through engineering processes, making a standard system that is scalable and that can be used by a third party.
We started with private funding from venture capital in 2016, and the big event in the last year was, we’re now a public company. As of October 1 last year, we’re listed on the New York Stock Exchange, and that’s been quite a ride. We have a quarterly earnings call coming up, so remind me not to tell you too many deep secrets before that. But with that, being a public company, we have a big bank account allowing us to prosecute this road map over the next several years.
I was going to ask what that does. You were the first publicly traded company that was just purely quantum. What does that do to the research arm? Are there any other forces at work now when you’re developing products?
Oh, yeah. We’re growing like crazy now. Right now, we’re probably at 170 employees. We’ll probably be double that in a year, and we’ll have more offices than the one we have in College Park, just inside the Beltway of Washington, DC. Our head of engineering, Dean, was the former head of R&D at Blue Origin, Jeff Bezos’s rocket company. Dean knows his quantum physics. He has a Ph.D. in chemical engineering, but that’s not what he’s worked on for the last 15–20 years. But we want engineers. We want engineers who understand the process. They have to understand how to build our system, but that’s easy.
Once you get around the fact that we’re controlling atoms with laser beams in a little vacuum chamber, it’s all engineering at that point, so we’re building a huge team in R&D. We can afford to take lots of risks now and look at different avenues that frankly, as a small company, we couldn’t afford to go in those directions. We’re also spinning up a manufacturing team, a production team, because there’s huge demand for our systems.
I’d always thought that, at least for many years of the field, people would use quantum computers on the cloud because, after all, they’re actually low-bandwidth devices, in both inputs and outputs, so it’s perfect for a cloud. The quantum computer itself has high internal bandwidth, exploring all these massively parallel configuration spaces, but the output is fairly simple and the inputs are fairly simple. What we’re learning is, there’s a huge demand for people to actually get machines in their building for various reasons — maybe they don’t want people knowing what they’re running on them.
So, we’re spinning up a production team that will not make prototypes, but will actually make several copies with high yield of quantum computers that we can then deliver. That’s going to take a little time to spin up, but we’re expecting in a couple of years to be able to send them out the door.
That’s interesting. I always view it as something that just stays on the cloud for constant improved abilities and machines. You can always get the very latest, but like you said, there are some secrets. There are still companies that won’t use the cloud for anything.
Certainly, the U.S. government and other governments are going to be major users. They have many applications, of course — not just in data security and crypto, but also in logistics and other things — and so there’s interest all over the place from governments, and they’re very stingy. They want to have their own system. They want to learn too. To their credit, they want to get under the hood if they can, and learn how to program these things and learn how to operate full-stack from the app all the way down to the hardware side.
I’d love to talk about the state of the art in IonQ machines. Let’s start with the best that you have right now. Do you want to start at the top?
We recently announced our sixth-generation machine. It’s called Forte. To be a company, you have to have cool names for your devices. Internally, it’s a very boring thing, like C3 or something like that, but Forte is a device that has a capacity of about 32 qubits. And what’s very important at IonQ is that when we unleash a system with some number of qubits, we need to be able to do deep circuits on those qubits. If we have 30 qubits, we need to be able to do roughly 1,000 operations. That’s n². If you have n qubits, you need roughly n² gates, and this is generally true because you need to pairwise entangle all possible pairs, and if you have 30 things, there are actually about 500 possible pairs.
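As a rough illustration of the pairwise-entanglement counting described above (my own sketch, not IonQ tooling; the function name is invented), the combinatorics look like this:

```python
from math import comb

def pairwise_entangling_gates(n: int) -> int:
    """Number of distinct qubit pairs in an n-qubit system: n choose 2,
    which grows roughly like n^2 / 2 -- hence the n^2 depth rule of thumb."""
    return comb(n, 2)

# 30 qubits yield 435 distinct pairs -- the "about 500 possible pairs" above
print(pairwise_entangling_gates(30))  # 435
```

Doubling the qubit count roughly quadruples the pair count, which is why depth requirements scale so steeply with system size.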
So, that’s how you want to scale quantum computers, and this is also very difficult, because as you make a system bigger, you have to make the components better — if you make it bigger, it has to be better — and quantum doesn’t like that. When you make a quantum system big, fundamentally, a quantum system that gets too big becomes like Captain Kirk: It becomes not quantum for sure. Now, we’re not talking about 10²⁶ qubits here. We’re just talking a few hundred qubits, and that’s the level. If we can get a few hundred qubits and a few tens of thousands of operations on those qubits, that’s where we’re going to see some interesting applications come along, because you can’t touch that simulation with a classical computer. That’s where quantum computers will hit pay dirt.
This Forte system has a capacity of 32 qubits, and the fidelity of the gates is in the high 99% range — 99.7% is the average over all possible pairs. That’s where you want to be if you have about 30 qubits. That’s our most powerful system. In fact, I would go out on a limb and say it’s the most powerful system in the world, measured by how deep a circuit it can run.
And on that topic, let’s talk a moment about algorithmic qubits and how that compares. Are these purely physical qubits, or algorithmic?
I’ve set it up nicely, because remember this n-to-n² relation. The algorithmic qubit count is the number of qubits that honor this n² scaling. Let me give you an example. If I give you 1,000 qubits, but I can only do five operations, you might ask, “Why did you go through the trouble of having 1,000 qubits?” If you only have five operations, the circuits are very shallow — very low entanglement. You probably only used a couple of qubits effectively. In that case, the algorithmic qubit number would be very low, and in fact, as a math formula, the algorithmic qubit number is the lesser of the number of physical qubits you have and the square root of the number of operations you can perform, if that makes sense.
If you have 1,000 qubits and five ops, you only have a couple of algorithmic qubits. On the other hand, if you have two physical qubits but you can do a gazillion ops, you still only have two algorithmic qubits. You don’t want to skimp on either physical qubits or number of ops. At IonQ, as we build our systems, we’re building up the algorithmic qubit number.
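That min-of-two formula can be sketched in a few lines (a hedged illustration only; `algorithmic_qubits` is a made-up name, and real AQ benchmarking runs actual algorithm suites rather than this bare formula):

```python
from math import isqrt

def algorithmic_qubits(physical_qubits: int, ops: int) -> int:
    # AQ = the lesser of the physical qubit count and the
    # square root of the number of operations you can perform.
    return min(physical_qubits, isqrt(ops))

print(algorithmic_qubits(1000, 5))   # 2 -- many qubits, but very shallow circuits
print(algorithmic_qubits(2, 10**9))  # 2 -- enormous depth can't beat two qubits
```

Both extremes collapse to the same small number, which is the point: neither qubit count nor depth alone makes a useful machine.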
Now, I didn’t talk about error correction. Error correction is something that’s folded in when you need to get more depth. At IonQ, we have some error limits in our gates. We probably won’t be able to do more than about 10,000 or 20,000 ops before an error that we know about creeps in. The good news is, being able to do that many ops allows us, as long as we have enough physical qubits to spare, to encode in a very smart way with an overhead of maybe 16:1 — we like powers of two. With an overhead of 16:1, you can add another nine to your fidelity. That is, instead of 10,000 ops, you can now do 100,000 ops.
So, if physical qubits are cheap and you have a lot of them, you start to do error correction. For us, with 30 qubits or so, if we want to get more than 30 or 40 algorithmic qubits, we’re going to need maybe 1,000 physical qubits with 16:1 overhead. I’m throwing a lot of numbers out there. Powers of two are great: 1,024 physical qubits at 16:1 overhead gives you 64 encoded qubits, and those 64 encoded qubits can now do 4,000, even 10,000 operations. Again, that’s what error correction does, and the algorithmic qubit number would be 64 in this case.
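The arithmetic in the two answers above can be checked directly (just the numbers from this conversation; the 16:1 overhead and the one-extra-nine factor are taken from Monroe’s description here, not a general error-correction result):

```python
physical_qubits = 1024   # a power of two, as preferred above
overhead = 16            # 16 physical qubits per encoded qubit

encoded_qubits = physical_qubits // overhead  # 64 encoded qubits

raw_ops = 10_000              # ops before a known error creeps in, uncorrected
corrected_ops = raw_ops * 10  # one extra "nine" of fidelity ~ 10x the depth

print(encoded_qubits, corrected_ops)  # 64 100000
```

With 64 encoded qubits and ~100,000 operations, the min-of-two rule gives an algorithmic qubit number of 64, matching the figure quoted above.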
To clarify for listeners, does the term logical qubit equal an algorithmic qubit for you guys?
A logical qubit — that’s good enough. The logical qubit has to have depth, and you have to have enough of the logical qubits. Again, it’s the lesser of the number of logical qubits you have and the square root of the number of ops — something like that. I should note that not all applications necessarily follow this n-to-n² scaling. Shor’s algorithm, the factoring algorithm, for instance, is probably a little worse. It might be n² log n. Other applications — some optimisations — are actually only linear. So, in terms of algorithmic qubits, you can maybe do things in optimisation much better. Algorithmic qubits is a very conservative standard, but what’s important about that metric is, it’s a user-based metric. It’s based on algorithms. It’s based on running all kinds of different benchmark algorithms and testing all of them with your machine. It’s not abstract.
When you’re taking all that into account and the way the machines are advancing, what does your road map start to look like?
Over the next few years, if you look at the algorithmic qubit number, we’re just adding a few algorithmic qubits every year, which doesn’t sound too impressive, but again, we’re building a physical-qubit factory because we’re getting ready to unleash error correction. Over the next few years, we’re slated to hit the high 20s in algorithmic qubit number, but then, by 2025, when we start doing error correction, the algorithmic qubit number is going to start jumping up to something like 64. And then, what I haven’t talked about is our technologies to scale. You can’t just throw another atom at the machine and expect it to work. You have to group them in a modular fashion, and we have two technologies in modularity that we’re exploring right now and starting to deploy.
Now, we didn’t talk about any details, and we probably won’t, about how ion traps actually work and the gates, but like I said, there’s this rule: You can’t have too many qubits. Otherwise, noise just starts getting in. It’s also true with ions: If you put too many of these atomic ions in a single chain — and you should think here of a bunch of masses connected by springs or something — things get sloppy and you’re more susceptible to noise. We want to stop doing that when we hit about 48 or 64, or something like that. It’s an arbitrary number, but we’re not going to put thousands of ions in a single chain. So, a chain of 64.
Well, we can make many chains of 64 on a single chip. We call this a multicore architecture, and we at IonQ, and many others throughout the world on the research side, have shown how you can move ions around while preserving their coherence. The idea is, by brute-force moving atoms from one chain to its neighbor, that’s how you communicate — it’s a very direct way of communicating — and then, within a chain, you can use your lasers and so forth in the usual way.
That’s our phase one of modularity, and that should allow us to get many hundreds or maybe 1,000 physical qubits on a single chip. Now, beyond that, this is where it gets fun. We can map the information inside an atom — the qubit — onto a photon, and the beautiful thing about optical photons is that they can go through fibers — they can even go through air — while remaining undisturbed. They’re a wonderful quantum system.
The problem is, they’re not a very good memory, because they go so fast and it’s very hard to hold on to them, but we don’t want to hold on to them. We want to use photons to communicate. So, we have the blueprint to scale up almost indefinitely, like a data center, by having all these chips connected by optical fibers and a big optical switch. The great news about that scheme is that everything needed has nothing to do with quantum. A fiber has nothing to do with quantum; it’s a chunk of glass. Optical switches have nothing to do with quantum. They’re used right now to pipe around big classical internet packets, but they can also be used for single photons.
The neat thing about this modularity of scaling is that it allows us to arbitrarily connect any chips. I’m envisioning a big data center where each rack contains maybe a few thousand physical qubits, with a big optical switch that connects them all. That’s going to take a lot of money. It’s going to take a lot of integration and miniaturising, and it’s also going to require that our cost per qubit go down dramatically. This is why we launched this production group within IonQ. We’re going to start to see that in a couple of years, so that’s the scaling. Again, we’re not limited by physics here. We’re limited by engineering and money.
Listeners of the show should recognise what you just described as the basic principle of interconnect, and that’s the idea here — having a bunch of modules that work together either to make one machine or to have many machines work together as one. That could greatly accelerate the timeline if you have a few of your most powerful machines working together.
Indeed. I would go further and say anything complex is modular. We just see that. Think of the airline hubs. You’re not going to be able to fly from any city to any other city. It’s more efficient to make hubs, to make it a little more modular. Of course, classical computers already have multicore modularity on a single chip, and that’s like our multiple chains. And then they have these data centers — modular racks that you can swap in and out. You’re absolutely right — not only will it make it more powerful, you also have to. That’s the only way to scale. There’s no proof of that, but just heuristically, this is how things get big. You sacrifice connectivity, but you get to scale.
Do you anticipate ever seeing interconnect between different kinds of machines — like, for example, yours and a different trapped-ion machine, or yours and a transmon machine? Do you think that would ever be able to happen?
Certainly, between different trapped-ion machines — that’s not just a cute thing; it’s actually going to be necessary for many reasons. To drill down very briefly, this photonic interface — how you move information from an atom to a photon — requires that you blast that particular atom with a bunch of laser pulses. And by the way, there’s another qubit a few microns away, and you can’t touch that qubit with a single photon from the laser beam. The laser beam we need has 10²³ photons in it — it has a lot of photons — so you need a different type of qubit to do that, or a different isotope.
There are many ways to run that architecture, and we’re already looking at different types of ions, but that’s an easy one. Atomic ions have the same charge, and you can couple them together in a very obvious way. What you said at the end there, coupling to a superconducting qubit system, that’s hard, and the most promising way is, again, through an optical interconnect. With atoms and ions, we have a natural conduit to photons, because the energy separations in atoms are typically in the optical domain, but the energy separations in superconducting systems are in the microwave domain, and so we need to convert microwave photons.
If you’re happy with microwave photons, that’s great. The problem with microwave photons is that we don’t have optical fibers for them, and the waveguide with which you transport a microwave photon has to be held near 0 kelvin. So, we’re not going to have networks of microwave photons running around. You need to convert to optical.
There’s wonderful research doing this. They involve nanomechanics — actually having the superconductor talk to a capacitor that has a moving electrode — and that moving electrode, you bounce a laser beam off the moving electrode and that’s it, but the interface there is very recherché. There’s a lot of wonderful research going on in that direction.
I don’t think we’re anywhere near a system deployment on it. I have to say, honestly, I don’t see it playing any role in a realistic system for the next 10 years. Atoms to ions might be a little better, because you just have to connect different optical wavelengths. Atoms are very discriminating. They like their own wavelengths to eight or nine digits, so you’re not going to get lucky and find two different atoms that have the same wavelength. You need to be able to interconvert between wavelengths, and again, there’s a lot of wonderful research and devices that do that.
That’ll probably happen first within the realm of atomic qubits, broadly speaking, and then maybe quantum dots. These are solid-state optical qubits. You may have heard of diamond NV centers, which are a particularly neat quantum system. An NV center is a defect in diamond, and it emits this nice red photon that you could hook up to an atom. That looks more promising. I would add, you need to be motivated. I don’t see the motivation just yet.
If you can share, do you know of any other technologies that you’re keeping an eye on that might affect the timeline, besides this improved interconnect capability?
In terms of tech, from a high level — at IonQ, we build quantum computers. We’re not an ion-trap company, even though ion is in our name. If there’s another platform that looks great, we’re going to go after it.
If topological continues to show some promise, you might delve into that one day?
Yeah. I’m going to be dead before that happens, sorry to say. I love the ideas behind it. It’s beautiful research, but in terms of a real system, I don’t see it happening in my lifetime.
So, maybe photonic instead?
You’re right. It’s a fast-moving field in the area of photonic devices — integrated photonics. We’re not doing much of that yet. Our ions are trapped on a chip, actually. They’re suspended above a semiconductor chip, so you might ask, “What the hell are you doing not putting photons and your waveguides on that chip?” and yeah, we’re working on that. I’m not giving away any secrets here. There are wonderful devices and lots of startups out there playing around with these devices, so that’s definitely in our future.
I mentioned cost. If we’re going to bring cost per qubit down, we can’t have lenses and lasers that are separate. It all has to be integrated in the long run. There is a path to doing that, so it’s very exciting.
Great. I’d love some of your predictions about how you think the next few years might go — any thoughts about when we might see provable, benchmarkable advantage, and if so, what use cases you think might see it first? I know you guys actually focus on some use cases too.
We have a dedicated applications team for many reasons. One of them, of course, is that the expression of our gates is quite unique, and users who are going to run algorithms want to wring every ounce of efficiency they can, so they need to know the language, the gates, the connectivity graphs and so forth, and so our apps team works with people in companies, and researchers, who want to run certain algorithms.
Now, one phrase you used in your question is “provable advantage.” That word “provable” is interesting, because the great thing about Shor’s algorithm, the factoring algorithm, is, it’s provable. Once you get beyond some of the background math, it’s just a couple of lines. It’s very elegant, and you can show that the complexity changes from exponential to polynomial if you use a quantum computer, which is a big deal mathematically.
Now, unfortunately, for almost all other applications — or at least many of them — you can’t prove such a change in complexity. The first applications, to answer your question, will be in the form of heuristics. They will be use cases like an optimisation problem or a better approximation to the traveling-salesman problem, where you can’t prove that you got the best answer. In fact, we don’t think quantum computers can actually get the best answer in a hard optimisation problem, but if they can do better than classical computers, you don’t need any proof. That’s proof enough that it’s useful.
This is a very academic community, and academics love proofs, and so heuristics are downplayed in the entire field — the use of heuristics to do certain optimisations or even to understand certain aspects of molecular dynamics, something on the fringe that you couldn’t calculate using density functional theory on the biggest supercomputer out there.
So, these are going to happen, and they may happen in the finance world, where all financial firms have these models. The models may not be right to begin with, but the models themselves are hard, and in getting a better approximation, there’s a lot of interest. I don’t want to sound like I’m overhyping by bringing in climate change, but understanding climate models — the models may not be right, but we can’t even solve the models — so that’s an area where quantum computers might help in refining models in many different areas.
Of course, you brought up Shor’s a few times. Do you have a guess as to when you think we’ll have the theoretical 2n + 2 qubits needed to attack RSA?
My friend, Bill Phillips, also a Nobel laureate, in laser cooling, he used to say it’s a 50/50 proposition: In 50 years, there’s a 50/50 chance that we’ll see Shor’s algorithm. He used to say that 20 years ago. These things always seem 20 years out, but the one thing that’s very hard to predict is, if we find an application anywhere, maybe one of these heuristics that a company uses for some kind of optimisation, the dominoes are all going to fall. That will pave the way to hit Shor’s algorithm. So, that’s important that we find some application sooner rather than later, because that’s going to pave the way to do Shor’s.
Now, unfortunately, Shor’s algorithm, by some counts, requires many millions of qubits and many billions of operations, although there’s research that keeps shrinking those numbers. But those numbers are way bigger than anything we’re thinking about over the next five years. I’m pretty comfortable with saying 20 years, but that’s what I’ve always been saying, and we’re closer now to the near-term apps that could accelerate that.
You see these other approaches that get pitched every once in a while, like the QFT and other things — other ways that might minimise the number of actual qubits.
So, if people want to actually play with IonQ machines, there are a few ways to access them on the cloud now. Do you want to talk about how those compare?
Early on, we got our systems on the AWS cloud, Braket, and the Microsoft Azure cloud. This was a couple of years ago. We’re now on GCP, the Google Cloud Platform. We’re the only players on all three major cloud platforms, and we’re delighted that the software-development kits — Qiskit from IBM, Cirq from Google, Q# from Microsoft and so forth — all express gates in IonQ language and IonQ terms, rightfully so. The cloud providers are very interested in getting users to program these things.
So, indeed, you can get access to IonQ machines. In fact, it’s interesting: As a business, we have to decide, “What are we going to put on these cloud providers?” We don’t typically put our latest and greatest system, because the latest and greatest system gets better every month. If we put a system on the cloud, it doesn’t get better every month, because it’s running 24/7. The whole point is to put it on the cloud so that it’s stable. Fortunately, the field’s moving so fast that we’re going to soon update the offerings on these commercial clouds. But internal to IonQ, we always use our best system — I don’t want to say for research, but we’re still making it better — and the apps team will partner with specific companies or partners that will work with us on the latest machine, so that’s what we would call our internal cloud. We have that. It’s not something anybody can get access to, but give us a call.
Chris, thank you so much. This has just been wonderful. I definitely believe in trapped-ion, so this was quite an awesome interview.
Yes. Good. It’s been a pleasure. I love talking about all this stuff, and going back in time as well.
Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.
Chris Monroe has been involved in quantum for about 30 years — yes, from before Shor’s algorithm was created. Along the way, he worked on the basics of qubit gates and quantum teleportation, the building blocks of quantum computing and networking.
Chris was and still is skeptical about solid-state qubits because of the potential for defects and noise. This is why he and his IonQ cofounder focused on trapped atomic ions when starting their company in 2016. Ion traps seemed to only have engineering challenges to face for scaling. The purity of the qubit is there — no need to improve on a single atom.
IonQ was the first pure-play quantum computing company to go public last year. They’re still growing and hiring, both on the application side and in manufacturing, to meet the demand for their systems. Some customers want these machines on-prem instead of in the cloud.
IonQ is also not limited to trapped ions. If they find a better qubit technology, they’d consider making systems with that as well.
The latest system from IonQ, Forte, has a capacity of 32 qubits. IonQ systems prioritise being able to run deep circuits, which is why raw qubit counts can be deceptive. Thanks to its qubit fidelity, Forte is one of the most powerful machines available today. Chris feels that when IonQ gets a machine with this fidelity to 100 qubits or so, they’ll be able to do some amazing things in business use cases.
To clarify how the fidelity of qubits affects the system, IonQ uses algorithmic qubit numbers to rate their systems. IonQ is also hoping to preserve a 16:1 error-correction ratio for the future creation of logical qubits. IBM has estimated ratios as high as 1,000:1 per logical qubit, so 16:1 would be a huge improvement. The algorithmic qubit number will modestly increase over the next few years, but by 2025, IonQ hopes to have 64 algorithmic qubits that are error-corrected, which should provide reliable results for advanced use cases.
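To put those overhead ratios in perspective, here’s a quick back-of-the-envelope sketch. The 16:1 and 1,000:1 figures and the 64-logical-qubit target come from the discussion above; the rest is simple illustrative arithmetic, not an official IonQ calculation.

```python
# Rough physical-qubit cost of error-corrected logical qubits at a given
# physical:logical overhead ratio (illustrative only).

def physical_qubits(logical_qubits: int, overhead_ratio: int) -> int:
    """Physical qubits needed for `logical_qubits` at `overhead_ratio`:1 overhead."""
    return logical_qubits * overhead_ratio

# Target of 64 error-corrected (algorithmic) qubits:
at_16_to_1 = physical_qubits(64, 16)       # 16:1 ratio  -> 1,024 physical qubits
at_1000_to_1 = physical_qubits(64, 1000)   # 1,000:1 ratio -> 64,000 physical qubits

print(at_16_to_1)    # 1024
print(at_1000_to_1)  # 64000
```

The gap between roughly a thousand and tens of thousands of physical qubits is why a lower error-correction overhead matters so much for near-term scaling.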
IonQ is also planning a multicore interconnected architecture, chaining together these 64-qubit chips. This modular approach could allow for scaling into the so-called dangerous levels of thousands of quality qubits. As we’ve discussed on this show before, little QPU modules can combine into big quantum computers, and multiple quantum computers can work together as one across quantum networking. Chris was one of the first to have this idea of modularity as an approach to scale.
IonQ has a large applications team and is working on improving the efficiency of every gate for end users. Provable advantage is tricky because of benchmarking challenges. Optimisation might provide advantage first, because its results will be easier to compare to classical ones. Chris feels that once advantage is shown, excitement in the industry could accelerate further advances.
Want to try an IonQ system? They’re available on all three major cloud platforms and can be reached directly from the company too.
That does it for this episode. Thanks to Chris Monroe for joining to discuss IonQ and their bleeding-edge machines, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum curious.