Open source helped revolutionise classical computing. What can it do for quantum? EPiQC is a collaboration across five universities that is developing a range of open-source tools to connect algorithms to quantum computers, including programming languages, compilers, optimisers, and more. They’re also working on benchmarking, which is much needed as different types of quantum systems come online, and on helping future quantum programmers enter the workforce.
Guest Speaker: Fred Chong, Founder, EPiQC
Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organisations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.
This spirit of open source is alive in quantum computing. A group of universities is working on developing algorithms and software that could greatly improve the performance of practical applications. They’re even working on benchmarking, which is important in identifying the real-world performance of quantum systems. Find out what’s involved in squeezing everything possible out of quantum computers in this episode of The Post-Quantum World.
I’m your host, Konstantinos Karagiannis. I lead quantum computing services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.
Today, our guest is a scientist from the University of Chicago who also founded EPiQC, and he’s also a cofounder of a company that you’re probably familiar with, Super.tech. They were on recently with Pranav. I’d like to welcome Fred Chong. Thanks for coming on.
Thanks. Maybe I should clarify. EPiQC isn’t exactly a company. It’s a research project — a research center funded by the National Science Foundation.
That’s right. There are five universities involved. The reason that we have so many universities is, we try to cover the disciplines that span a large range of quantum computer science.
Great. Your big hope is to do what a lot of us are doing in the industry — get to a point where we can make practical use of these machines in the near term. That’s the grail here, right?
Yes, absolutely. A key factor that we’re focusing on is building software that connects the algorithms or applications to the physics of the machines.
So, you’re trying to go for more of a device independency? You envision some way that these applications can run on any kind of backend target?
Do you guys, then, go outside of the normal programming approaches, or do you, at some point, hand off? Say you were going to use an IBM target: at the very last step before the programme goes to the quantum computer, it gets rewritten and optimised to make sure there are no mistakes, nothing that will cause some kind of resource waste. Do you still do that handoff at the end?
We tend to hand off at a much lower level. IBM actually does expose a method to essentially get almost directly to their hardware, and so we skip a lot of their steps and try to integrate the kinds of things they would do into the upper levels of our software.
Do you envision allowing this to be accessible, like a cloud environment where people could play in your walled garden of sorts and then hand off to hardware targets?
That would be the end goal. EPiQC is a research project, so all our code is open source and available to people, but we don’t typically maintain services for cloud environments. You can use our code to set that up. The purpose of spinning out Super.tech, which people heard about already, was to take that work and put it into a more usable environment for customers and users.
I saw your GitHub, and there are six repositories there right now. Do you have anything coming soon there that you want to share?
That’d be great.
Here’s an example, which is a little bit relevant to your question about how we interface with IBM. A typical quantum programme would be translated via our software, compiled down to quantum instructions. They look like the assembly-level instructions for microprocessors, such as x86 (the Intel instruction set) or the ARM instruction set. Normally, we would go down to instructions and then hand off to IBM, and IBM would take those instructions and translate them into the microwave control pulses that implement those instructions on the IBM machines. But it turns out that we can build software that directly translates programmes into those control pulses, and that can be 10 times more efficient. That’s an example of how we take a shortcut and get more efficiency.
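To make the gate-versus-pulse distinction concrete, here is a minimal Python sketch of why compiling straight to pulses can beat gate-by-gate translation. Everything here is invented for illustration (the pulse parameters, the gate table, and the merging rule are not EPiQC’s or IBM’s actual toolchain): the point is simply that adjacent single-qubit rotations can be folded into one aggregate pulse instead of three back-to-back ones.

```python
import numpy as np

def gaussian_pulse(amp, duration, sigma):
    """Sample a Gaussian microwave-pulse envelope (arbitrary units)."""
    t = np.arange(duration)
    return amp * np.exp(-0.5 * ((t - duration / 2) / sigma) ** 2)

# Hypothetical per-gate calibration: each gate maps to one fixed pulse.
GATE_PULSES = {
    "x":  dict(amp=1.0, duration=160, sigma=40),   # full pi rotation
    "sx": dict(amp=0.5, duration=160, sigma=40),   # half (pi/2) rotation
}

def gatewise_schedule(gates):
    """Naive path: translate each gate to its own pulse, back to back."""
    return [gaussian_pulse(**GATE_PULSES[g]) for g in gates]

def direct_schedule(gates):
    """Direct path: fold a run of same-axis rotations into one aggregate
    pulse, the kind of cross-layer shortcut described above. Amplitudes
    add and wrap at a full rotation (amp 2.0 == identity)."""
    total_amp = sum(GATE_PULSES[g]["amp"] for g in gates) % 2.0
    return [gaussian_pulse(total_amp, 160, 40)]

gates = ["sx", "sx", "x"]            # sx.sx.x is the identity, up to phase
naive = gatewise_schedule(gates)     # three pulses: 480 time steps
direct = direct_schedule(gates)      # one (here, empty) pulse: 160 steps
print(len(naive), len(direct))
```

Real stacks expose this layer with calibrated waveforms rather than a toy lookup table (Qiskit’s pulse interface is one example), but the efficiency argument is the same: the pulse compiler sees rotation angles, not opaque gates, so it can merge and shorten.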
Yes, we noticed that. Anytime we just try to run things at a default setting, we get one level of performance, and then, when we try to go down to the pulse level, we could see the changes immediately. So, I think that will become more important. I tweeted a while back about Sandia Labs. They came up with Jaqal — Just Another Quantum Assembly Language. I don’t know if you guys consider yours just another quantum assembly language.
Most of the work that we do does not define another assembly language at all. The work that I described, we essentially skip the assembly language. In other work that we do, we just entirely change the way you think about the machine. A typical quantum machine works on binary information. You have quantum bits that are two levels, zero and one, and so we have done some work where we looked at the physics of many of these machines, and that allows us to represent three levels: zero, one and two. That’s what we might call ternary logic, or qutrits, and that essentially lets you get more out of every physical device that a machine already gives you, and so that also entirely changes the way that you think about how to do computation on a machine.
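The qutrit idea can be sketched in a few lines of numpy. The basis grows from {|0⟩, |1⟩} to {|0⟩, |1⟩, |2⟩}, the X gate becomes a three-way cyclic shift, and n devices carry 3^n rather than 2^n basis states (the matrices below are standard ternary-logic definitions, not EPiQC code):

```python
import numpy as np

# A qutrit lives in a 3-dimensional Hilbert space: basis |0>, |1>, |2>.
ket0 = np.array([1, 0, 0], dtype=complex)

# Ternary analogue of the X gate: a cyclic shift |k> -> |k+1 mod 3>.
X3 = np.array([[0, 0, 1],
               [1, 0, 0],
               [0, 1, 0]], dtype=complex)

assert np.allclose(X3 @ ket0, [0, 1, 0])        # |0> -> |1>
assert np.allclose(X3 @ X3 @ X3 @ ket0, ket0)   # three shifts return to |0>

# The payoff: n qutrits span 3**n basis states versus 2**n for n qubits,
# so the same physical device count carries log2(3) ~ 1.58x the information.
for n in (5, 10):
    print(n, 2**n, 3**n)
```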
I’m glad you brought that up, because two things caught my eye, where you mentioned qutrits and benchmarks. I wanted to talk about those. With the benchmarks, would that be what Pranav was talking about when he was on? He was saying that they were working on something to help benchmark systems. Would that be the same code that’s on GitHub?
We’re definitely thinking about how to, as systematically as possible, design benchmarks for a range of quantum platforms and quantum applications. There’s a very rich tradition of benchmarking in the classical computing community that we come from, and we’re trying to bring that to the quantum community, and one of the concepts is that each benchmark hits a set of features that they stress-test in a particular machine, and it’s important to identify those features and then come up with a suite of benchmarks that exercise different features in different ways so that you get the full range of advantages and disadvantages for different kinds of machines.
That’s very interesting. Yes, I remember, of course, the classic benchmarks. It would hit things like typical Office applications, graphics, all those sorts of metrics, but with these benchmarks, then, are you looking to see not just how it handles certain types of algorithms, but maybe also how the T1, T2 times, how those might impact performance, and which machines might be better suited to certain types of applications? Is it going to get that deep, to make predictive guesses as to whether one machine would be better than another at what kind of application?
At some level. There’s a bit of an art to it right now, since there aren’t a lot of quantum applications, but it will be looking to find well-motivated applications that stress machines in different ways, and then there is that connection that you can see that certain classes of applications care about certain things more than others. For example, you mentioned T1, T2 times. Some applications will care about — many will care about what the error rate is on operation, especially two-qubit operation. Some may care about the connectivity of the qubits, and some may care about how many qubits you have. The different machines out there have different advantages and disadvantages of how they scale these different things, and so, we’re really looking to try to map out the space and define the spectrum of features.
Yes, I think it’s important. Especially with quantum volume, when I was first exposed to it, I thought it was an interesting idea, but I don’t know if I love it so much anymore. It feels like, by tweaking certain things, you can get outrageous numbers. Like IonQ, right out of the gate, they were like, “Yes, four-million quantum volume.” It’s like, “Whoa.” The rest of us at the time were 32 moving to 64, and of course, Honeywell just announced that they’ve hit 512. So, how do you feel about quantum volume?
I think quantum volume was a very interesting metric in the early days of quantum machines. It had some nice things about it. It specifically stresses the connectivity of the machine, which is something that many benchmarks certainly back then did not do. I think that was a great thing, but quantum volume was really designed for the early days of machines, and it has an exponential in it that makes its numerical value really large, and so, as machines improve, that exponential starts making the metrics seem a little silly, and it’s a specific kind of benchmark. It doesn’t have this variety of stress tests that I think we need as we move forward.
At the end of the day, even though they were working with it for so long, IBM isn’t really touting quantum volume anymore as their new story. Witness what they predict for 2023: Condor, 1,000 qubits. They didn’t really talk about quantum volume; they went back to what everyone liked to report on in the early days. So, I wonder if there’s something new that we need to do. Just say, “How many qubits?” and then some kind of quality value or something that tells the whole story.
We definitely need to move forward, and if you think about it, benchmarking in classical computation followed the same evolution pattern, where in the early days, you could just look at the rate that you could run instructions or the rate at which you could run arithmetic or floating-point operations. But as things progressed and became more complex, we had to define different benchmark suites and then potentially come up with scores for well-known suites to summarise the ability of machines to run on those suites. But you really needed this diversity of benchmarks and, essentially, these different stress tests to really get at the proper use of the machine. The honest truth is that, as machines become complex, there’s no single number or even two numbers that you can use to characterise a machine.
Yes, I agree. I just know that, in my gut, I would never feel comfortable telling someone that IonQ’s machine is 65,000 times more powerful than an IBM one. I don’t think I can make that argument. That’s where it starts to feel pretty limiting.
Since quantum volume has an exponential built into it, to make the early improvements sound bigger, you should just take the log of that value to make it a little more realistic.
I agree. Pretty soon, it’ll be like, “Quantum volume of 10^700. Yes, we’re set. We still can’t crack encryption, but . . . .” It’ll be something like that.
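The log suggestion is simple arithmetic. IBM defines quantum volume as 2^n, where n is the largest n-qubit, n-layer square circuit the machine passes, so taking log2 recovers the linear-scale figure. Using the values quoted in this conversation:

```python
import math

# Quantum volume QV = 2**n for the largest passing n x n square circuit,
# so log2(QV) undoes the exponential and gives a linear-scale number.
for vendor, qv in [("32-QV machine", 32),
                   ("Honeywell", 512),
                   ("IonQ (claimed)", 4_000_000)]:
    print(f"{vendor}: QV {qv} -> log2 = {math.log2(qv):.1f}")
```

On the log scale, the "65,000 times more powerful" headline comparison (four million versus 64) shrinks to roughly 22 versus 6, which is closer to how the machines actually feel in practice.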
So, on the EPiQC site, they talk about a physical machine of 100 quantum bits expected in the next three to five years. I feel like we’ve pushed the envelope a little bit. I think we’re going to be a little farther, like IBM — their plans. IonQ’s plans seem pretty aggressive, too. They just went public, and they have some cool plans about that interconnect idea. Can we get these machines to work together? I believe in two years, we’re going to have way more than that. I think we will have about 1,000 qubits guaranteed, pretty much. Well, there are no guarantees, but I think we’ll be pretty close to that. Do you feel that 2023 is this tipping-point year — the year of advantage that we’re going to see?
We hope so. I will say that you point out a shortcoming of our academic project. That EPiQC blurb description that you looked at was written three years ago, when the project started, and so, when you say three to five years, yes, we are pretty close to 100 qubits right now, which is remarkably prescient, I suppose. We definitely hope that, soon, we’ll reach a tipping point. It’s easy — and I think that’s why in the road maps you see this — it’s easy to think about how many qubits we can get. It’s a little bit harder to think about how reliable they will be and how reliable the operations will be, how good the interconnect will be, but certainly, getting to 1,000 qubits will be very useful. And presumably, some of the vendors will get to a point where the operations and the connectivity are good enough to utilise those 1,000 qubits in a good way. Yes, I am very optimistic that in that time frame, we will get to something exciting.
Yes, that’s why I’m so excited about benchmarks in general. I love the idea. I would like to see something standardised that everyone starts to use. There’s a lot going on right now with IP. Everyone starts developing to a point, and then, they’re like, “No, this is ours.” Then they take it offline. It makes you wonder what’s going on behind closed doors — we don’t know what advances. Everyone’s basing on the same few papers that were written when all this started, and they just go from there with optimisation or machine learning. They’re just drilling in. Do you see any kind of more shared approach to development coming? Do you see any groups forming — alliances or whatever — that will try and push this industry forward a little bit more before we get to the closed-door secrecy that I’m starting to see already?
One way that the industry seems to cooperate fairly well is that the platform- and hardware-centric companies tend to collaborate well with the software companies because they’re in a little bit of a different business. Obviously, the hardware companies, in terms of their device technology, some of that is going to be confidential, and they refine their technologies in their own way. But I think that how we use those technologies, there is a fair amount of openness at the algorithms and software level that allows us to share a fair bit. There’s IP generated, but typically, even with software and algorithms that have IP associated with them, the field is still fairly academic, so you do see the papers eventually about those things, and that does allow you to build upon that work.
There is definitely a move toward commercialisation and some competition, but I do see still a fair amount of openness, and I think there are some hybrid areas in the level of error correction and mitigation, which sits in between software and hardware, and that field is probably somewhere in between where it’s somewhat open, but where it starts crossing into other characteristics, maybe it’s less open.
I know you guys are working a bit on programme verification and validation. I don’t know if you want to talk about that a little bit.
Sure. It’s a difficult area. First of all, I’ll start with the motivation. In classical computation, we have a lot of technology for formal verification of programmes and software, but typically, in the classical world, it’s viewed as a little cumbersome and difficult, and most classical methodologies resort to testing of machines and testing of software, rather than formally verifying very much of the hardware or software. The problem with quantum machines is that we’re trying to build machines of a scale that’s inherently more powerful than classical computers, which means it’s very hard to use classical computers to verify them or the software that runs with them because they’re hard to simulate. And so, at some level, we’re setting ourselves up for a problem in which we can’t test software and hardware very well for correctness, which means that there’s much higher value in terms of formally verifying things.
In fact, we’re going to be pushing the limits of technology and building machines that are fairly noisy and error-prone, and the first thing we’re going to do is, we’re going to run software and applications on those machines, and it would really help us to know, when we hit errors, whether the errors are coming from the hardware or from the software, and so we fundamentally are very motivated to solve this problem.
On the other hand, solving this problem completely is quite hard. In fact, if I could verify a quantum programme very well and that verification involved enough information that I could figure out what the answer of the quantum programme would be, then I would essentially be doing the same thing as the quantum programme’s computation. That means that if the quantum programme is solving something very hard, then the verification should be very hard, or, conversely, if the verification could be done in a short amount of time on a classic computer, then the quantum programme is essentially not doing anything useful.
It’s a very hard problem, but I will relate an experience we had, which was quite positive. We used some verification technology called an SMT solver, a satisfiability-modulo-theories solver, and we were able to verify not a quantum computation but IBM’s quantum compiler. We were actually able to find three bugs in the compiler, bugs that had been contributed by people through the open-source mechanism and have since been fixed. So, because it’s an open-source compiler, and it is a quantum compiler, which is hard to think about sometimes, it’s useful to have a verification mechanism to verify the things that people contribute.
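The actual work used an SMT solver to reason symbolically, but the correctness condition it checks, that a compiler rewrite preserves the circuit’s unitary up to a global phase, can be illustrated numerically in a few lines. The gates and the deliberately buggy rewrite below are toy examples, not the real IBM bugs:

```python
import numpy as np

# Standard single-qubit gate matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def equivalent(U, V, tol=1e-9):
    """True if U and V agree up to a global phase, which is the
    correctness condition a compiler rewrite must preserve."""
    # Fix the relative phase against V's first nonzero entry.
    idx = np.argmax(np.abs(V) > tol)
    phase = U.flat[idx] / V.flat[idx]
    return np.allclose(U, phase * V, atol=tol)

# A rewrite rule a compiler might apply: H.X.H -> Z.
assert equivalent(H @ X @ H, Z)
# And a buggy rewrite a verifier must reject: H.X.H -> X.
assert not equivalent(H @ X @ H, X)
print("rewrite rule verified")
```

An SMT solver proves the same identity over all inputs symbolically rather than checking one numeric instance, which is what makes it usable on compiler passes whose circuits are too large to simulate.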
With open source, the idea is, more eyes on code to spot bugs and things, but in this case, you needed nonhuman eyes on code to spot it.
That’s right. In particular, there are certain things that are more subtle in quantum machines that eyes on code isn’t necessarily going to easily be able to find, and it’s hard to test.
So, when Feynman first thought about quantum computers, he was thinking the only way to simulate the quantum universe is with a quantum computer, and now we’re at the point where the only way to validate that we’re simulating a quantum universe is with another quantum computer on top, in a sense. So, we just added one more layer here.
Yes. Ideally, in the future, that may be the case. Right now, we’re so early days that I don’t think we can imagine verifying a quantum computer with another one that’s just as unreliable. We’re really trying to develop the technology to do it with classical computers, but in a limited sense.
That’s interesting. It’s hard to imagine verifying once we blow past the ability to simulate. How are we going to be able to keep up? Something different is going to have to be done. We’re almost at the limit now. Probably, the machines coming online later this year or early 2022, I would say, we won’t be able to simulate. We’re going to be past that point. We’re already going to get to that point where you’d need to convert every molecule in our solar system to a computer to even try. So we’re not even close to having that kind of capability.
Is there anything being done more at the University of Chicago, not even necessarily associated with this joint project in this space? Is anyone working on research projects there to solve these problems, too?
Absolutely. EPiQC is actually just a piece of the very large efforts at the University of Chicago in the quantum space. EPiQC represents, at some level, the predominant software efforts in the country. When you look at emerging technologies and materials and quantum communication and quantum sensing, we have an entire school of molecular engineering that works on these things. In fact, in partnership with Argonne National Laboratory, which the University of Chicago manages, we lead one of the five Department of Energy National Quantum Initiative centers, and those are $110 million centers, which involve hundreds of researchers. So, the efforts in the quantum space are really substantial. In fact, Chicago is the only area that has two of the NQI centers — Fermilab, which is also managed by the University of Chicago, leads one of the other National Quantum Initiative centers of the same scale focused on quantum materials.
It’s really interesting to me. I’m in Manhattan, and I feel like every podcast I do, I’m going to end up mentioning Chicago. So much is going on there. It’s not by accident. We have this strong alliance with the Chicago Quantum Exchange, and there’s a lot going on there. I guess it’s nice to see the Midwest getting its props, so to speak, in becoming this — everyone used to talk about creating a new Silicon Valley, a Silicon Valley in the Midwest. Pretty soon, it’s going to be more important to be this, which is definitely not limited by what Silicon Valley was. Quantum’s going to be — obviously, I believe in it strongly, or I wouldn’t be doing what I do.
There’s definitely an intention in this area to produce a quantum industry much as Silicon Valley has done for classical computation. There is a strong need to train a new kind of scientist and developer that sits at the intersection of computer science and physics and materials, and here at Chicago, they have degrees in quantum engineering, which at the moment are a little bit more focused on the lower end of that, the physics and materials, with some work in systems. But with EPiQC, we’re trying to develop a computer-systems discipline for quantum computer systems, which I think is really a critical need.
As all of these companies in the industry move toward more substantial hardware platforms, they’re discovering that their shortage is in the software stack and how they make use of those platforms, and already, there’s a huge need for people trained in building software systems that are aware of and understand the physics of these platforms.
Yes, and I think that’s going to abstract out a little bit eventually. You had to understand computing to programme back in the early days — everything was so manual, going back to punch cards or whatever. One day we’re going to get to this point where we can have a really solid developer programme. Even if someone just wants a bachelor’s, they should be able, in theory, to take enough quantum courses and coding courses that they’re able to then contribute and adapt use cases to algorithms and be hirable after those four years. I’m already seeing it — we’re already seeing people just being pulled in who had no training. They’re just ramping up in one of the development platforms and moving ahead. Would you agree?
Yes, I think so. There’s definitely a need for that, but I’d also say that quantum computer systems, more than for classical systems, and even for longer than classical systems, will need some amount of understanding of the hardware and physics of the devices for the most advanced developers.
Like I said, whatever that programme is, it’s going to have to have key quantum pieces in there.
That’s right. I think for the next five to 10 years — probably more than 10 years — there will be a very strong need for people with some of that training to get the most out of these machines.
Are you going to be coming in and creating something new — a brand-new way to implement an algorithm — or are you going to be coming in and helping out with something that a company does regularly? That’ll be a big defining difference: Are you going to be that one that can really revolutionise, or just come in? Right now, there are plenty of software jobs where straight out of school, in theory, you can contribute — they’ll just hand you off some little piece of a very known environment, and you’re going to work on that. Whereas with this, right now, it’s still very much the Wild West. You come in, and you say, “How can we take one of these 60 algorithms and make you money within two years?” That’s the idea, and it takes a lot of careful thinking and planning to do that.
Yes, absolutely. Of course, there’s a range of people that will be needed. In fact, probably, the bulk of the people you need are just quantum familiar, or only somewhat quantum knowledgeable. But as the industry picks up, there is probably a fairly substantial number of expert developers that you’re going to need, which is a little bit like the early days of computing, which is where we are for quantum computation. It’s also the case that even classical computing is starting to reach physical limits, such that there are developers that need to be aware of things like errors even in classical computers.
Yes, quantum tunneling. Not to go back to quantum again, but in a classical computer, obviously, if you make the traces too small, that electron is not going to be where you need it to be to represent the one or a zero. It might hop the fence and be somewhere else.
Then, even more than the dynamic aspects of it, as you build very small classical devices, you get a lot of variation in the devices, so that the devices aren’t very consistent. You need to start accounting for a machine that has lots and lots of bits and transistors, but they’re not all good ones. It’s an economic issue. You basically can’t afford to only take the perfect chips, and so you have to build software that can account for imperfect chips, and that’s starting to look a little bit more like what we have to think about in the quantum space. Although quantum will progress, and we will scale and we will get much better machines, it’s a little bit hard to imagine a future in which quantum machines will be perfect. At some level, we will always build software that can accommodate for some of that noise and variation.
Well, depending on the week, the perfect quantum machine is the topological computer, and depending on the week, it either exists or it doesn’t. That’s what I’m finding. That’ll be the one we’re all waiting for here, but yes, even if that happens with classical machines, we’re not going to be at the ratio of error that we’re dealing with in quantum. You need a lot of physical qubits to get a good logical one. It’s hard to imagine classical being quite that off in terms of transistors.
In fact, maybe a little bit of a misconception is that just because we have quantum error correction doesn’t really mean that we can run a programme forever or that reliable qubits are free. The length of a programme that you want to run, and the number of qubits you need, really determine the overhead of quantum error correction. You still want to optimise as much as you can. An exciting direction is how this quantum error correction — some of the things that people have come up with in a fairly theoretical setting — how does that interact with some practical things you can do to make the error lower physically on a machine? I mentioned error mitigation. There’s an interface between error mitigation and quantum error correction, which we haven’t really figured out yet.
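The point that reliable qubits aren’t free can be sketched with the standard surface-code scaling heuristic, p_L ≈ A·(p/p_th)^((d+1)/2), where d is the code distance and roughly 2d² physical qubits back each logical qubit. The constants below (A, the threshold p_th, and the qubit count) are illustrative rules of thumb, not measurements of any machine:

```python
def required_distance(p_phys, p_target, p_th=1e-2, A=0.03):
    """Smallest odd code distance d whose estimated logical error rate,
    A * (p_phys / p_th) ** ((d + 1) / 2), falls below p_target.
    p_th and A are rough surface-code rules of thumb, not measured values."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

# Longer programmes need lower logical error rates per operation, hence
# larger distances and more physical qubits per logical qubit.
p_phys = 1e-3  # assumed physical error rate per operation
for steps in (1e3, 1e6, 1e9):        # programme length in logical ops
    d = required_distance(p_phys, p_target=0.01 / steps)
    print(f"{steps:.0e} ops -> distance {d}, ~{2 * d * d} physical per logical")
```

The overhead grows with programme length even though the physical error rate is fixed, which is exactly why pulse-level optimisation and error mitigation still matter after error correction arrives: every factor you shave off the physical error rate shrinks d for the whole machine.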
Well, I think that’ll be a future award in the making, right? Anything in that space.
I guess we’ll wrap up on that optimistic note. Thanks so much for joining. This was really great, and I’m sure I’ll see you at lots of CQE events going forward.
Absolutely. Yes, thanks for having me.
That does it for this episode. Thanks to Fred Chong for joining today to discuss EPiQC, and thank you for listening. If you enjoyed this show, please subscribe to Protiviti’s The Post-Quantum World and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You’ll find links there to what we’re doing in quantum computing services at Protiviti. You can also find information on our quantum services at www.protiviti.com or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum curious.