The holy grail of quantum computing in the near term is a use case that provides advantage. We believe it’s only a matter of time, but major companies have to plan and prepare now so that they are not left behind when these applications arrive. Join host Konstantinos Karagiannis for a chat about quantum computing use cases with Pranav Gokhale from Super.tech. We cover mostly financial customer approaches, and move into research and intellectual property woes, benchmarking quantum computers, and how to win a million dollars.
Guest Speaker: Pranav Gokhale, Co-founder and CEO at Super.tech
Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organizations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.
We’re all waiting for provable, practical quantum advantage. How long until companies can start lifting and shifting classical computing tasks to quantum workloads? What are some of the most promising use cases to focus on today? And how can you win a million dollars in the process? We’re talking use cases in this episode of The Post-Quantum World.
I’m your host, Konstantinos Karagiannis. I lead quantum computing services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of the post-quantum era.
Our guest today is Pranav Gokhale. He’s from a company called Super.tech. We’re going to focus today on near-term applications for quantum computing. What other reason would we do something like this? Why else would we have a podcast about the post-quantum world if we’re not going to look at ways that we could start benefiting as soon as possible? Welcome to the show, Pranav.
Super.tech is spun out of research from the University of Chicago, and we’re also incubated by the Argonne National Lab. We come out of the computer science department, so one might think that we’re just purely software folks. But we’re really paying a lot of attention to, like you said, the other layers, the hardware stack, especially with quantum, where it is today. When we have only 10 qubits and gate fidelities that are not 100%, we have to be very mindful of the interaction between the software and the hardware.
I think this is very similar to, say, the 1960s or the 1970s of classical computing, that is, anything that’s not in quantum. Today, we have the luxury that Python programmers can write their software without ever interacting with the device physics of transistors, but in the quantum world today, we have to be more like 1960s or 1970s programmers, where we have to manage memory, we have to manage the fact that there might be a bug — a literal physical breakdown in the electronics that could trigger problems in software.
Super.tech is a startup that’s trying to abstract all that away but also handle it. We’re reaching out to customers both in the hardware world who are trying to make their hardware behave better, and customers in the application sector in finance and energy who are trying to get more mileage out of quantum computers.
That’s right — it’s more like finding a literal bug, a grasshopper, in the machine while programming.
Yes. That would be an extremophile surviving in a refrigeration unit. So, how would you partner? Let’s say right now, the most common interface is this idea of quantum as a service on the cloud. Very few people are buying one of these machines. It’s this idea that you want to have a platform that people are comfortable using, and then they send their code to run on a simulator, or an actual machine, on the back end. Of course, the hope is, one day, that that gets abstracted — that you can just write code and it will reach out to quantum computers as needed on the back end. Where would you fit in with something like Microsoft’s Azure Quantum or Amazon’s Braket?
We are aiming to be very hardware-agnostic, and we view these services that provide access to a lot of platforms as very complementary to our approach. Just as one example, we’re partnered with IBM right now, and some other hardware companies, through the Department of Energy. The idea is that if someone is an energy company aiming to optimally allocate power generation for the next 12 hours, they don’t care if the answer comes from an IBM quantum computer or from a Rigetti quantum computer. They just want their answer.
The stack that we’re building is basically aiming to insulate customers from some of the nasty bugs, so to speak, of the underlying quantum hardware and give them an API endpoint that’s very application-centric — that is, find the optimal portfolio, find the optimal allocation of power-generation units to turn on and off every day. That’s the story of where we envision operating.
How do you make the decisions of what back end hardware? Do you then, based on certain types of applications, find that this machine works better than that machine? Let’s say IBM, you can go and literally see, “Hey, this qubit is a bad qubit. It’s less reliable.” So, certain machines just perform better in certain ways. Do you currently have anything like that set up where you decide what kind of hardware to reach out to?
Yes, that’s a good question, and in fact, we are launching something along this line this summer. Maybe to say more about where different hardware makes sense, a big divide right now in the quantum hardware world is between the superconducting machines, like the IBM hardware that you described, and these new trapped-ion machines. We take the point of view that it’s not necessarily going to be like a winner-take-all market. They have different strengths and weaknesses.
Most notably, superconducting quantum computers are very fast — they operate at megahertz, gigahertz speeds. That’s really important if the client demands very short turnaround times, if it’s latency-sensitive, say, in finance. On the other hand, they have the disadvantage that error rates can be a little bit higher than, say, on trapped-ion machines, which companies like Honeywell and IonQ are building. Those machines have advantages in terms of their qubits having very long lifetimes — they don’t degrade as fast as superconducting qubits. The catch is that they’re much slower.
So, that’s one dimension of many that informs our decision diagram as to whether to map an application to a particular type of hardware — let’s say, superconducting, or trapped ion — and in fact, there are a number of other hardware types that are upcoming, like photonics and neutral atoms. They all have these different tradeoffs. We think that ultimately, the end users shouldn’t really care about making this decision as long as it gets the job done. That’s one thing that we’re aiming to benchmark in the next couple of months.
Yes, that’s a great point. We’ve been wrangling with that a little bit in the last couple of days. We’re in the process of setting up our own cybersecurity and compliance-type assurances because of exactly this. What we do anticipate is that, at least in the pilot stages — to set real expectations — we’re not expecting any sort of quantum advantage for some of these applications this year, though I’m pretty optimistic about the coming years.
I think in the interim, we’re able to work with simulated data sets, and we also envision that there can be client-hosted solutions that still run in the cloud from the quantum side, but you can really obfuscate the code in such a way that the end hardware provider really never learns about any sort of personally identifiable information. That seems like one path, and of course, if it works out very well for a client, they might consider buying the hardware outright. In the interim, we’re planning on working in the next year or two years with mostly anonymized data, aggregate data and simulated data to generally pilot and show that there could be an advantage with quantum technologies.
Yes, the whole quantum advantage and supremacy idea is still a hot button. I like to joke around at the company that if you want to draw the ire and vehemence of the entire community, just say that you’re close to achieving something like that. You’re going to have people looking at you under a microscope. I guess, in this case, it will be a scanning electron microscope.
How close do you feel we are to quantum advantage? I have some ideas, but you see a lot of these companies — they put out these extrapolations. They say, “When we have this many features in a binary classification, you can see that quantum computers will easily handle it and scale up if you add a few qubits, but a classical computer will start to choke.” Have you done anything along those lines of extrapolation where you can hazard a guess as to when we’ll be seeing this advantage?
Of course, as you’re aware, this is the million-dollar question, and in fact, one of the quantum hardware companies, Rigetti, has literally put out a million-dollar bounty: Demonstrate quantum advantage on our hardware, and we’ll give you a million dollars. In fact, that’s probably an underpayment, because it would be such a big milestone, but we have some views on this.
Knowing that this is also an educational podcast, I might take a side step here to note that one of the things that I’m really excited about that has happened in the development of quantum algorithms is the invention of something called a variational algorithm. In particular, it used to be that back in the ’90s, the quantum algorithms were envisioned where you run it once and you get your answer. Instead, there’s this new breed of algorithms that are more adaptive and variational in nature, so you don’t just run it once. You run it dozens of times — in fact, probably millions of times.
It gives a lot of error resilience, and if I can give an analogy here, I started thinking of it this way: Imagine your thermostat at home for heating and cooling is broken in such a way that it still works — when you turn it right, it gets warmer, and when you turn it left, it gets colder — but the actual knob is mistuned. It says that it’s 80 degrees Fahrenheit, and it’s actually a comfortable 72. That’s something I’d consider very annoying, but it’s not the end of the world. You can still recalibrate your expectations so that if you want it to be very cold, it ends up showing, say, 70, which would seem comfortable, but actually, it’s colder, and vice versa. So, that’s the kind of resilience that a variational algorithm gives: The quantum computer itself can be miscalibrated, but it doesn’t really matter at the end of the day, because as long as you can turn the knob, and left is left and right is right, you just rebalance to a new point.
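The thermostat analogy can be sketched in a few lines of Python. This is a purely illustrative toy, not any real quantum SDK: a classical optimizer tunes a knob on a simulated device whose readings are shifted by a constant miscalibration offset, and because the offset moves every reading equally, the loop still converges to the right knob setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def measured_energy(theta, offset=8.0, noise=0.01):
    """Pretend hardware: the true landscape is cos(theta), but readings come
    back shifted by a constant miscalibration offset plus small shot noise."""
    return np.cos(theta) + offset + noise * rng.standard_normal()

def variational_minimize(steps=200, lr=0.1, eps=0.1):
    theta = 0.3  # initial guess for the knob
    for _ in range(steps):
        # Finite-difference gradient estimate: two "runs" per step.
        grad = (measured_energy(theta + eps) - measured_energy(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta % (2 * np.pi)

theta_opt = variational_minimize()
# The true minimum of cos(theta) sits at theta = pi; the constant offset
# never influenced where the optimizer ended up.
print(abs(theta_opt - np.pi) < 0.1)  # True
```

The constant offset cancels out of the finite difference entirely, which is the variational point: only relative changes in the readout steer the loop.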
Bringing us back to your question of quantum advantage, I think that with this variational-algorithm paradigm, there’s now a hope for having quantum advantage sooner. Before, the assumption was that we’d need, as people may have heard, error-corrected qubits. On that path, where we need error correction, it’s going to take millions of qubits to get to quantum advantage. That path will almost certainly happen, in my opinion, but I think it’s 10-plus years away. I do think that sometime in the next five years or so, if error rates keep plummeting the way they are for current quantum systems, we can get away with not having perfectly error-corrected quantum machines if we adopt approaches like these variational algorithms.
Obviously, predictions are hard to make, but I would venture, yes, that it’s very feasible in five years, with continuing hardware improvements and continuing code design improvements with the software, that there will be an inflection point where it’s more cost effective to run a problem on a quantum computer than on a classical computer.
Interesting. Of course, that will still be subject to the comfort level of the companies that want to take advantage of quantum advantage. Right now, there are companies that do massive amounts of trading, and like I said, they won’t even touch the cloud. It will be that limitation — how we can get them to a place where they can take advantage of this quickly enough to have it be worthwhile and still trust sending it out into the world, so to speak. That’s something we’re trying to move them toward.
That’s right. I was looking at a survey the other day that asked CEOs what kind of advantage they’d need to see before shifting to quantum. It is the case that a lot of companies, understandably, won’t shift until that advantage is do-or-die — say, 100x or something — and that first demonstration of quantum advantage is not going to be an eye-popping, do-or-die situation. It’s going to be like, “Oh, it’s 3% better.” But of course, in an industry like finance or energy, that kind of margin could still matter in a lot of cases. I think it will be a gradual but accelerating pickup when it does happen.
Yes, and there are a lot of ways to game the system — to come up with workarounds so you can get little pieces of advantage and, overall, make some kind of argument for still using quantum hardware now. It’s been a while, too — it’s interesting, because there have been papers published in this field since around, let’s say, 2015, trying to show that you can predict a benefit.
That’s always been an interesting idea, like some of the first papers published around annealing. They showed that in theory, we should be able to have a benefit, but they were still way far out. They were using just a handful of qubits and trying to extrapolate. I’m hopeful for things like trading execution or portfolio optimization — that these will come along sooner than later. What would you say is your favorite of the use cases to work on right now? I like to lump them into three big groups. I think of them as optimization, machine learning, and pricing and simulation for the financials, let’s say. Is there one that you find you’re particularly interested in — it’s your passion to see come to life first, something you’re secretly betting on?
Yes. I’d say those first two: optimization and machine learning. I like this categorization, by the way — it aligns with my worldview of quantum too. In particular, I view both of those as being selection problems in a way, unlike the simulation and forecasting models, where it’s more about calculating risks and things like that. With selection problems, it’s things like, you have a basket of 50 stocks and you want to make a long/short decision on each of them. Classically, of course, if you want to brute-force that, two raised to 50 is a very, very large number, and two raised to 100 is astronomical. We don’t expect that quantum computers are going to give an exponential speedup in these cases, because of this whole P-versus-NP fundamental question in computer science.
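The brute-force numbers quoted here are easy to verify with a back-of-the-envelope calculation. The billion-evaluations-per-second rate below is an assumption for illustration, not a measured figure:

```python
# Each of n stocks gets a binary long/short decision, so exhaustive search
# enumerates 2**n portfolios. Even at a generous billion portfolio
# evaluations per second, n = 50 already takes days, and n = 100 would take
# astronomically longer than the age of the universe.
EVALS_PER_SECOND = 1e9   # assumed evaluation rate
SECONDS_PER_YEAR = 3.15e7

for n in (20, 50, 100):
    configs = 2 ** n
    years = configs / EVALS_PER_SECOND / SECONDS_PER_YEAR
    print(f"n={n:>3}: {configs:.2e} portfolios, ~{years:.1e} years to enumerate")
```

At n = 100 the count is roughly 1.3 x 10^30 portfolios, which is why nobody expects brute force to ever be the answer, quantum or classical.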
We do feel pretty optimistic about these selection problems in the optimization sector — that is, given a basket of stocks, or a basket of resources, on which we want to make some binary or discrete decision, make the best choice you can within a reasonable amount of computation time. So, that’s an arena I feel very excited about. It’s fairly broad in its applications and, most importantly, with this selection nature, it’s natural to fit it into the language of qubits, where zero represents long and one represents short, or whatever for whichever industry we’re in. Of course, with quantum computing, we can explore the superposition.
In the machine learning sector — not quite a selection problem — there are these models called Boltzmann machines, and you can infer from the name that they sound very physics-oriented, but they were actually invented before quantum computers came to light. Now, there’s a lot of exciting research that suggests that even noisy quantum computers might be able to do a really good job of training these networks. So, those are two that I’m particularly passionate about. We’re actually working with both the government — the Department of Energy and the Air Force — as well as an initial customer that we’ll announce soon on development along both these lines. I’m also sure that Protiviti has been exploring interesting use cases in these spaces too, so perhaps, sometime afterward, we’ll trade notes.
Yes, definitely. That brings up an interesting point with trading notes. You’re straddling, let’s say, it might be a financial customer, it might not be. I’m not sure. How do they tend to feel about the work they are putting in, being available to the community?
I still think of quantum computing as a scientific advancement. I still put on that scholarly cap and think of it as something that’s going to benefit the world, and we should be sharing it as much as possible. But the reality is, in the end, you get these companies that say, “If I come up with a great way to do something like credit scoring, I would hang on to that. That gives me an advantage to pick the bad prospects out and get rid of them. It might make my company do better.” So where are your customers right now? What stage are they at in the altruistic thought process here? Are they thinking along the lines of sharing it all, or do they very much want to keep it to themselves?
We ourselves, and our customers, wrestle with this question as well. I think broadly, in the next two years, it’ll still be fairly, quote-unquote, precompetitive — that is, there is, as you point out, some altruism. It’s good to share results, and it’s good for advancing the field. We can build off each other. That being said, our customers and we ourselves actually have a selective desire to patent when there’s some sort of secret sauce that we will still publish but also want some IP around.
It’s naturally difficult to make an exact assessment of where that line is drawn, but in general, we, coming from an academic environment, are fairly pro open source and pro sharing with other researchers. We try to find customers who at least want to release ideas — who may still want to secure patent rights but will still put out the papers — as opposed to taking a purely trade-secret type of approach. That seems to be a happy medium in some regards, and in other cases, we’ll go completely open source and open with ideas that are hard to capitalize on anyway. I do think in the next, say, five years, that balance will shift a little more toward being very closed. In some sense, machine learning has followed similar parallels, but I think there will always be room for precompetitive work through research — collaborations with the government research labs at the Department of Energy, at universities and so on.
I wonder if we’re going to see a drop-off in the more scholarly papers soon. Will all those guys and gals get snatched up? Will they be pulled into projects, and, all of a sudden, it becomes very hush-hush? I don’t want to see it get to the point where the only papers being published are rehashes of things from five years ago, and that’s where everything open source is stuck, and it’s all secret, because right now, they are all pulling from the same papers, right? If you think about it, all these IP applications are still pulling from the same maybe 60 papers that we’ve all read.
Yes, that’s right. Maybe I’ll comment on another dimension here that I anticipate changing in the next few years: Right now, there’s not much, say, federal interference if we publish our research, but there is a broader fear about the geopolitical aspects of this. Other countries are oftentimes taking this research, implementing it and not sharing it back, in some sense. I know that there is discussion about export controls in quantum. There is both a fear — a very rational fear — and hope about what this will mean in terms of academic publication. There may be a clearance approach where, before publishing, there will be some sort of check from the government. I think that’s still a couple of years, five years, down the road, but it’s something that could happen, especially given the relevance of quantum to international security.
It’s fascinating, because anytime you want to give an analogy for where we are in quantum, you end up having to evoke different decades in one conversation. It’s like we’re in the supercomputing era of the ’60s, but in the crypto cypherpunk movement of the ’80s, and in machine learning of maybe five years ago. You just keep having to bounce around from view to view. If we have all those things to build on, all those past experiences, are we going to make the same mistakes again, or will quantum come in with a new set of eyes? It’s hard to say. I would hate to squash the industry because of silly old thinking — outdated thinking.
Yes. We’re also in, like, the 1700s with respect to training of a quantum workforce, so that’s a priority of the government.
The 1700s — I like that.
Still a little behind.
Right. The Greek Antikythera mechanism.
Yes — just to throw in the late 1800s too. Yes, that’s fascinating stuff. We’re going to be doing a workshop where we’re going to try to pull together a bunch of financial customers, and we’re going to see how much they’re willing to share in an environment where they’ll say, “Hey, I would like to see this. I would like to see that.” A brainstorming session — we call it design thinking. It will be fascinating to see. How much will they share? Are they comfortable sharing? I hope to learn a lot about where their minds are in these things, because the more I talk to customers, the more I see that they consider things proprietary that sometimes you don’t even realize.
Yes. I can see where they’re coming from. It’s certainly something that took me a while to understand how sensitive especially customers in finance can be about these things. It is encouraging to see that they’re not complacent about the ramifications that quantum is going to have on their industry — and hats off to Protiviti, by the way, for organizing this workshop. I think I’ll be attending that one too, by the way.
Oh, cool. Great, yes. We hope to do more with other sectors too, like healthcare, maybe other industries, so it will be fun. Hopefully, a good lessons-learned day.
What do you think about surprises in the industry? I feel like we’ve already had some. If you had asked me a couple of years ago how I thought the next two years would go leading to this point, I feel like in some ways, we’ve exceeded my expectations. I didn’t expect there to be so many players in the quantum-as-a-service marketplace already, or such a push to make quantum just another piece of the app-development puzzle. I just wonder, what do you think? Do you see anything coming soon that could shock us all? Obviously, there’s topological quantum computing — the thing we’ve been waiting for forever. Will it happen? Won’t it happen? That could change everything overnight. But is there anything scratching at the back of your mind that might just change it all?
Yes — a good question. I’ll reminisce and say that I was giving a talk in, say, the summer of 2019 to a bunch of MBA students about when quantum supremacy might happen. I had said, “It might happen in the next 10 years or something.” Then, it actually happened a few months later. That was one surprise where I was like, “Wow.” Because of algorithmic innovations coupled with the existing hardware, we were able to hit milestones much faster than expected. That was a really exciting point for me. It’s actually one of the impetuses for deciding to spin out our research into a company.
That’s retrospective, but looking ahead, one thing I often wonder about is that the early adopters on hardware — the Googles, Rigettis and IBMs, say — have all modeled their machines on what’s called the transmon superconducting qubit. It is a really good way to build a superconducting quantum system, but it was invented in 2007, and since then, better designs have come out. They require more control electronics and they’re more expensive, but ultimately, when we’re trying to get error rates down, there’s this exotic qubit type called the zero-pi qubit. It’s starting to appear in some research.
It’s still not competitive with what, say, IBM has — a 50-qubit scale — but I’m looking forward to surprises in terms of what new hardware could come out that totally shocks everyone. It’s not, obviously, the topological qubit — that Holy Grail could still come out one day — but more intermediate outcomes, where there are better ways of building superconducting qubits that are not on the current path of the big companies like IBM, Google and Rigetti but are in labs right now and really have a lot of potential.
To give a short answer, I’m excited about hardware that people are not really paying attention to at the scale of the big companies but that research labs are really starting to make breakthroughs with. That’s what I’m super excited about, surprise-wise.
Yes, that’s great. This idea of machines scaling up — a new technology can make that much easier, obviously, if you get lower error rates, etc. But every once in a while, a company will make some claims, and it takes a while to validate them. Whenever some of these surprises come up, you always wonder, will they follow through? Will they map it out? IBM made some pretty impressive claims recently. In their road map, they talked about software and hardware: Within a couple of years, we’re looking at over a thousand qubits, and the way they’re going to have their Qiskit environment working, they claim, hopefully, a 100-times increase in performance in applications. Do you have any reason to doubt their progress, or how is that looking to you?
Yes, I definitely believe that IBM will put 1,000 qubits on a device. I think the question that really matters is, will they be 1,000 useful qubits? In fact, it’s never been deeply challenging to just add a lot of qubits to a system — the challenge is adding qubits without degrading quality. I don’t doubt that they could reach 1,000 qubits. I think it will be an immense but attainable challenge to do it while maintaining and, in fact, lowering the error rates. That will be pretty exciting.
I think that they have a good road map, technically, for doing it. We’ve been collaborating with some of their engineers around the pulse-level control of their quantum chips, as well as their algorithmic innovations in Qiskit. I’m pretty optimistic about the IBM road map. In the past, they’ve actually done better than their projections — I think they were forecasting a doubling of quantum volume every year, and in the last year, they did two doublings. It’s still not the order of magnitude I hoped for, but it’s better than what they promised. I’m pretty on board with the IBM road map. It seems realistic. I’m very excited to see what happens in the trapped-ion world in parallel.
IonQ, one of the companies that’s been out there, has, allegedly, a quantum volume of two raised to 32, or something very, very large, so I’m excited to see how that will pan out. They haven’t shown an official demonstration, which I think is why it’s really important for other people to develop benchmarks. In fact, stay tuned — we have a product launch around this in a couple of months. But really, we need an honest way to compare different hardware platforms and assess whether a company’s claims in the press release really manifest in terms of end-user applications.
A benchmark is really exciting because for a while, I’ve been thinking that quantum volume is probably dead. It just feels like with a little bit of tweaking, you can get a six-qubit machine that claims a quantum volume of four million. To me, that just doesn’t feel right. I’m not sure that we’re measuring it correctly — at the very least, in a way that’s not confusing to people in the industry. Some kind of benchmarking that shows how everything is taken into account would be very welcome, in my opinion.
We’re going to aim for application-centric benchmarks, because quantum volume is one way of building a circuit where it’s very square in shape — that is, the runtime of the circuit is equal to the number of qubits — but practical applications are going to be much more heterogeneous. There will be some that have a lot of qubits and short run times, and vice versa. We’re going to try to be much more application-centric, and there are similar efforts out there among other companies, so hopefully, by the end of this year, we’ll be able to rely more on third parties to evaluate claims and hardware, as opposed to the press releases from the big players every year.
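For context on the square-circuit point: quantum volume is conventionally defined as 2^n for the largest n at which a device passes random n-qubit, depth-n circuits with heavy-output probability above two-thirds. Here is a simplified, illustrative sketch of that scoring rule; the pass rates below are made up, and the real protocol also requires statistical confidence bounds:

```python
# Toy heavy-output probabilities for square circuits of size n (invented
# numbers, for illustration only). A size passes if its probability
# exceeds the conventional 2/3 threshold.
heavy_output_prob = {2: 0.85, 3: 0.80, 4: 0.74, 5: 0.69, 6: 0.61}

passing = [n for n, p in sorted(heavy_output_prob.items()) if p > 2 / 3]
quantum_volume = 2 ** max(passing) if passing else 1
print(quantum_volume)  # 32
```

Because the score grows exponentially with a single number n, it says nothing about wide-and-shallow or narrow-and-deep workloads, which is exactly the heterogeneity an application-centric benchmark tries to capture.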
That’s really great. It’s a bit of a surprise. I didn’t know you guys were working on that. I guess we could start wrapping up with that very optimistic note. Is there anything you’d like to let our listeners know about before we close? Anything coming up maybe sooner than the benchmark?
Absolutely, yes. The Chicago Quantum Exchange has been such a priceless partner here for my company, definitely.
I should also add P33. You and I are both part of the Quantum Fellows of this organization to bring more quantum industry in Chicago.
Yes. I spend a good amount of time with those folks every week — virtually, of course, these days. If anyone wants to learn more about your company, Super.tech, that’s the easiest URL you’ll ever have to remember. Thanks again, Pranav, for coming on. This was great. I really appreciate it.
Great. Nice to catch up, Konstantinos.
That does it for this episode. Thanks to Pranav Gokhale for joining today to discuss Super.tech, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on Twitter and Instagram at KonstantHacker — that’s “konstant with a K, hacker.” You’ll find links there to what we’re doing in quantum computing services at Protiviti. You can also find information on our quantum services at www.protiviti.com, or follow us on Twitter and LinkedIn. Until next time, be kind, and stay quantum curious.