# Transcript | Quantum-Inspired Approaches for Advantage Today with Multiverse Computing

As NISQ-era quantum computing improves, we’re on the cusp of practical advantage within a couple of years. But some companies want to see impressive performance today. That’s where quantum-inspired solutions can provide up to triple the power, all on classical hardware. Learning quantum-inspired programming could even help coders migrate to real quantum computers in the future. Could these approaches even improve ChatGPT and diffusion-based image AI such as Midjourney or DALL-E? Join host Konstantinos Karagiannis for a quantum-inspiring chat with Roman Orus from Multiverse Computing.

**Konstantinos**

We’re on the cusp of practical quantum advantage within a year or two, but some companies want to see impressive performance today. That’s where quantum-inspired solutions can provide up to triple the power, all on classical hardware. Could these approaches even improve ChatGPT and stable-diffusion AI? It’s a quantum-inspiring chat in this episode of *The Post-Quantum World*. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.

Our guest today is the cofounder of and chief scientific officer at Multiverse Computing, Roman Orus. Thanks for being on the show.

**Roman**

Thanks.

**Konstantinos**

Could you tell our listeners about what got you interested in quantum, and how you found your way into this world?

**Roman**

Yes. It goes back many years, actually. I became interested in quantum during my physics degree. I’m originally from Barcelona, and this was around 20–25 years ago, more or less, when I was doing my undergraduate at the University of Barcelona. Quantum mechanics was one of the hardest subjects I had to study, and I had a fantastic professor teaching the lectures: Rolf Tarrach.

The guy was so passionate about quantum mechanics that it got me hooked. I got extremely interested. That’s why I decided, “This is what I want to do.” When I finished my undergraduate, I had two or three options for a Ph.D., but it was pretty clear to me that I wanted to do one on quantum information and quantum computing. That’s how everything started. That’s how my love for quantum mechanics began.

**Konstantinos**

Yes. So you had a really good professor. It’s funny. Sometimes, in physics courses or degrees, they start you out with things that are so not interesting. Sometimes it takes the right person. Every class should start with quantum, frankly.

**Roman**

It’s always the professor. The professor makes all the difference.

**Konstantinos**

Yes. I’m glad you got here. Of course, Multiverse is no stranger to the show. You’ve come on before. We’ve had guests come talk about some work we’ve done together.

Today, I wanted to bring the company back because this year, we have a chance at making a big splash in the world with this idea of quantum-inspired computing. As companies look for ways to take advantage of quantum techniques but feel that maybe the machines aren’t there yet, this is a good hybrid approach. Can you share what quantum-inspired means?

**Roman**

Broadly speaking, these are classical algorithms. They run on classical machines. They run on your laptop. They run on CPUs. They run on GPUs, on high-performance computers and on clusters. These are algorithms and methods that are new. They are called quantum-inspired because, essentially, they are inspired by how a quantum computer processes information.

This is an interesting by-product, or spinoff, of quantum computing. All these years of research and development in quantum algorithms and quantum computation have given us, of course, very nice quantum algorithms, but as a by-product, they have also improved classical algorithms. These are precisely the quantum-inspired algorithms: methods that are classical, but new, and inspired by how a quantum computer processes information. We can mimic that somehow on a classical computer, and it turns out that these mimicking algorithms, in many cases, outperform everything we have done so far with a classical computer, or they can improve the standard algorithms we currently have.

There are different families of algorithms here, obviously. Some people call these algorithms physics-inspired rather than quantum-inspired — that’s also a valid terminology. People have developed different things here — for instance, digital annealing. This is called quantum-inspired. Microsoft also has something called the quantum-inspired optimisation toolbox, a set of classical algorithms for optimisation.
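Digital annealing and QIO-style tools are, at their core, classical heuristics for quadratic binary optimisation. As a minimal, purely illustrative sketch — not any vendor’s actual solver — a simulated-annealing loop over a toy QUBO (minimise x^T Q x over binary vectors x, with made-up numbers) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def qubo_energy(Q, x):
    # Cost of a binary configuration x under QUBO matrix Q.
    return x @ Q @ x

def anneal(Q, steps=5000, t_start=2.0, t_end=0.01):
    n = Q.shape[0]
    x = rng.integers(0, 2, n)                 # random binary start
    best, best_e = x.copy(), qubo_energy(Q, x)
    for k in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (k / steps)
        i = rng.integers(n)
        x_new = x.copy()
        x_new[i] ^= 1                         # propose a single bit flip
        dE = qubo_energy(Q, x_new) - qubo_energy(Q, x)
        # Metropolis rule: always accept downhill, sometimes uphill.
        if dE < 0 or rng.random() < np.exp(-dE / t):
            x = x_new
            e = qubo_energy(Q, x)
            if e < best_e:
                best, best_e = x.copy(), e
    return best, best_e

# Toy instance: holding exactly one of the two bits is optimal (energy -1).
Q = np.array([[-1.0, 2.0], [2.0, -1.0]])
x, e = anneal(Q)
```

The cooling schedule and single-bit-flip proposal are the simplest possible choices; production solvers differ substantially, but the accept/reject structure is the shared idea.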

Then, there are also tensor-network algorithms. Tensor networks are not new in physics. We call them quantum-inspired, but these methods and algorithms have been around for a while. They come from the study of complex quantum systems: What happens when you have a quantum system with many particles instead of just one? Imagine a solid where you have 10^23 atoms in a crystal, and you want to understand the quantum correlations inside it.

It turns out that the correct mathematical language to describe those quantum states is something called tensor networks. That has been there for a while. People realised that with these mathematical structures, we can develop and implement numerical algorithms for simulating matter and materials. Then, more recently, people realised that these mathematical structures show up not only when you want to describe a complex quantum system — they also show up when you want to describe systems in finance or in manufacturing and so on, because you end up with loads of variables that are highly correlated with each other, and it’s the same type of mathematical structure.
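The tensor-network idea Roman describes can be sketched in a few lines: a many-particle quantum state (2^n numbers for n qubits) is factored into a chain of small tensors — a matrix product state, also called a tensor train — by repeated reshape-and-SVD. This is the generic textbook construction, not Multiverse’s code:

```python
import numpy as np

def state_to_mps(psi, n, chi=None):
    """Split a 2^n state vector into n tensors of shape (left, 2, right).
    chi, if given, truncates the bond dimension (the compression knob)."""
    tensors = []
    rest = psi.reshape(1, -1)
    for _ in range(n - 1):
        # Pull one physical (qubit) index to the left, then SVD.
        m = rest.reshape(rest.shape[0] * 2, -1)
        u, s, vh = np.linalg.svd(m, full_matrices=False)
        keep = len(s) if chi is None else min(chi, len(s))
        tensors.append(u[:, :keep].reshape(-1, 2, keep))
        rest = np.diag(s[:keep]) @ vh[:keep]
    tensors.append(rest.reshape(-1, 2, 1))
    return tensors

def mps_to_state(tensors):
    """Contract the chain back into a full state vector."""
    out = tensors[0]
    for t in tensors[1:]:
        out = np.tensordot(out, t, axes=1)  # join right bond to left bond
    return out.reshape(-1)

# 3-qubit GHZ state: strongly correlated, yet bond dimension 2 suffices.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
mps = state_to_mps(ghz, 3)
reconstructed = mps_to_state(mps)
```

The point of the decomposition is that for many physically (or economically) relevant correlation structures, small bond dimensions capture the state, so the chain is exponentially cheaper than the full vector.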

**Konstantinos**

There are different types of quantum-inspired solutions, obviously. Are any of them conducive to building skills that would prepare someone to program a quantum computer in the future? Would any of these be better stepping stones? You could make the argument that if someone has experience with classical machine learning, they might do better with quantum machine learning once they learn the differences — say, the difference between SVM and QSVM.

So would you say that any of the quantum-inspired approaches would prepare someone who is studying coding to be a better quantum coder in the future?

**Roman**

Yes. Tensor networks, for instance, are something that will prepare you to be a better quantum coder in the future. With tensor networks, you can do lots of things: You can improve, let’s say, standard classical machine learning, but you can also mimic the behavior of a quantum computer — you can simulate quantum computers using tensor networks, for instance. The terminology, the jargon, the type of operations that you implement and so on are essentially equivalent. You need the language of quantum information to understand what is going on in tensor networks, and that’s exactly the language you need to program a quantum computer. It’s like two faces of the same coin. One is classical, the other is quantum, but the language is pretty much the same.

**Konstantinos**

There’s this constant interplay between tensor networks and quantum. I’ve even seen quantum-computer simulators running on TPUs that were more efficient than the same simulators running on CPUs.

Let’s look at some of the use cases — what you would actually do with quantum-inspired. I’m sure our listeners are familiar with the idea of portfolio optimisation. In fact, the first time Multiverse came on, it was to talk about portfolio optimisation. How would you use quantum-inspired? I know you’ve gotten some good results with dynamic portfolio. Can you talk about that use case so people can understand the difference?

**Roman**

With quantum-inspired methods, you can do many things, in particular with tensor networks. In the context of complex quantum systems, people have been using tensor networks for many years to solve optimisation problems in physics. Those were typically problems where you have to compute the lowest-energy state of a material or a molecule, or compute an interesting phase of matter — let’s say, a topological state or something like that. There are also people doing quantum chemistry who have been doing calculations with tensor networks.

Now, those are nothing but optimisation problems. In industry, we also have optimisation problems to solve. With portfolio optimisation, what we needed to do was apply some of the methods that people have been using over the years in physics and specifically adapt them to this problem. Several years ago, that’s the first thing we did with BBVA: a project on portfolio optimisation with quantum and quantum-inspired methods — dynamic portfolio optimisation.

This is a problem in finance where you have a set of assets, and you want to know, at every step in time, how to invest and divest across all these assets depending on how the prices are fluctuating, so that at the end of a period of time, you maximise the returns and minimise the risk. Say you have a pension fund and you are investing in the stock market — you want to maximise the returns by the time you retire. That is the type of problem we are talking about here.

This is a canonical optimisation problem in finance — a textbook problem — and it’s actually intractable. Mathematically speaking, it’s one of those problems that is very difficult to solve efficiently with a classical computer. All the algorithms that we have either fall into local minima or are suboptimal; it takes a lot of time to find very good solutions. The algorithms we have on classical computers really struggle.

Now, we attacked this problem using quantum and quantum-inspired methods together with BBVA. They provided real data directly from Bloomberg over a period of one year — a lot of data. We solved it using tensor networks and quantum-inspired methods: We adapted an optimisation algorithm based on tensor networks to this specific problem. There are many things one could do there, but we chose one of the simplest, and it worked and provided very good results.

As of today, it is working so well that part of this is already implemented in a Multiverse product called Singularity. Part of it is Singularity Portfolio Optimisation, a suite that includes portfolio optimisation algorithms we have fine-tuned for this problem — quantum annealing, but also quantum-inspired methods and, in particular, tensor networks. It’s a very good solution. We are beating some of the standard classical tools for this problem — some of the most common ones.
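To make the dynamic-portfolio objective concrete, here is a deliberately tiny model with illustrative numbers — not the Singularity algorithm: binary holdings per asset per time step, scored by return minus a risk penalty minus a rebalancing cost. At this size brute force works; tensor-network optimisers target the large instances where it doesn’t:

```python
import numpy as np
from itertools import product

# Toy setup: 3 assets, 2 time steps, hold (1) or don't hold (0) each asset.
returns = np.array([[0.05, 0.01, 0.03],   # expected return per asset, t=0
                    [0.02, 0.06, 0.01]])  # t=1
cov = np.diag([0.02, 0.05, 0.01])         # simplified (diagonal) risk model
lam, nu = 0.5, 0.01                       # risk and transaction-cost weights

def objective(x):
    # Return minus risk penalty minus cost of rebalancing between steps.
    ret = sum(returns[t] @ x[t] for t in range(2))
    risk = sum(x[t] @ cov @ x[t] for t in range(2))
    churn = np.abs(x[1] - x[0]).sum()
    return ret - lam * risk - nu * churn

# Exhaustive search over all 2^6 holding patterns.
best = max((np.array(bits).reshape(2, 3) for bits in product([0, 1], repeat=6)),
           key=objective)
```

The rebalancing term is what makes the problem "dynamic": decisions at different time steps are coupled, which is exactly the kind of chained correlation structure tensor-network methods are built for.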

**Konstantinos**

I saw the numbers. It looked like a 50% profit increase and a 33% volatility reduction, which, at a glance, sounds staggering. That’s a noticeable difference. The chart becomes something that makes your eyes pop. Have you directly modified this for a different kind of use case?

**Roman**

Yes. The good thing about having an optimisation algorithm is that optimisation is a universal problem. The use case is the upper layer; the engine — the core of what is happening — is the optimisation algorithm. Once you have a well-developed, fine-tuned optimisation algorithm, you can apply it to any problem you want. You can apply it to problems in finance, like portfolio optimisation, but then you can translate it into whatever industry you want.

For instance, we have also been applying quantum-inspired optimisation methods to things such as the optimisation of energy markets. That’s something we have been doing with some energy producers here in Spain, and it’s also one of those computationally intractable problems. The energy market is extremely complicated in that people are not just consumers of energy. With electric batteries and solar panels on their roofs, they are also producers. They can sell energy to the market and buy at a different price. When you have to coordinate all of this for a city of two or five million people, it’s a huge mess.

It turns out we did it with quantum-inspired optimisation, and this is a good example of an optimisation problem that we have been solving with quantum-inspired tensor networks. Apart from that, we have been using these methods for other things — as a basis for quantum-inspired machine learning in different contexts of artificial intelligence, for instance. I could go on. We also have examples in manufacturing.

**Konstantinos**

Of course. As with the energy example, you’re dealing in big numbers — there are obviously a lot of households using this energy. In any optimisation, 1% can equal a million bucks. It’s a huge difference, so it’s important to note that. Sticking with financial use cases for a bit, I know another popular one is option pricing. Can you talk about what that is, too?

**Roman**

That’s a different use case where we were working with quantum-inspired methods. Option pricing is the problem of pricing complicated financial objects. There are complicated financial objects called derivatives — contracts that depend on the option to buy or sell something in a given period of time. It’s a complicated financial instrument, but people use them for investing. There is a huge problem in finance, which is how to give a price to these contracts. There are even Nobel Prizes in economics for this.

The way people solve these problems nowadays, with classical computers, is using huge Monte Carlo simulations, dealing with lots of probability distributions, stochastic variables and so on. There is now also a trend to use deep learning — techniques based on artificial intelligence where a particular type of structure learns how to perform a particular task. When it comes to option pricing, there are different deep-learning algorithms where you can use neural networks to solve the pricing problem.

Now, it turns out that a bank came to us and asked, “What do you think about this problem? Do you think we can solve it?” We started discussing with them, and in the end, we did a project with them on derivative pricing in finance using quantum-inspired methods.

Now, internally at the bank, they had a deep-learning strategy for solving this that was very efficient. We improved it using tensor networks, which are quantum-inspired. We developed a new type of classical algorithm, inspired by quantum, that mixed neural networks with tensor networks. We were able to identify the inefficiencies of the neural networks and improve them using tensor networks. This turned out to be a huge improvement. We could improve the calculation time by a large factor, and the precision of the calculation and so on, to the point that the people at the bank were so happy that they decided to implement this into production. It worked out very well.
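One common way tensor-network ideas enter a neural network — and a plausible reading of the parameter reductions discussed in this episode, though the bank project’s exact method isn’t specified here — is replacing a dense weight matrix with a truncated low-rank factorisation, the simplest member of the tensor-train family:

```python
import numpy as np

rng = np.random.default_rng(1)

# A 64x64 dense layer whose weights happen to have low rank (rank 8),
# standing in for the redundancy real trained layers often exhibit.
W = rng.standard_normal((64, 8)) @ rng.standard_normal((8, 64))

# Factor W ≈ A @ B via truncated SVD; r is the kept rank (a modelling choice).
u, s, vh = np.linalg.svd(W, full_matrices=False)
r = 8
A = u[:, :r] * s[:r]                      # 64 x 8 factor (columns scaled by s)
B = vh[:r]                                # 8 x 64 factor

dense_params = W.size                     # parameters before: 64*64 = 4096
compressed_params = A.size + B.size       # parameters after: 2*64*8 = 1024
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
```

Here the layer is exactly low-rank, so the 4x parameter cut is lossless; in practice one trades a small accuracy loss for the memory and speed gains, and deeper tensor-train factorisations push the same idea further.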

This is an example of a different type of quantum-inspired algorithm — not optimisation, but machine learning with artificial intelligence. You know, there is all this buzz about neural networks, deep learning, generative AI and so on — you’ve surely heard about ChatGPT nowadays.

All these algorithms can be improved in precision and performance with quantum-inspired methods. This is my way of seeing it: Artificial intelligence — or at least neural networks, deep learning and so on — relies on algorithms that were invented in the ’40s. People were not running these algorithms in the ’40s because they didn’t have the machines you need to run them. We have the machines now, 80 years later, but the algorithms are from the ’40s. In those 80 years, we have come up with lots of developments that we should be implementing, and that’s exactly what we are doing now.

**Konstantinos**

When you say faster, it’s a lot faster. One example: Classical deep learning took 1,950 steps; the quantum-inspired approach took 660. That’s a 3x improvement in speed. That’s important to point out — and efficiency went up too. The number of parameters went from 1,050 to 250. That’s crazy.

**Roman**

That last one is about memory, and it’s important, because people always talk about time. Memory matters because the less memory you consume, the less energy you consume — you have an electricity bill to pay. Deep-learning algorithms are very energy-consuming; they are not green. What we are saying here is that quantum-inspired technologies are also green technologies.

It’s also important that they reduce memory because in many applications — perhaps not in pricing, but, say, when you want to deploy an artificial intelligence system on a satellite — you have to deploy directly on the device itself. In that case, it’s critical that the algorithm you deploy consumes the least possible resources. If you have an algorithm that consumes 10 times less memory, it’s much better for deploying directly on-premises.

**Konstantinos**

Going back to ChatGPT for a second, how would you improve that using tensor networks? How would this help?

**Roman**

I would ask ChatGPT how to do it. I would have to see. ChatGPT is based on a particular type of neural network. I am not an expert on ChatGPT’s technology, but it’s a language model built on something called transformers. Transformers are a particular type of deep-learning architecture that is particularly well-adapted to language. In the same sense that convolutional neural networks work very well for images and can be improved with tensor networks, transformers are well-adapted to language.

Now, it turns out that you can also improve transformers with tensor networks. There are proposals out there — people have been publishing papers already — and I’m pretty convinced that it’s possible to improve the memory, performance and so on, in the same way you can improve standard neural networks. In my opinion, whatever algorithm they are using for ChatGPT will be possible to improve using quantum-inspired methods.

**Konstantinos**

Do you think the same holds true for stable diffusion and the image-generator AIs that are so popular?

**Roman**

Yes.

**Konstantinos**

That’s pretty exciting stuff. Look at that — an extra reason for people to start learning about quantum-inspired.

**Roman**

As of today, I haven’t seen a single example of an AI or machine learning algorithm that could not be improved somehow with quantum-inspired.

**Konstantinos**

That’s a huge statement.

**Roman**

There is a reason for that: These quantum-inspired tensor networks have a history. People discovered them in the context of complex quantum many-body systems, but they have been rediscovered every 10 years in different contexts — in mathematics, in gravity and so on. The mathematical formalism is very clear, and you understand very well what is going on. Now, that’s exactly the problem one faces with machine learning algorithms: Neural networks work, but you don’t know why they work. You don’t know how a deep-learning neural network is learning at all. This is the problem of explainability: You cannot explain what the hell is happening.

In this sense, quantum-inspired tensor networks offer not all of the solution, but at least part of it, because many of the things that are going on with these methods are formally justified in the language of quantum information. There are things like mutual information, entropy and so on. Over decades, we have produced theorems about this. We know exactly what happens, and this is the reason we can improve AI.

**Konstantinos**

You applied this idea to credit scoring. You were able to get the same number of credit downgrades, but with better precision and fewer false positives. How does explainability help there? I know this is a big industry concern.

**Roman**

Explainability is a big problem. There are different types of machine learning algorithms. If you use things such as neural networks or deep learning, these heuristic algorithms, where you essentially train a structure and the structure learns, then you feed it with some input and it produces some output. You don’t know exactly why, but this is what it is. Now, that is the problem of explainability. There are other algorithms that are explainable also in classical machine learning. However, they may or may not be as good as the neural networks.

The problem with explainability is that we need more mathematical formalism — a more mathematical theory of what is going on with deep learning and machine learning, in particular with neural networks: how and why a neural network learns, and how information flows, collapses and acquires structure inside the network.

This is something that is mathematically very complicated. It turns out that that’s exactly the problem we have been dealing with in physics for many years when studying complex quantum systems. Why do complex quantum systems of many particles behave the way they behave? Because they have structure and correlations. How can we quantify that?

Now, it turns out that people have devised loads of mathematical techniques for this. There are things like entropy, mutual information, discord — lots of tools and lots of theorems that people have put into place. Bringing this language into machine learning is probably extremely useful because that’s exactly what we need to improve, at least in some directions, when it comes to explainability. This is exactly the mathematical language we need to explain how information acquires structure inside these machine learning algorithms, which is going to tell us why a network is learning and why it’s giving the answers it’s giving. In this sense, adding tensor networks to machine learning is almost necessary. It’s exactly the language we need to make them explainable.

**Konstantinos**

Yes, and that becomes a legal concern. With certain decisions, you have to be able to show how you got it, or else you’re in hot water.

**Roman**

Yes — that’s extremely important. We have noticed it in many projects. For instance, if you go to a bank, the solutions need to be explainable. It’s not acceptable that you, as an employee, produce a prediction and then your boss asks, “Why should I be making this decision and not another one?” and you say, “I don’t know — I have a black-box machine here that is telling me so.” That is not acceptable. You have to give reasons to your investors. Explainability is a huge problem.

**Konstantinos**

That’s exciting, because I know that’s what those institutions want to hear — that they can get explainability.

Before we wind down, if people want to start playing around with this stuff on their own — one good way, I noticed, is that in Azure Quantum, for example, there are ways to learn this. There’s QIO, quantum-inspired optimisation. What types of simple projects do you think could be done using even something as simple as QIO in Microsoft, let’s say?

**Roman**

There are lots of places to start learning about this. Tensor networks and quantum-inspired methods have a history. If you want to learn about Microsoft’s quantum-inspired optimisation, they have tutorials. If you want to learn about digital annealing from Fujitsu, they also have tutorials. If you want to learn about tensor networks, there is lots of literature — there are even books written about it, and lots of reviews.

As a good place to start, there are different websites one can play with. There is, for instance, the website of [Unintelligible], which has lots of very useful resources, and the website of [Unintelligible], a researcher in Japan, who also has lots of useful resources. Then, I would start playing with some of the basic optimisation algorithms. For instance, there is a very famous optimisation algorithm called DMRG. It’s very basic and very well-known in physics.

To people coming from machine learning, it may look unfamiliar, but it turns out this algorithm was proposed in 1992 — it’s more than 30 years old — and it’s still one of the best things we can do nowadays to optimise cost functions. I would start by playing with these. With some basic knowledge of computer programming and a basic understanding of the cost function, anybody should be able to program these optimisation algorithms. From there, it should be possible to move up.
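The core move in DMRG is optimising one local tensor at a time while the rest stay fixed, then sweeping back and forth until convergence. A toy stand-in for that alternating-update idea — fitting a rank-1 model x ⊗ y to a matrix by alternating least squares, rather than real DMRG over a matrix product state — looks like this:

```python
import numpy as np

rng = np.random.default_rng(2)

# Exactly rank-1 target, so a perfect fit exists.
T = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])

# Random start for the two "local tensors" of our tiny network.
x = rng.standard_normal(3)
y = rng.standard_normal(2)
for sweep in range(50):
    # Each update is the closed-form least-squares optimum for one factor
    # with the other held fixed -- the DMRG-style local optimisation step.
    x = T @ y / (y @ y)
    y = T.T @ x / (x @ x)

residual = np.linalg.norm(T - np.outer(x, y))
```

Real DMRG does the same alternating local optimisation over a chain of many tensors (with SVDs to keep bond dimensions in check), but the sweep-and-locally-solve structure is already visible in this two-factor version.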

**Konstantinos**

That’s helpful. I always like to give people a little nudge in the right direction there.

Thanks for coming on to explain this topic. It’s going to have a big impact over the next couple of years as we’re waiting for quantum computers to catch up. Hopefully, we’ll be having you guys back on with some future success.

**Roman**

Yes, I hope so.

**Konstantinos**

Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.

Quantum-inspired computing describes what are technically classical techniques that run on classical hardware, but they’re inspired by how quantum computers process information. There are different classes of algorithms and approaches that fall under this general category. Digital annealing is one, with machines by Fujitsu leading the pack. Quantum-inspired optimisers, or QIO, are present in Microsoft’s Azure Quantum. Tensor networks also fall under the category and have been around for a while; they excel at constrained combinatorial optimisation problems. Learning these techniques now will prepare coders to be better at programming real quantum computers in the future. This gives the benefit of good performance at real business problems now while preparing the workforce to handle the machines of tomorrow.

How good is the performance from quantum-inspired algorithms and approaches today? Multiverse has excellent benchmark examples of recent successes. When optimising dynamic portfolios, Multiverse achieved a 50% profit increase and a 33% volatility reduction. At option pricing, they achieved three times the speed and three times the efficiency. To give another financial use case, quantum-inspired achieved advantageous results in credit scoring. It caught the same number of credit downgrades, or entities whose solvency is at risk, but with better precision and fewer false positives than a standard classical approach.

It did this with explainability, which is a challenge for some classical methods and a requirement in some industries. If you call someone a bad credit risk, for example, you have to say why.

Roman believes that quantum-inspired techniques can even be used to improve popular AI technologies such as ChatGPT and diffusion-based image generators like Midjourney or DALL-E. Some people already consider these tools uncomfortably intelligent, but it would be exciting to see how quantum-inspired helps in the future. I, for one, welcome our quantum-inspired overlords.

That does it for this episode. Thanks to Roman Orus for joining us to discuss quantum-inspired approaches, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.