Finding a specific use case that proves quantum advantage will radically kickstart the quantum computing industry. While universal gate-based quantum computers with error correction are a few years away, annealers, like the quantum systems from D-Wave, are proving excellent at specific types of problems today, especially in a hybrid approach aided by classical computers. Just how excellent are these systems? We might finally have an example of portfolio optimization that provides a thousandfold boost in speed. Is 2021 the year for advantage?
Guest Speaker - Sam Mugel, CTO, Multiverse Computing
The Post-Quantum World on Apple Podcasts.
Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organizations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.
We’ve talked about quantum advantage in past episodes — as an idea. Today, we’re going to dive a little deeper into a recent experiment that provided a thousandfold boost in speed. This isn’t a theoretical problem; it’s real portfolio optimization with a real edge provided by D-Wave’s hybrid classical-quantum system. Is 2021 the year for advantage? Find out in this episode of The Post-Quantum World.
I’m your host, Konstantinos Karagiannis. I lead quantum computing services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.
Our guest today is CTO of a company called Multiverse, which is a great name if you’re into the many-worlds interpretation of quantum mechanics, like I am. He’s working on something pretty cool — basically, bringing quantum computing use cases to finance, and that’s near and dear to my heart, so I’d like to welcome Sam Mugel.
We are a startup — a fintech startup — and we’re using quantum computing for Fortune 500 companies that want to gain an edge through deep technology. We’re using an awful lot of techniques from quantum computing, from deep physics, and applying these to real industry-relevant problems. I’d be thrilled to tell you a little bit more about that.
Yes, that’s correct. Four years ago, Multiverse started out as a WhatsApp group. We weren’t called Multiverse at the time; we were called Quantum for Quants, and we had all these contacts over in Europe. They were asking us, “What is quantum computing?” So, we set out with this company to answer exactly this question and publish a few scientific articles. In one of these, we dove deep, looked at every single quantum computing algorithm and tried to say how it could be relevant for finance and where it could bring an advantage. By the time we’d done this homework, we’d identified the real low-hanging fruit that could be developed today with the current hardware. Basically, we were just like, why is nobody doing this? So, we incorporated as a company — that would have been almost exactly two years ago. We were approached by the Creative Destruction Lab, and so now, we’ve got a foot in Toronto as well, and we’ve been developing killer quantum applications ever since.
So, you guys are working in optimization and quantum machine learning?
That’s right. Quantum optimization was really the low-hanging fruit — the first finance use case we identified. I’ll be telling you a little bit more about that afterward. Quantum machine learning is a much more ambitious project we’re starting to sink our teeth into at the moment, and there is some interesting stuff that you can do over there in terms of fraud detection. There are lots of open problems in finance, in terms of things like credit scoring. There are lots of billion- or trillion-dollar verticals over there. Another one would be life insurance and estimating how to price these things. There’s a lot of really interesting stuff that can be done.
I hope you’re right.
I wanted to have you on like “I knew him when.” Did you want to talk a little bit about why computing can be accelerated by quantum, and why quantum computing is relevant to finance in particular?
The way that I like to explain this is that in regular computing, you have your bits that can be zero or one, and then you apply all these binary operations to them, and this allows you to perform computations, essentially. In quantum computing, your smallest unit of information, your quantum bit, can be zero and one at the same time. This is analogous to how Schrödinger’s cat, for instance, can be dead and alive simultaneously. This is really part of the strangeness of the quantum properties of our world.
The interesting thing for computing is that when you apply an operation to your quantum bit, you’re simultaneously applying it to both states. So, if I apply an operation — a NOT operation to my quantum bit, for instance — I’ll be performing the NOT on both states simultaneously. So, you can already see that quantum computing naturally gives you access to parallel computing, whereas in classical computing, you have to have dedicated hardware to do this. The other interesting thing is that the number of available states doubles every time I add a quantum bit. So, by the time I have 330 quantum bits, I can perform more operations simultaneously than there are atoms in the visible universe. So, this immediately tells you that quantum computing gives you access to computational resources that would be completely impossible classically.
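Sam’s doubling claim is easy to check with plain arithmetic (a quick sketch, not quantum code): each added qubit doubles the number of basis states, and 2^330 already dwarfs the roughly 10^80 atoms commonly estimated to exist in the visible universe.

```python
# Each added qubit doubles the number of basis states a quantum register
# can represent, so the state space grows as 2^n.
n_qubits = 330
n_states = 2 ** n_qubits

# Common order-of-magnitude estimate for atoms in the visible universe.
ATOMS_IN_VISIBLE_UNIVERSE = 10 ** 80

print(n_states > ATOMS_IN_VISIBLE_UNIVERSE)  # True: 2^330 is about 2.2 x 10^99
```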
There you go.
Absolutely. There’s a real problem in finance and data science in general: We’re a lot better at gathering and storing information than we are at processing it. We’re gathering this exponential amount of information, but the amount of computational resources we have at our disposal is starting to stagnate. So, that’s a real problem for huge banks, for instance — they turn around, and say, “Hey, we’ve got all this data now. How can we get value out of it?” There’s a real hope that quantum computing will help these big institutions address that problem.
Yes, and in the short term now — in this near term — it seems to be that we have a real shot at doing it in a significantly better way using annealing, right?
Yes. Quantum annealing is a fantastic analog computing tool for optimization problems. It’s one of the first examples of quantum computing that we’ve had. Essentially, you wouldn’t think of it as a general-purpose computer. It’s really a computer that natively solves optimization problems, which are everywhere in finance.
Yes, that’s an important point. You guys applied the D-Wave annealer to something that is really what led to this episode. I definitely wanted to get to that. You did a BBVA case study, or a use case?
That’s right. We were working with their capital markets team on portfolio optimization applications. BBVA, which we were working with, is the world’s 47th-biggest bank — quite a major entity. We were really thrilled for this opportunity to work with them, and we decided on portfolio optimization because it’s a very, very difficult problem for classical computing — it grows exponentially with the number of assets or the number of time states that you might consider. Also, it’s a nice problem because it’s applicable in many areas in finance: You can apply it to back testing, you can apply it to index tracking, you can apply it to stress-testing. There are lots of really exciting, high-value applications there.
The problem that we solved with BBVA was, given forecasts about market data, what would be the optimal portfolio, taking into account transaction costs? Essentially, we’d have to determine the optimal investment trajectory, taking into account transaction cost and taking into account risk of the portfolio. This was a staggeringly difficult computing problem for a classical computer.
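To make the shape of the problem concrete, here is a deliberately tiny illustrative sketch in Python (toy numbers and a brute-force search, not Multiverse's actual formulation): choose which assets to hold at each rebalancing date, rewarding forecast returns while penalizing risk and charging a cost for every change of position.

```python
from itertools import product

# Toy dynamic portfolio optimization (illustrative only, not the BBVA setup):
# choose which of 3 assets to hold at each of 2 rebalancing dates.
forecast_returns = [
    [0.04, 0.02, 0.03],  # expected return of each asset at t=0
    [0.01, 0.05, 0.02],  # expected return of each asset at t=1
]
risk = [0.02, 0.03, 0.01]   # crude per-asset risk proxy
RISK_AVERSION = 0.5
TRANSACTION_COST = 0.005    # cost to open or close a position

def objective(trajectory):
    score = 0.0
    prev = (0, 0, 0)  # start fully in cash
    for t, holdings in enumerate(trajectory):
        for i, h in enumerate(holdings):
            score += h * (forecast_returns[t][i] - RISK_AVERSION * risk[i])
            score -= TRANSACTION_COST * abs(h - prev[i])  # pay to rebalance
        prev = holdings
    return score

# Exhaustive search over every trajectory of binary holdings.
best = max(product(product([0, 1], repeat=3), repeat=2), key=objective)
print(best, round(objective(best), 4))
```

Even this toy instance has (2^3)^2 = 64 trajectories; the count grows exponentially with assets and rebalancing dates, which is exactly the wall Sam describes.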
Yes, and what blew my mind was, normally, when you’re doing a POC, you start small — you get a small set of data to prove that you can do it — and then you try to extrapolate. You guys decided, “Nah, we’re going to get the biggest dataset on planet Earth and go that way instead.” You guys used the XXL datasets. Did you want to talk a little bit about the implications of that dataset size?
BBVA really dropped us in the deep end. The biggest dataset we considered, though, the XXL dataset, was four years of data with monthly rebalancing — an awful lot of potential trajectories — and I believe we were initially given 54 assets. We used some really sophisticated clustering methods to determine the eight most representative assets. The final portfolio we considered was eight assets. We had some constraints on how much we would be able to invest in each asset, and then over four years of data. I believe this came down to 10^400 possible solutions — way more than the atoms in the visible universe.
Yes, way, way, way more. It’s 10^90, or something like that.
So, that started to show the limitations of classical, because you guys were comparing the same approach to using TensorFlow?
We had several methods. One of them was a standard optimization library, Gekko. We did not consider TensorFlow, but we did look at Tensor Networks methods, which are some in-house algorithms that we developed. We also ran, on the side, hybrid quantum computations on the IBM chip.
That’s right, yes — I remember.
And here, D-Wave had a little bit of a leg up because it’s the problem that’s natively solved by that particular QPU. What we found was that, rapidly, we got into problem sizes that even the Gekko standard libraries, some exhaustive solvers that we coded ourselves and also the IBM chip could not even begin to consider. The only two solvers that were actually able to solve this XXL dataset were our Tensor Networks applications and the D-Wave solver. The interesting thing is that while our Tensor Networks solver tends to converge to a slightly better solution, it would generally take a full day of having the computer running to get results, where it took the D-Wave quantum computer 170 seconds.
Yes, that was the moment for me. I was like, “Whoa, wait a minute. So, we’re taking a practical-sized dataset, and we’re doing it in 170 seconds instead of a day,” and the accuracy wasn’t really that far off.
It wasn’t that far. We were looking at a metric, the Sharpe ratio, which is a ratio of returns to risk, and a Sharpe ratio of eight is generally considered an unheard-of, essentially risk-free investment. I think we were getting Sharpe ratios of 12 with D-Wave, and up to 14 or 16 with Tensor Networks — staggering performance.
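For readers who don’t live in finance: the Sharpe ratio divides a portfolio’s mean excess return (over the risk-free rate) by the volatility of those returns. A minimal sketch with made-up numbers (nothing here comes from the BBVA study):

```python
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    # Sharpe ratio: mean excess return divided by its standard deviation.
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Hypothetical period returns, for illustration only (not the BBVA data).
returns = [0.021, 0.018, 0.025, 0.019, 0.023, 0.020]
print(round(sharpe_ratio(returns), 2))  # about 8: steady gains, tiny volatility
```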
I’ll say a couple of words on them. These are classical algorithms, but they’re borrowing ideas from quantum information theory. The idea here is that we borrow techniques that tell you where the information content is in the problem. This can really boost the performance of classical computing algorithms. There’s lots of interesting work in applying this to machine learning — we’re doing part of that work. Another big player in this domain is Google.
In this particular use case, we were using it to try to figure out which parts of the solution space we should be focusing on. The analogy here is, imagine you want to find the lowest point in a chain of mountains, and then you’ve got a smart method in this case, Tensor Networks, to tell you where all the valleys are, and then you don’t have to bother about exploring all the peaks. You can just bother exploring all the valleys.
Interesting. So, is it strategic to still develop Tensor Network algorithms?
Oh, absolutely. For us, our clients don’t really care about how we get to a solution. What they want to know is that we’ve given them the best solution the technology can provide. When you’re in early-stage quantum computing like this, it’s nice to have a method for pushing classical computing to its absolute limit. Additionally, it also tells you what classical computing can do well. So, then we can isolate the real bottleneck, ship that off to the quantum computer, and solve everything we can classically using Tensor Networks.
It’s that hybrid approach — doing it in pieces — that’s going to give us our first real benefit to using quantum in any way, shape, or form. That’s how we’re going to get our short-term gain. Would you agree?
You’re absolutely correct. Using classical computing for what it’s good at, and then solving the really hard stuff on quantum computers, absolutely is the key to advantage. Another key to it would be to use analog platforms. I see a lot of promise in these in the short term. So, things like D-Wave, Xanadu, Pasqal — these are all quantum computing startups that are developing analog quantum computers, and the logic here is, these machines solve the problem natively, whereas machines that aim to be universal are approximating gates and qubits, and all this adds errors and reduces coherence.
In the short term, it’ll be the time for those machines to shine. This will be the glory years for two years or so. Then everyone is going to be distracted when we’re starting to see advantage in universal gate-based machines.
I totally agree. Short-term, we’re going to see some impressive performance from analog machines and quantum annealers. Long-term, when we have a universal, error-corrected, gate-model computer, it’s going to be able to do everything that the analog machines do, and probably, people will leave the analog machines aside, but I don’t think this will be before well into the 2030s. There are probably some exciting results to be had already now from analog machines.
Yes, it’s time to use our D-Wave monthly credits — time to cash them in.
I couldn’t agree more.
I want to talk about another study you guys did, but would you say that the BBVA one has a good extrapolation somewhere in it of when we will have advantage? Do you see something you can tweak to get us even closer, like better accuracy, even faster — something that could really throw a stake in the ground?
I agree. It’s the practicality. Who cares if we can make up something, run it, and say, “Look, it’s better,” like what they did in China with that photonic system. It was a completely useless problem, but they proved that you can do it.
Exactly. This is the typical thing of, oh, yes, a quantum computer is just so much better simulating being a quantum computer.
But this way, we have something usable, and the other distinction is, once we achieve something that seems like advantage, the quantum computers are going to get just a little better, and as we’ve already discussed, a little better means a lot better. Add a few qubits, and then, whoa, it’s a lot better at doing it. Once it takes off, it just keeps taking off.
So, it won’t be neck-and-neck.
Classical computing got to where it is because there are many trillions that have been invested in the silicon industry. Once quantum computing can show that it can catch up to that, we’re going to start seeing similar investments in the quantum computing industry as well.
Once it goes a little way, it’s going to go fast.
It’s just a simple comparison of lines on a graph. Moore’s Law looks one way. Quantum is going to look very different. It’s not going to be neck-and-neck ever. In a few years, the types of things you did already with BBVA and Bankia are going to be just completely different. It’s going to shoot straight up in a line, like a skyscraper.
I think so, yes, because you’re doubling your resources every time you add a qubit. If you want to keep up with Moore’s Law in quantum computing, all you have to do is add one single quantum bit every two years.
Yes, it sounds like that wouldn’t be hard.
So, did you want to talk at all about what you guys did at Bankia, and how it was different?
Yes. Here, we were building upon the study that we did with BBVA — dynamic portfolio optimization — and the difference here is, we weren’t taking into account transaction costs. Instead, we were looking at a minimum holding period, a constraint that says, if I buy an asset now, I need to hold it for, in this case, at least seven days into the future. This is a very standard constraint for banks. This is what made our problem very, very difficult, because it means that your optimal investment today is highly dependent on your optimal investment yesterday and the day before that. The other way in which it was different is, there was considerably more data. We were looking at a portfolio with 10^1,300 possible solutions. We were in a whole other league on this.
To some degree — I don’t know of any classical method that would be able to deal with that number of variables. So, we’re in the process of taking benchmarking a step further at the moment, and we’re benchmarking our optimization algorithms against a bunch of industry standards, like commercial models that you need to pay for. I’ll have a more complete answer to this question in the future.
But I think, to take you back up on this, there are two things that made this problem either impossible or close to impossible on classical computers. One of them is the sheer size problem, and the other one is, we were accounting for risks exactly in this problem, which is incredibly difficult to do with this type of problem. People usually take all kinds of shortcuts to avoid accounting for risks.
Sure. Maybe this is getting into the technicalities of it, but we did something that’s quite fun in this case. We were solving this, again, with the D-Wave quantum computer. D-Wave’s quantum computer can handle a certain type of optimization problem: quadratic optimization problems. You can only enter functions of a certain kind into the QPU.
In this case, we used a hybrid approach that allowed us to go further than this and to optimize a problem that was more complicated than quadratic and that had higher-order terms. The highest-order terms that we had in this case were to the power of seven instead of to the power of two. The way we did this was, we managed to break up our problem into many different parts and solve these independently, then run a postselection algorithm that, due to the size of the problem, would usually have taken an awful lot of time, but we played a few tricks to exponentially reduce the complexity of the problem. It’s very difficult to explain without showing you a diagram. However, I’d refer you to the paper that we brought out in 2020 in collaboration with Bankia. It’s available on our website, and it explains in lots of detail exactly how we did this.
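Some context on the quadratic restriction Sam mentions: the annealer natively minimizes QUBO (quadratic unconstrained binary optimization) objectives of the form x^T Q x over binary vectors x, which is why higher-order terms need the kind of decomposition trick he describes. A toy brute-force illustration (the matrix is invented, not the Bankia model):

```python
from itertools import product

# A QUBO instance: minimize x^T Q x over binary vectors x.
# Toy 3-variable upper-triangular matrix, invented for illustration.
Q = [
    [-1.0,  2.0,  0.0],
    [ 0.0, -1.0,  2.0],
    [ 0.0,  0.0, -1.0],
]

def energy(x):
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# Brute force is fine at 3 variables (2^3 states); a quantum annealer
# samples low-energy states of exactly this form at much larger sizes.
best = min(product([0, 1], repeat=3), key=energy)
print(best, energy(best))  # (1, 0, 1) -2.0
```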
Yes, absolutely, and I will, of course, include any links you want in the show notes.
No problem there. So, you guys went through a process of selecting the right types of problems to work with these customers to show real potential gains here. To our listeners — some of them from companies, maybe financial services — how would they go about picking a good use case?
This is a great question. A good use case for quantum computing really needs to meet four conditions: The first one is, you want this to be a very high-value problem. Quantum computing right now is very expensive, and there’s no sense in developing a quantum computing solution if the returns can’t justify that expense. So, you want this to promise higher returns. The second is, you want your input to be quite small, and the reason for this is that we’re still very limited in the number of quantum bits available.
For reference, IBM’s latest quantum computer was around 50 qubits, I believe. D-Wave’s was 5,000, but compare that to your gigabytes or terabytes — the number of bits that your computer has today — so you want it to have small inputs. The third is, you want there to be many intermediate states, because this is where quantum computing shines: It can explore all of these states at once. And ideally, you want your best classical alternative to be really bad. The best-case scenario is, the best solution you can get classically is by brute-forcing the problem, and then you’re really in business. You can find, potentially, a quantum algorithm that can do this more efficiently.
Yes, and that was one of the first appeals of Grover’s algorithm: Instead of n divided by two, we’re talking about the square root of n, so that’s a huge difference in search.
This is a really good example: If you’re looking for an item in an unsorted list, classically, there’s no way around it. You have to look at every item in the list until you find the item you’re searching for. So, this was the first big win for quantum computing, being able to show that in the quantum world, there was a way to build an algorithm that didn’t have to brute-force this problem.
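The speedup being described is easy to put numbers on: classical unstructured search inspects about n/2 items on average, while Grover's algorithm needs on the order of sqrt(n) oracle queries. A quick arithmetic sketch (not a Grover implementation):

```python
import math

# Expected classical lookups (n/2 on average) versus Grover-order
# oracle queries (~sqrt(n)) for unstructured search over n items.
# Arithmetic only; no quantum simulation here.
for n in (10 ** 6, 10 ** 12):
    print(n, n // 2, math.isqrt(n))
```

At a trillion items, that is roughly 5 x 10^11 classical lookups versus about a million Grover-order queries.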
That’s a very simple example. It sounds like, to any companies listening, these criteria exist in a lot of things we’re doing right now. It should be pretty easy to start identifying what you can apply these machines to. I know we touched on it numerous times, but did you have any other thoughts on this whole idea of advantage, and when, all of a sudden, the industry is going to realize that wow, this is it — there’s no going back?
Google already demonstrated a flavor of advantage on their QPU back in 2019, I believe. Like you said, a Chinese group demonstrated it more recently, but I think the slightly disappointing thing — for people like me, at least, who are very conscious of the use of quantum computing in industries — was that these were not for a particularly useful problem. Google built a problem for the purpose of it being solvable fast on a quantum computer but not on a classical computer. I believe it was information scrambling. Off the top of my head, I can’t think of any industry applications.
Very soon, now that we know that these problems exist, we’re going to be able to demonstrate actual quantum advantage on useful problems, and from that point, it’s going to go very fast. Many, many groups are going to be able to show this. At Multiverse, obviously, we partner with many hardware providers, from IonQ to D-Wave, and Xanadu to Pasqal and IBM, and they’ve all got extremely exciting machines lined up. We’ll soon have many more realizations of quantum advantage on useful problems.
I’m going to throw a stake in the ground now and say I think we’ll have it by Christmas. What do you think?
Yes, OK. I’ll take that bet with you. I’ll be with you on that — quantum advantage on a useful problem by Christmas. Perfect.
I think that’s what we all want in the industry under the tree. That’s going to be what we want.
Absolutely. Thank you so much for having me. It’s been really fun.