Transcript | Will Portfolio Optimisation Prove Quantum Advantage this Year?

Finding a specific use case that proves quantum advantage will radically kickstart the quantum computing industry. While universal gate-based quantum computers with error correction are a few years away, annealers, the quantum systems from D-Wave, are proving excellent at specific types of problems today, especially in a hybrid approach aided by classical computers. Just how excellent are these systems? We might finally have an example of portfolio optimisation that provides a thousandfold boost in speed. Is 2021 the year for advantage?

Guest Speaker: Sam Mugel, CTO, Multiverse Computing

Konstantinos: We've talked about quantum advantage in past episodes — as an idea. Today, we're going to dive a little deeper into a recent experiment that provided a thousandfold boost in speed. This isn't a theoretical problem; it's real portfolio optimisation with a real edge provided by D-Wave's hybrid classical-quantum system. Is 2021 the year for advantage? Find out in this episode of The Post-Quantum World. I'm your host, Konstantinos Karagiannis. I lead quantum computing services at Protiviti, where we're helping companies prepare for the benefits and threats of this exploding field. I hope you'll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is CTO of a company called Multiverse, which is a great name if you're into the many-worlds interpretation of quantum mechanics, like I am. He's working on something pretty cool — basically, bringing quantum computing use cases to finance, and that's near and dear to my heart, so I'd like to welcome Sam Mugel.

Samuel: Thanks, Konstantinos. It's a real pleasure to be here.

Konstantinos: So glad you could make it. Why don't you tell us a little bit about your company and the disruptive work you're doing in finance?
Samuel: We are a startup — a fintech startup — and we're using quantum computing for Fortune 500 companies that want to gain an edge through deep technology. We're using an awful lot of techniques from quantum computing, from deep physics, and applying these to real industry-relevant problems. I'd be thrilled to tell you a little bit more about that.

Konstantinos: That sounds terrific. You guys started as a not-for-profit, right?

Samuel: Yes, that's correct. Four years ago, Multiverse started out as a WhatsApp group. We weren't called Multiverse at the time; we were called Quantum for Quants, and we had all these contacts over in Europe. They were asking us, "What is quantum computing?" So, we set out with this company to answer exactly this question and publish a few scientific articles. In one of these, we dove deep and looked at every single quantum computing algorithm and tried to say how it could be relevant for finance and where it could bring an advantage, and by the time we'd done this homework, we'd identified the real low-hanging fruit that could be developed today with the current hardware. Basically, we were just like, why is nobody doing this? So, we incorporated as a company — that would have been almost exactly two years ago. We were approached by the Creative Destruction Lab, and so now, we've got a foot in Toronto as well, and we've been developing killer quantum applications ever since.

Konstantinos: So, you guys are working in optimisation and quantum machine learning?

Samuel: That's right. Quantum optimisation was really the low-hanging fruit — the first use case in finance that we identified. I'll be telling you a little bit more about that afterward. Quantum machine learning is a much more ambitious project we're starting to sink our teeth into at the moment, and there is some interesting stuff that you can do there in terms of fraud detection. There are lots of open problems in finance in terms of things like credit scoring.
Samuel: There are lots of billion- or trillion-dollar verticals over there. Another one would be life insurance and estimating how to price these things. There's a lot of really interesting stuff that can be done.

Konstantinos: Yes, it sounds like we're working on a lot of the same things — fraud detection, we're actively doing that right now. It's going to have a huge impact, of course, and I think we're going to see examples of that in some of the stuff you've already done. I wanted to have you on because I found your work really impressive, and — not to get too far ahead of ourselves — I think of you as one of the frontrunners among those who might prove quantum advantage. But just saying, I don't know.

Samuel: I hope you're right.

Konstantinos: I wanted to have you on like "I knew him when." Did you want to talk a little bit about why computing can be accelerated by quantum, and why quantum computing is relevant to finance in particular?

Samuel: The way that I like to explain this is that in regular computing, you have your bits that can be zero or one, and then you apply all these binary operations to them, and this allows you to perform computations, essentially. In quantum computing, your smallest unit of information, your quantum bit, can be zero and one at the same time. This is analogous to how Schrodinger's cat, for instance, can be dead and alive simultaneously. This is really part of the quantum weirdness — the quantum properties of our world. The interesting thing for computing is that when you apply an operation to your quantum bit, you're simultaneously applying it to both states. So, if I apply a NOT operation to my quantum bit, for instance, I'll be performing the NOT on both states simultaneously. So, you can already see that quantum computing gives you, naturally, access to parallel computing, whereas in classical computing, you have to have dedicated hardware to do this.
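Sam's NOT-gate example can be sketched as a tiny state-vector simulation (an editorial illustration, not from the episode): a one-qubit state is a pair of amplitudes (a, b) meaning a|0> + b|1>, and the NOT (X) gate swaps both amplitudes in one operation.

```python
# A one-qubit state as a pair of amplitudes (a, b), i.e. a|0> + b|1>.
# Applying NOT (the X gate) swaps the amplitudes, so a single operation
# acts on both basis states of the superposition at once.
def apply_not(state):
    a, b = state
    return (b, a)

# A valid superposition: |a|^2 + |b|^2 = 0.36 + 0.64 = 1
superposition = (0.6, 0.8)
print(apply_not(superposition))  # (0.8, 0.6): both amplitudes moved together
```

Applying it twice returns the original state, just as two classical NOTs cancel out.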
Samuel: The other interesting thing is that the number of states available doubles every time I add a quantum bit. So, by the time I have 330 quantum bits, I can perform more operations simultaneously than there are atoms in the visible universe. So, this immediately tells you that quantum computing gives you access to computational resources that would be completely impossible classically.

Konstantinos: Yes, I try to get that across — like, we're almost to the point where you can't simulate these processors, or are we already there now? Maybe next year, we'd have to turn the entire planet Earth into a classical computer to simulate them. So, I try to get that across.

Samuel: There you go.

Konstantinos: Yes, it always baffles people. I love that chart that you can visualise for how quantum computing is increasing so rapidly. It is time to take advantage of it. For finance, we do end up working with big data sets. That's where some of the biggest advantage might come in.

Samuel: Absolutely. There's a real problem in finance and data science in general: We're a lot better at gathering and storing information than we are at processing it. We're gathering this exponential amount of information, but the amount of computational resources we have at our disposal is starting to stagnate. So, that's a real problem for huge banks, for instance — they turn around and say, "Hey, we've got all this data now. How can we get value out of it?" There's a real hope that quantum computing will help these big institutions address that problem.

Konstantinos: Yes, and in the short term now — in this near term — it seems that we have a real shot at doing it in a significantly better way using annealing, right?

Samuel: Yes. Quantum annealing is a fantastic analog computing tool for optimisation problems. It's one of the first examples of quantum computing that we've had. Essentially, you wouldn't think of it as a general-purpose computer.
Samuel: It's really a computer that natively solves optimisation problems, which are everywhere in finance.

Konstantinos: Yes, that's an important point. You guys applied the D-Wave annealer to something that is really what led to this episode. I definitely wanted to get to that. You did a BBVA case study, or a use case?

Samuel: That's right. We were working with their capital markets team on portfolio optimisation applications. BBVA, which we were working with, is the world's 47th-biggest bank — quite a major entity. We were really thrilled for this opportunity to work with them, and we decided on portfolio optimisation because it's a very, very difficult problem for classical computing — it grows exponentially with the number of assets or the number of time steps that you might consider. Also, it's a nice problem because it's applicable in many areas in finance: You can apply it to back testing, you can apply it to index tracking, you can apply it to stress testing. There are lots of really exciting, high-value applications there. The problem that we solved with BBVA was, given forecasts about market data, what would be the optimal portfolio, taking into account transaction costs? Essentially, we'd have to determine the optimal investment trajectory, taking into account transaction costs and taking into account the risk of the portfolio. This was a staggeringly difficult computing problem for a classical computer.

Konstantinos: Yes, and what blew my mind was, normally, when you're doing a POC, you start small — you get a small set of data to prove that you can do it — and then you try to extrapolate. You guys decided, "Nah, we're going to get the biggest dataset on planet Earth and go that way instead." You used the XXL dataset. Did you want to talk a little bit about the implications of that dataset size?

Samuel: BBVA really dropped us in the deep end.
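To make "staggeringly difficult" concrete, here is a back-of-the-envelope count with made-up numbers (the discretisation levels here are an editorial assumption, not BBVA's actual setup): if each of n assets can take one of K allocation levels, and the portfolio is rebalanced T times, the number of candidate trajectories is (K^n)^T.

```python
# Candidate investment trajectories when each of `n_assets` takes one of
# `levels` discrete allocation weights at each of `steps` rebalancings.
def trajectory_count(n_assets, levels, steps):
    return (levels ** n_assets) ** steps

# 8 assets, 4 allocation levels each, 48 monthly rebalancings (4 years):
n = trajectory_count(8, 4, 48)
print(len(str(n)) - 1)  # order of magnitude: 231, i.e. ~10^231 trajectories
```

Even this toy setup is hopeless for brute force; finer discretisations and extra constraints push the count toward the 10^400 figure quoted in the episode.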
Samuel: The biggest dataset we considered, the XXL dataset, was four years of data with monthly rebalancing — an awful lot of potential trajectories — and I believe we were initially given 54 assets. We used some really sophisticated clustering methods to determine the eight most representative assets, so the final portfolio we considered was eight assets. We had some constraints on how much we would be able to invest in each asset, and then four years of data. I believe this came down to 10^400 possible solutions — way more than the number of atoms in the visible universe.

Konstantinos: Yes, way, way, way more. That's 10^90 atoms, or something like that.

Samuel: Yes.

Konstantinos: So, that started to show the limitations of classical, because you guys were comparing the same approach to using TensorFlow?

Samuel: We had several methods. One of them was a standard optimisation library called Gekko. We did not consider TensorFlow, but we did look at Tensor Network methods — some in-house algorithms that we developed. We also ran, on the side, hybrid quantum computations on the IBM chip.

Konstantinos: That's right, yes — I remember.

Samuel: And here, D-Wave had a little bit of a leg up, because it's the problem that's natively solved by that particular QPU. What we found was that, rapidly, we got into problem sizes that even the Gekko standard libraries, some exhaustive solvers that we coded ourselves, and also the IBM chip could not even begin to consider. The only two solvers that were actually able to solve this XXL dataset were our Tensor Network applications and the D-Wave solver. The interesting thing is that while our Tensor Networks solver tends to converge to a slightly better solution, it would generally take a full day of having the computer running to get results, whereas it took the D-Wave quantum computer 170 seconds.

Konstantinos: Yes, that was the moment for me. I was like, "Whoa, wait a minute."
Konstantinos: "We're taking a practical-sized dataset, and we're doing it in 170 seconds instead of a day." And the accuracy wasn't really that far off.

Samuel: It wasn't that far off. We were looking at a metric called the Sharpe ratio, which is a ratio of returns to risk, and a Sharpe ratio of eight is generally considered a risk-free, unheard-of investment. I think we were getting Sharpe ratios of 12 with D-Wave, and up to 14 or 16 with Tensor Networks — staggering performance.

Konstantinos: Yes, I couldn't believe how close it was. I wrote it down somewhere — it's something like 12.16 for D-Wave and 15.83 for Tensor Networks. So, if you've got that and the timing, how is this not the approach to a form of supremacy, or advantage? We're edging along here. Did you want to talk a little more about Tensor Network algorithms?

Samuel: I'll say a couple of words about them. These are classical algorithms, but they're borrowing ideas from quantum information theory — ideas that tell you where the information content is in the problem. This can really boost the performance of classical computing algorithms. There's lots of interesting work in applying this to machine learning — we're doing part of that work. Another big player in this domain is Google. In this particular use case, we were using it to try to figure out which parts of the solution space we should be focusing on. The analogy here is, imagine you want to find the lowest point in a chain of mountains, and you've got a smart method — in this case, Tensor Networks — to tell you where all the valleys are, so you don't have to bother exploring all the peaks. You can just explore the valleys.

Konstantinos: Interesting. So, is it strategic to still develop Tensor Network algorithms?

Samuel: Oh, absolutely. For us, our clients don't really care about how we get to a solution. What they want to know is that we've given them the best solution the technology can provide.
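Circling back to the metric Sam mentioned: the Sharpe ratio is mean return divided by return volatility. A minimal sketch with illustrative numbers (a zero risk-free rate is assumed here; real implementations subtract the risk-free rate and annualise):

```python
from statistics import mean, stdev

# Sharpe ratio: average return divided by the standard deviation of
# returns (the risk). Risk-free rate assumed zero in this sketch.
def sharpe_ratio(returns):
    return mean(returns) / stdev(returns)

steady = [0.010, 0.012, 0.011, 0.010]   # consistent per-period returns
volatile = [0.05, -0.04, 0.06, -0.03]   # similar mean, wild swings
print(round(sharpe_ratio(steady), 2))   # high: steady returns, low risk
print(round(sharpe_ratio(volatile), 2)) # low: same-ish returns, high risk
```

The point of the metric is exactly what the numbers above show: two return streams with comparable averages can have wildly different risk-adjusted quality.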
Samuel: When you're in early-stage quantum computing like this, it's nice to have a method for pushing classical computing to its absolute limit. Additionally, it also tells you what classical computing can do well. So, then we can isolate the bottleneck, shoot that off to the quantum computer, and solve everything we can classically using Tensor Networks.

Konstantinos: It's that hybrid approach — doing it in pieces — that's going to give us our first real benefit to using quantum in any way, shape, or form. That's how we're going to get our short-term gain. Would you agree?

Samuel: You're absolutely correct. Using classical computing for what it's good at, and then solving the really hard stuff on quantum computers, absolutely is the key to advantage. Another key to it would be to use analog platforms. I see a lot of promise in these in the short term. So, things like D-Wave, Xanadu, Pasqal — these are all quantum computing startups that are developing analog quantum computers, and the logic here is, these machines solve the problem natively, whereas machines that aim to be universal are approximating gates on qubits, and all this adds errors and reduces coherence.

Konstantinos: In the short term, it'll be the time for those machines to shine. These will be the glory years for two years or so. Then everyone is going to be distracted when we're starting to see advantage in universal gate-based machines.

Samuel: I totally agree. Short-term, we're going to see some impressive performance from analog machines and quantum annealers. Long-term, when we have a universal, error-corrected, gate-model computer, it's going to be able to do everything that the analog machines do, and probably people will leave the analog machines aside, but I don't think this will be before well into the 2030s. There are probably some exciting results to be had already now from analog machines.
Konstantinos: Yes, it's time to use our D-Wave monthly credits — time to cash them in.

Samuel: I couldn't agree more.

Konstantinos: I want to talk about another study you guys did, but would you say that the BBVA one has a good extrapolation somewhere in it of when we will have advantage? Do you see something you can tweak to get us even closer — better accuracy, even faster — something that could really throw a stake in the ground?

Samuel: I'm going to say yes and no. The reason why I say no is that, for me, advantage is simultaneously a practical and a mathematical problem. The really difficult mathematical question here is, can you prove that no classical algorithm could give you this performance in a reasonable amount of time? The typical example would be factorising really large numbers. Maybe you can do this a lot faster on a quantum computer, but can you prove that you couldn't do it with the same performance on a classical computer? This is very hard to do, because all we know at the moment is that we don't have an algorithm to classically, efficiently factor large numbers, but it's very difficult to prove that this is impossible. In that sense, I'd say that our study is not sufficient to bring us closer to quantum advantage. However, it is a huge step in the right direction, and this is my yes. We've shown that quantum computing can outdo known methods for a really difficult problem, which is this portfolio-optimisation problem. The nice thing is, we've shown this on a very practical and valuable and tangible problem — and one for which the classical alternatives are not actually that good, because these optimisation problems have no nice mathematical structure. Typically, people would use heuristic algorithms for this type of thing, and these algorithms don't generalise very well. What's nice about quantum annealing is that there are mathematical proofs that it will theoretically converge to the optimal solution.
Samuel: This is really the nice result of our study: that we've been able to outdo standard classical methods on a very practical problem.

Konstantinos: I agree. It's the practicality. Who cares if we can make up something, run it, and say, "Look, it's better," like what they did in China with that photonic system. It was a completely useless problem, but they proved that you can do it.

Samuel: Exactly. This is the typical thing of, oh, yes, a quantum computer is just so much better at simulating being a quantum computer.

Konstantinos: But this way, we have something usable, and the other distinction is, once we achieve something that seems like advantage, the quantum computers are going to get just a little better, and as we've already discussed, a little better means a lot better. Add a few qubits, and then, whoa, it's a lot better at doing it. Once it takes off, it just keeps taking off.

Samuel: Absolutely.

Konstantinos: So, it won't be neck-and-neck.

Samuel: There's a real social advantage to this as well. Think of how classical computing got to where it is. Sorry, I'm taking you completely off topic.

Konstantinos: Oh, no, it's fine.

Samuel: Classical computing got to where it is because there are many trillions that have been invested in the silicon industry. Once quantum computing can show that it can catch up to that, we're going to start seeing similar investments in the quantum computing industry as well.

Konstantinos: Yes.

Samuel: Once it goes a little way, it's going to go fast.

Konstantinos: It's just a simple comparison of lines on a graph. Moore's Law looks one way. Quantum is going to look very different. It's not going to be neck-and-neck ever. In a few years, the types of things you did already with BBVA and Bankia are going to be just completely different. It's going to shoot straight up in a line, like a skyscraper.

Samuel: I think so, yes, because you're doubling your resources every time you add a qubit.
Samuel: If you want to keep up with Moore's Law in quantum computing, all you have to do is add one single quantum bit every two years.

Konstantinos: I know. That's not very difficult to pull off.

Samuel: Yes, it sounds like it wouldn't be.

Konstantinos: So, did you want to talk at all about what you guys did at Bankia, and how it was different?

Samuel: Yes. Here, we were building upon the study that we did with BBVA — dynamic portfolio optimisation — and the difference here is, we weren't taking into account transaction costs. Instead of that, we were looking at a minimum holding period, and this made our problem one that says, if I buy an asset now, I need to hold it for, at least in this case, seven days into the future. This is a very standard constraint for banks. This is what made our problem very, very difficult, because it means that your optimal investment today is highly dependent on your optimal investment yesterday and the day before that. The other way in which it was different is, there was considerably more data. We were looking at a portfolio with 10^1300 possible solutions. We were in a whole other league on this.

Konstantinos: Oh, yes, that's such a painful number. Would any of this, then, be considered impossible to solve with classical?

Samuel: To some degree — I don't know of any classical method that would be able to deal with that number of variables. So, we're in the process of taking benchmarking a step further at the moment, and we're benchmarking our optimisation algorithms against a bunch of industry standards — commercial solvers that you need to pay for. I'll have a more complete answer to this question in the future. But to pick up on your question, there are two things that made this problem either impossible or close to impossible on classical computers. One of them is the sheer size of the problem, and the other one is, we were accounting for risk exactly in this problem, which is incredibly difficult to do for this type of problem.
Samuel: People usually take all kinds of shortcuts to avoid accounting for risk.

Konstantinos: Everybody heard it here. We've got something possibly impossible to solve. We've got the speed. It's all sounding good to me. Did you want to talk at all about — so, this is a huge amount of data — anything that was involved in optimising it? Postprocessing? That kind of thing?

Samuel: Sure. Maybe this is getting into the technicalities of it, but we did something that's quite fun in this case. We were solving this, again, with the D-Wave quantum computer. D-Wave's quantum computer can handle a certain type of optimisation problem: quadratic optimisation problems. You can only enter functions of a certain kind into your QPU. In this case, we used a hybrid approach that allowed us to go further than this and to optimise a problem that was more complicated than quadratic and that had higher-order terms. The highest-order terms that we had in this case were to the power of seven instead of to the power of two. The way we did this was, we managed to break up our problem into many different parts and solve these independently, then run a postselection algorithm that, due to the size of the problem, would usually have taken an awful lot of time, but we played a few tricks to exponentially reduce the complexity of the problem. It's very difficult to explain without showing you a diagram. However, I'd refer you to the paper that we brought out in 2020 in collaboration with Bankia. It's available on our website, and it explains in lots of detail exactly how we did this.

Konstantinos: Yes, absolutely, and I will, of course, include any links you want in the show notes.

Samuel: Fantastic.

Konstantinos: No problem there. So, you guys went through a process of selecting the right types of problems to work with these customers to show real potential gains here.
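The "power of seven on a quadratic machine" step relies on order reduction. One textbook trick, shown here as an illustration of the general idea rather than the Bankia paper's actual decomposition, is Rosenberg's reduction: replace a product of two binary variables with a fresh auxiliary binary, enforced by a quadratic penalty, and repeat until everything is quadratic.

```python
from itertools import product

# Rosenberg penalty: quadratic in (x1, x2, y); it equals 0 exactly when
# y == x1 * x2 and is strictly positive otherwise. Added to a QUBO with a
# large weight, it forces y to stand in for the product x1 * x2, so a
# cubic term x1*x2*x3 becomes the quadratic term y*x3.
def rosenberg_penalty(x1, x2, y):
    return x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y

# Brute-force check over all eight binary assignments:
for x1, x2, y in product((0, 1), repeat=3):
    assert (rosenberg_penalty(x1, x2, y) == 0) == (y == x1 * x2)
print("penalty enforces y == x1 * x2")
```

Applying the substitution repeatedly brings a degree-seven term down to quadratic at the cost of extra auxiliary variables, which is one reason problem size grows when targeting a quadratic solver.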
Konstantinos: To our listeners — some of them from companies, maybe financial services — how would they go about picking a good use case?

Samuel: This is a great question. A good use case for quantum computing really needs to meet four conditions. The first one is, you want this to be a very high-value problem. Quantum computing right now is very expensive, and there's no sense in developing a solution that uses quantum computing if you can't justify its use — you want it to promise higher returns. Second, you want your input to be quite small, and the reason for this is that we're still very limited in the number of quantum bits available. For reference, IBM's latest quantum computer was around 50 qubits, I believe. D-Wave's was 5,000, but compare that to the gigabytes or terabytes — the number of bits — that your computer has today. So, you want it to have small inputs. Third, you want there to be many intermediate states, because this is where quantum computing shines: It can explore all of this data at once. And ideally, you want your best classical alternative to be really bad. The best-case scenario is, the best solution you can get is by brute-forcing the problem, and then you're really in business. You can find, potentially, a quantum algorithm that can do this more efficiently.

Konstantinos: Yes, and that was one of the first appeals of Grover's algorithm: Instead of n divided by two, we're talking about the square root of n, so that's a huge difference in search.

Samuel: This is a really good example: If you're looking for an item in a list, classically there's no way around it — you have to look at every item in the list until you find the item you're searching for. So, this was the first big win for quantum computing, being able to show that in the quantum world, there was a way to build an algorithm that didn't have to brute-force this problem.

Konstantinos: That's a very simple example.
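The n/2-versus-sqrt(n) comparison is easy to put numbers on: a classical scan of n unsorted items takes about n/2 lookups on average, while Grover's algorithm needs on the order of (pi/4)*sqrt(n) oracle queries.

```python
from math import pi, sqrt

def classical_queries(n):
    # Expected lookups for linear search over n unsorted items.
    return n / 2

def grover_queries(n):
    # Grover's algorithm uses roughly (pi/4) * sqrt(n) oracle calls.
    return (pi / 4) * sqrt(n)

n = 1_000_000
print(int(classical_queries(n)))  # 500000
print(int(grover_queries(n)))     # 785: a ~600x reduction at this size
```

The gap widens with n, which is why the quadratic speedup mattered even before hardware could run it at scale.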
Konstantinos: It sounds like, to any companies listening, these criteria exist in a lot of things we're doing right now. It should be pretty easy to start identifying what you can apply these machines to. I know we touched on it numerous times, but did you have any other thoughts on this whole idea of advantage, and when, all of a sudden, the industry is going to realise that, wow, this is it — there's no going back?

Samuel: Google already demonstrated a flavor of advantage on their QPU back in 2019, I believe. Like you said, a Chinese group demonstrated something similar more recently, but I think the slightly disappointing thing — for people like me, at least, who are very conscious of the use of quantum computing in industries — was that these were not for a particularly useful problem. Google built a problem for the purpose of it being able to be solved fast on a quantum computer but not on a classical computer. I believe it was information scrambling. Off the top of my head, I can't think of any industry applications. Very soon, now that we know that these problems exist, we're going to be able to demonstrate actual quantum advantage on useful problems, and from that, it's going to go very fast. Many, many groups are going to be able to show it — these groups and startups, and obviously, at Multiverse, we partner with many hardware providers, from IonQ to D-Wave, Xanadu, Pasqal and IBM — and they've all got extremely exciting machines lined up. We'll soon have many more realisations of quantum advantage on useful problems.

Konstantinos: I'm going to throw a stake in the ground now and say I think we'll have it by Christmas. What do you think?

Samuel: Yes, OK. I'll take that bet with you. I'll be with you on that — quantum advantage on a useful problem by Christmas. Perfect.

Konstantinos: I think that's what we all want in the industry under the tree. That's going to be what we want.

Samuel: Yes, and I think that would definitely make 2021 a really memorable year, right?
Konstantinos: To turn this around a little bit: Yes, I think that's what's going to happen. As we're all sweating — because this is going to air in the summer — we'll all be thinking about a cozy Christmas with quantum advantage under the tree. Thank you so much for coming on. I really enjoyed this, and I hope we can cross paths numerous times and work on something amazing one day.

Samuel: Absolutely. Thank you so much for having me. It's been really fun.

Konstantinos: That does it for this episode. Thanks to Sam Mugel for joining today to discuss Multiverse Computing and their financial use case, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti's The Post-Quantum World and leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker — that's "Konstant with a K, Hacker." You'll find links there to what we're doing in quantum computing services at Protiviti. You can also find information on our quantum services at www.protiviti.com, or follow ProtivitiTech on Twitter and LinkedIn. Until next time, be kind and stay quantum curious.