Podcast | The Quantum Many-Body Problem — with Benedikt Fauseweh of TU Dortmund University

When Richard Feynman proposed the idea of a quantum simulator or computer in 1981, he was frustrated by the limitations of classical systems. He logically suggested that if we live in a quantum world, we need a quantum device to simulate all the interactions of particles that make up reality. An excellent example of such a transistor-choking calculation is the quantum many-body problem. Have quantum computers finally cracked it and reached the level of Feynman’s original idea? Can working on such approaches lead to better quantum computers and help solve real business use cases soon? Join Host Konstantinos Karagiannis for a chat with Benedikt Fauseweh from TU Dortmund University and the German Aerospace Center (DLR) as they explore these topics and whether this work has anything to do with The Three-Body Problem novel and Netflix show.

Guest: Benedikt Fauseweh from TU Dortmund University and the German Aerospace Center

The Post-Quantum World on Apple Podcasts

Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organisations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.


Benedikt Fauseweh: If we can improve devices by finding better technologies based on quantum simulation, then I can do better quantum computing, which, on the other hand, enhances the quantum-simulation part I want to do on these devices.

Konstantinos Karagiannis: When Richard Feynman proposed the idea of a quantum simulator or computer in 1981, he was frustrated by the limitations of classical systems. He logically suggested that if we live in a quantum world, we need a quantum device to simulate all the interactions of particles that make up reality. An excellent example of such a transistor-choking calculation is the quantum many-body problem. Have quantum computers finally cracked it and reached the level of Feynman’s original idea? And does this have anything to do with the Three-Body Problem novel and Netflix show? Find out in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.

Our guest today is a professor at TU Dortmund University and a staff scientist at the German Aerospace Center, DLR, Benedikt Fauseweh. Welcome to the show.

Benedikt Fauseweh: Thank you so much for having me.

Konstantinos Karagiannis: To quickly orient our U.S. listeners, your role is at the German equivalent of NASA.

Benedikt Fauseweh: Yes, and we have collaborations with NASA. For example, at DLR, specifically at my institute, which is the Institute for Software Technology, we have a software collaboration with NASA Ames, which also works a lot with quantum computers and has, for example, a D-Wave annealer. We work a lot with them as well.

Konstantinos Karagiannis: Tell us how you got there. Your career is interesting: You started in the same place where you’ve now ended up as a professor.

Benedikt Fauseweh: In high school, I was fascinated with physics. I read books by Stephen Hawking, and I said, “I want to learn more about that,” so I studied physics in Dortmund, and I completed my PhD back in 2016, mainly working on quantum magnetic systems. They are like a lot of qubits, but we don’t have such fine-grained control over them as we would like to have. Typically, they are in materials, and we want to understand what happens in these materials with these magnetic excitations.

Then, after finishing my PhD, I went to the Max Planck Institute in Stuttgart and did a lot on nonequilibrium dynamics, maybe something we will touch on today, in superconductors. I was also, for four months, at the University of Tokyo, and learned about a lot of numerical techniques. After that, I joined the Los Alamos National Laboratory in the US, and I was mainly there working on materials systems.

But at the time, we also had access to the IBM quantum computers, and we discussed in the team, “Don’t we want to do something with that? We are quantum physicists. We want to use these devices to learn something about quantum systems.” That got me into this whole field of quantum computing and quantum simulation. We did a few studies on the IBM machine, and then I went on to the German Aerospace Center because they were looking for people versed in quantum computing and quantum simulation. There, around 2022, I started to build a group on quantum computing applications.

Just last year, in November, I started, in parallel, my position as assistant professor at TU Dortmund University, going back to my alma mater, because they are very interested in these quantum-simulation topics. You also have that in the US, that more and more universities are starting to look into this whole field of quantum computing and so on. It’s kind of the same thing we have here in Germany.

Konstantinos Karagiannis: At DLR, they think of the very big. There’s nothing bigger than space: It just goes on and on and on. That’s as classical as it gets. But then, obviously, with the quantum aspect, your work deals with the very small, so we’re going to run the gamut here in the stuff we talk about.

One of the reasons I wanted to have you on was, you wrote a paper that got published in Nature. There’ll be a link in the show notes, and it has a good title that pretty much says it all: “Quantum Many-Body Simulations on Digital Quantum Computers: State-of-the-Art and Future Challenges.” For our listeners, can you explain the quantum many-body problem?

Benedikt Fauseweh: In quantum physics, the idea is that if we look at matter — if we look at particles and so on — we realised very quickly in the last century that these particles don’t behave as classical particles. Rather, they behave as quantum particles. They have properties that are at the same time wave properties and particle properties. This is the particle/wave duality.

Describing these systems of particles, for example, on a classical level, is rather easy. If you add one particle, the complexity of your system scales linearly with the number of particles. If I add one more particle, I have to do one equation more, and that’s it. On the other hand, if I want to do the same kind of calculations with many particles and they are quantum in nature, it turns out that if I just add one more particle to my problem, what typically happens is that my problem gets twice as hard as it was before. It’s not just one more. It’s actually a factor of two or three or something like that. And this is what we call in math exponential scaling: If I add a little bit more, the problem gets exponentially harder.
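The scaling Benedikt describes is easy to make concrete. Here is a minimal Python sketch (illustrative, not from the episode): classical bookkeeping grows linearly with the number of particles, while a general state of N two-level quantum particles needs 2^N complex amplitudes, so every added particle doubles the storage.

```python
# Classical: one record per particle -- linear growth.
def classical_storage(n_particles):
    return n_particles

# Quantum: a general N-qubit state needs 2**N complex amplitudes.
def quantum_storage(n_qubits):
    return 2 ** n_qubits

for n in (10, 20, 30):
    print(n, classical_storage(n), quantum_storage(n))

# At 16 bytes per complex amplitude, ~50 particles already needs petabytes:
bytes_needed = 16 * quantum_storage(50)
print(f"{bytes_needed / 1e15:.0f} PB")  # ~18 PB
```

Already around 50 interacting particles, storing the state vector alone outruns any realistic classical memory, long before the 10²³ particles of a real material.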

The quantum many-body problem is exactly this problem. I want to simulate a material — for example, the cuprate superconductors, which are very relevant in high-tech applications. If I want to describe these materials, I know they have quantum features. They are superconductors, so they are definitely quantum. But if I want to describe these 10²³ particles in there, I would need an exponential amount of computing resources that would exceed all the computing resources available in the universe if we were to simulate that classically. We cannot simulate this exactly on a classical computer — it’s impossible, and the reason is that these particles are interacting with each other, and this makes it so hard to calculate material properties.

Now, of course, there are methods to treat problems that are not so strongly interacting. For example, if I take a material that is very simple, like a metal — something like copper or some other very simple metal — typically, it turns out that these materials are not so strongly interacting. I can describe them as a single particle that doesn’t feel so much of the other particles, or only feels a mean field of the other particles. This allows me to make some predictions in materials science. But when it comes to these quantum functional materials — these materials whose essential behaviour can only be explained by a collective process that is quantum in nature — it will be very hard for us to predict these features in physics.

Konstantinos Karagiannis: Listeners should be familiar with this concept: Every time you add a particle, you’re doubling. That’s the nature of qubits. With perfect qubits, a 101-qubit system would be twice as powerful as a 100-qubit system. This is something we should already be familiar with.

The other thing to key in on is, you talked about how it gets out of hand — the computing resources. Of course, if we go back to 1981 or 1982, Richard Feynman, that’s the reason he came up with the idea for a quantum computer. He realised that to simulate this world, we need something quantum because the world is quantum.

Benedikt Fauseweh: Exactly. You made a good point about when Feynman realised this and said that if nature is quantum, our computers should also be quantum to be able to capture this behavior. And that’s the fundamental reason Feynman proposed the quantum computer: quantum simulation, for exactly this reason. He said, “I can’t do these calculations anymore. In some cases, I can do them on a piece of paper, but in most cases, in the general case, it’s very hard. And I want to do good simulations.” That’s what the quantum computer would be most useful for, for us in physics.

Konstantinos Karagiannis: Do you consider simulation of quantum many-body systems to be a fulfillment of his vision in a way?

Benedikt Fauseweh: Absolutely. At the time, of course, Feynman did not have such a nuanced picture of what we could do with quantum devices as we have now. That’s absolutely the case because at the time, we didn’t have these devices at all. By now, we have a more precise picture in the sense that we now distinguish between analog and digital quantum simulation. For Feynman, this analog simulation would be very much a part of his dream, because with that, we can already do simulations that are out of reach for classical HPC simulations, for example. But on the other hand, the full dream of doing any kind of simulation can only be done with digital quantum computing.

Konstantinos Karagiannis: Can you explain the difference between analog and digital, in case people are confused?

Benedikt Fauseweh: Analog platforms are quantum platforms that can be based on qubits. But in analog platforms, I have a limited set of control parameters. I can only simulate a small set of models on these platforms because I don’t have full control over these platforms. But they realise the time evolution of a quantum system very precisely for the given system that I can simulate on these platforms. On the other hand, digital quantum simulation uses a quantum computer, which is universal. I can implement, in principle, any kind of unitary action or unitary transformation on that quantum computer. But I need to discretise my time evolution into gates, and I need to execute those gates on a quantum computer. That is the difference between analog and digital platforms.

On the technology level, those two are often not so far apart from each other. For example, one of the leading platforms in analog quantum simulation is neutral atoms. There has been a lot of progress in analog quantum simulation using these neutral atoms. But by now, these platforms are slowly developing toward the digital approach because they can now do more and more fine-tuned control of these neutral atoms. This, in the future, is, of course, the hope — that we can also do full-scale digital simulations on these platforms as well so that we have the freedom to look at any kind of problem that we want in digital quantum simulation.

Konstantinos Karagiannis: I got to see early versions of, for example, the QuEra machine, and they’ve come a long way since then. We talked about it on the show, but obviously, not everyone listens to every episode, so I wanted to make sure they had a little bit of a background. I appreciate the explanation, and it’s very evident that analog seems to work more in the world of simulation. You create this almost-visual representation with the qubits. Now they are trying to, like you said, create the gates and get these very discrete answers. It’s like we’re moving from the realm of Feynman to the realm of David Deutsch.

Benedikt Fauseweh: One can say so. Deutsch even made these statements before Feynman, if I’m not mistaken.

Konstantinos Karagiannis: There’s that back-and-forth in the late ’70s along those lines.

Benedikt Fauseweh: Feynman, of course, had this physics perspective and said, “I want to learn something about physics.” By now, we’re moving in the same direction. That’s the nice thing. It shows that if I want to do general digital quantum simulation, I need a general-purpose quantum computer. That is the direction we’re looking at technology-wise, as well as software-wise or algorithm-wise and so on.

Konstantinos Karagiannis: Can we talk about how analog and digital approaches differ in simulating when we get down to the nonvariational and variational approaches?

Benedikt Fauseweh: In my paper, I tried to find some kind of categorisation, at least in digital quantum simulation. I tried to divide this into two parts: the nonvariational results and the variational results. Before we talk about the details in each of these sections, we should talk about the distinction between the two. The distinction is that variational methods typically have something like a quantum-classical feedback loop where I define a cost function, which is, for example, the energy of my system. I want to minimise that energy because this gives me the ground state, which is, for physicists, the most important state of a quantum system.

I do on a quantum computer what the quantum computer can do best: It can represent quantum states in a high-dimensional Hilbert space. On the other hand, I let the classical computer do what it can do best, which is minimise functions. I prepare a state on a quantum computer. I measure its energy. I take that energy as input for the optimiser on a classical computer. Then, this optimiser says, “Please adjust the following parameters in your circuit such that we can minimise the energy further.”

This is what a variational algorithm is. It has been proposed specifically for these noisy quantum computers we have right now because the idea is that variational methods are intrinsically robust against noise. For example, let’s say I have a gate that is slightly miscalibrated, so I want to do a rotation by 90 degrees, and it does a rotation by 89 degrees or so. The nice thing about the variational algorithm is that it would realise, “90 degrees is not optimal. I see that. But if you change this to 89 degrees, we get the optimal energy.” This means it’s robust against such small errors.
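The feedback loop Benedikt describes can be sketched in a toy example. This is an illustrative Python sketch, not code from the paper: a one-parameter ansatz for a single qubit, with plain linear algebra standing in for the quantum device’s energy estimate and finite-difference gradient descent as the classical optimiser. The Hamiltonian and all names here are made up for illustration.

```python
import numpy as np

# A small example Hamiltonian (any 2x2 Hermitian matrix works here).
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Ansatz: |psi(theta)> = Ry(theta)|0>, the state the "device" would prepare.
def ansatz(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

# Cost function: the energy the device would estimate by measurement.
def energy(theta):
    psi = ansatz(theta)
    return psi @ H @ psi

# Classical optimiser: finite-difference gradient descent on theta.
theta, lr, eps = 0.1, 0.4, 1e-6
for _ in range(200):
    grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
    theta -= lr * grad

ground = np.linalg.eigvalsh(H)[0]  # exact ground energy for comparison
print(round(energy(theta), 4), round(ground, 4))
```

The loop converges to the exact ground energy of this toy Hamiltonian; on a real device, each `energy` call would be many shots of a noisy circuit, which is where the measurement overhead discussed below comes from.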

On the other hand, these variational methods have other drawbacks — for example, at some point, if the circuit gets too deep, we realise we cannot optimise it anymore because it’s missing a gradient. The gradient goes to zero. And if the gradient goes to zero, we have no information to go on.

In this paper, I discuss the digital quantum simulation based on these variational and nonvariational results. The nonvariational results were one of the first ideas proposed in order to do quantum simulation. Feynman said, “We want to build this quantum computer.” But he didn’t argue so much mathematically why this is possible. And then Lloyd came up and said, “If we have a Hamiltonian and we can do a Trotter-Suzuki decomposition of this Hamiltonian — this is like a discretisation of my Hamiltonian — then I can do the time evolution efficiently on a quantum computer.”

Based on this so-called Trotterization, many people looked into, what can you do with these current quantum computers? They use this approach of discretising the time evolution and trying to see, how far can we get with this? For example, how large can I make the systems, and how many time steps can I do? Those are essentially the two dimensions we’re looking at: the width of the circuit (the number of qubits) and its depth.
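The Trotter-Suzuki idea Benedikt credits to Lloyd can be illustrated in a few lines. This is a hedged NumPy sketch using exact matrix exponentials in place of gates: split a Hamiltonian H = A + B into two non-commuting pieces and watch the alternating-step approximation converge to the exact evolution as the number of steps grows.

```python
import numpy as np

# Two non-commuting pieces of a toy Hamiltonian: Pauli X and Z.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A, B = X, Z
H = A + B

def evolve(M, t):
    """exp(-i*M*t) for a Hermitian matrix M, via eigendecomposition."""
    w, v = np.linalg.eigh(M)
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

t = 1.0
exact = evolve(H, t)                 # exact time evolution under H
errs = []
for n in (1, 4, 16, 64):             # number of Trotter steps
    step = evolve(A, t / n) @ evolve(B, t / n)
    trotter = np.linalg.matrix_power(step, n)
    errs.append(np.linalg.norm(trotter - exact))
    print(n, f"{errs[-1]:.4f}")      # error shrinks roughly as 1/n here
```

On a real device, each `evolve(A, t/n)` factor would be a layer of gates, so finer discretisation buys accuracy at the price of circuit depth, which is exactly the trade-off discussed next.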

It turned out that in many of these systems, you can get accurate results, but only if you restrict the number of qubits as well as the depth of the circuit you’re looking at. I did a study in 2021 with IBM devices back in Los Alamos, and there we could go up to roughly 10 qubits, which we could simulate for roughly 10 time steps.

By now, it’s in the journal Quantum Information Processing. We look into this and say, “What can we do there in time evolution, for example?” And it turns out that it was working well, but you had to do a lot of work to get this done, in the sense that if you execute these circuits on the IBM devices without any additional preprocessing and postprocessing, the output is 50% trash — something like that. You have to preprocess and postprocess your data using these error-mitigation methods, and that’s what we did. We found out you can potentially go up to 10 qubits.

There are other things you can now do on these quantum computers beyond just time evolution. One can also try to realise topological systems on these devices. This was done on the Google Sycamore quantum processor, where they showed you can realise, for example, the toric code on this 2D lattice, or you can look at something we are also starting to look into much more: time crystals.

Konstantinos Karagiannis: That generated a lot of buzz.

Benedikt Fauseweh: Exactly, but they’re interesting. Why is that the case? One way to benchmark quantum computers nowadays is, for example, with quantum volume. The way you do this is, you run a bunch of randomised circuits on your quantum computer, and you measure, “How often do I get the right result out of this randomised circuit?” On the other hand, you can play around with these VQE circuits or these variational circuits in order to find ground states.

Now, what sits in between is these time-evolution circuits, which have a clear structure, but they can still go into places in the Hilbert space that are very entangled, very difficult to describe by classical methods. Time crystals are just on the edge of this regime. Time crystals are systems you evolve in time. They’re called time crystals because, essentially, the idea is that you drive the system with some period t, and the system itself responds with something that is two times or three times this period t. This is what we call a broken symmetry, and therefore, we call it a crystal. It’s just a fancy name.

It’s the idea that if the drive has period t and your response has period 2t, you are a crystal in time. These discrete time crystals are just on the verge of being barely simulatable by our classical methods. But if you add noise to them, it’s very difficult to simulate these systems. This is something that in the future will be very promising to look at more — trying to look at such time crystals and ways to stabilise these time crystals on quantum computers. So those are the directions that are looked into when we talk about the nonvariational results.
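What that subharmonic response looks like can be seen in a deliberately oversimplified sketch: a single spin flipped once per drive period. (A real discrete time crystal needs many interacting, disordered spins to stabilise this response against imperfect flips; this toy only shows the period-doubled signal itself.)

```python
import numpy as np

# Drive: an ideal pi rotation about X applied once per period.
theta = np.pi
Rx = np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
               [-1j * np.sin(theta / 2), np.cos(theta / 2)]])
Z = np.array([[1, 0], [0, -1]])

psi = np.array([1, 0], dtype=complex)  # start in |0>
signal = []
for step in range(6):
    psi = Rx @ psi                      # one drive period
    signal.append(round((psi.conj() @ Z @ psi).real))

print(signal)  # [-1, 1, -1, 1, -1, 1]
```

The drive repeats every period, but the measured Z signal repeats only every two periods: that doubled response is the “broken” time-translation symmetry that gives time crystals their name.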

On the other hand, there’s this whole idea of using these variational methods, like the variational quantum eigensolver. The idea of this is that you have this function you want to optimise — for example, the energy. This is then evaluated on a quantum computer, and then you optimise it classically. And it was shown that with this, you can describe small quantum systems like those systems I investigated back in my PhD.

But the problem in these methods is twofold: On the one hand, you have a high-dimensional optimisation problem in front of you, so that is something you need to treat with your classical computer. And if that problem is not well-behaved, then you can end up in local minima or in something that is not good enough to be competitive against other classical methods. The other thing is that it requires a lot of measurements, so you need to measure, at each optimisation step, a lot of observables in order to measure your energy and make a step toward the right direction.

This is why I’m skeptical of these variational methods, although I have developed two variational algorithms — not in order to find the ground state, but rather to find these time-periodic states. You can also use variational methods to solve the time-dependent problem. It’s a lot less clear whether this is sustainable in the long term. When we use more and more qubits, it’s not so clear whether these variational methods will continue to work there as well. That’s an open question.

Konstantinos Karagiannis: Do you think that with logical qubits, that would be different?

Benedikt Fauseweh: All the algorithms so far, be they variational or nonvariational, inherently work on a logical as well as on a noisy quantum computer. There are one or two exceptions. There are algorithms proposed that say that if you have noise in your device, this helps. But to me, they are special cases. Otherwise, all the algorithms so far we have looked at are algorithms that work on both: They work on noisy machines as well as on error-corrected machines.

Once we have an error-corrected machine, I don’t need the VQE, for example, anymore. The VQE is something I can still use, but if I have a fully error-corrected quantum computer, it might be that I can just replace certain variational methods with Trotterized methods such that I don’t need to suffer from this optimisation problem I need to solve on a classical computer. In that sense, in the end, all algorithms are quantum algorithms. They run on both devices. But whether I would use them on both devices, that’s a different question.

Konstantinos Karagiannis: For businesspeople who are listening, what are you looking to accomplish in scientific research, which is important? I like to point out that not enough people are using quantum computers for quantum physics. What do you look for there compared to, like, what this type of work might yield in the more practical applications — optimisation and things like that?

Benedikt Fauseweh: Quantum simulation itself has a lot of industry applications. If I think about materials science, for example, there are a lot of things that are not simulatable with classical computers — and where, for example, a quantum computer might help me to understand certain kinds of materials much better. Right now, many kinds of simulations are based on molecular-dynamics simulations. If I can go down one level and say, “What about the quantum effects? Can I treat them as well?” it gets interesting for various kinds of industries, be it the chemical industry, be it pharmaceuticals and so on.

In this realm, definitely, this can have an impact. It might be a little bit down the road because it depends also on how well we develop algorithms for these devices. But in the long term, it will be something that will have an effect in these industries as well. If we think about materials science, when it comes to quantum functional materials, then we’re on a much closer road. If we think about materials for metrological applications — quantum metrology, for example, where I want to sense something with a much higher precision than before — it helps to have entangled states. And these entangled states, I can understand much better with a quantum computer.

If I say I want to have a quantum sensor and I want to have the best entangled state such that it can detect a magnetic field to a much higher precision, I can simulate that entangled state on a quantum computer, and this will immediately affect the performance of my sensor. And with the sensor, I can detect broken wings on an airplane.

If there is something that can detect this much more efficiently, that is worth something. That is something that can change a lot in quantum metrology. On the other hand, I draw a big picture of quantum simulation in the whole field of quantum technologies. The more opinionated part of this article is where I say, “What role does quantum simulation actually play in this whole realm of quantum technologies?” To me, quantum simulation allows us to understand quantum matter and technologies on a much better level than before, which could then lead to spin-off applications such as topological qubits. Maybe I can find topological qubits by simulating them first in a better way, and then I realise, this material is a topological material, and it hosts topological qubits.

Or, if I talk about spintronic applications, spintronics has the idea of using spin degrees of freedom in materials in order to do ultrafast manipulation of classical data. This is something we still need: We still need to do a lot of simulations with classical data. And if I can improve our classical devices by going from ferromagnets to anti-ferromagnets, this means I would change the typical frequency at which they work from gigahertz to terahertz. If I now tell you, “Your disk drive will be 10,000 times faster by switching to terahertz,” that’s important information for all the big data guys out there, so that’s something: “That’s a fast hard drive — we could use that in AI,” for example.

Then, if we have these quantum technologies, they can enhance our quantum computing platforms. So far, we still have problems with noise in these platforms — with scaling them and so on. If we can improve these devices by finding better technologies based on quantum simulation, I can do better quantum computing, which enhances the quantum-simulation part that I want to do on these devices.

Konstantinos Karagiannis: We’ve come a long way since Feynman. Do you see this as an infinite loop we might enter, where quantum computers improve quantum computers and back and forth?

Benedikt Fauseweh: That’s exactly the kind of picture I have in mind because, right now, look at the superconducting devices. They are using aluminum-based superconductors, and they have problems. They are amorphous materials. Due to their amorphous nature, they get defect states that are talking to the qubits. These are the famous two-level systems that are talking to these qubits and are ruining hours of measurements because suddenly there is this two-level state that appeared out of nowhere, and suddenly it’s gone again. If we can use better materials there and understand quantum matter on a better level, I can engineer new kinds of quantum computers based on these new materials.

The idea is that we need something to get this quantum-computing circle starting. If you compare this to Moore’s law, what was the reason Moore’s law worked so well? Moore’s law, as a reminder, is the law that in classical computing, we roughly get a doubling of our processing power every two years. This was based on the fact that our classical circuits are miniaturised at the same speed, so we get smaller and smaller length scales such that I can do more and more computation on the same die size.

Konstantinos Karagiannis: Squeeze more transistors.

Benedikt Fauseweh: Exactly. Squeeze more transistors in there and get more out of it for the same cost. But we cannot convert Moore’s law to quantum computing. Let me give you a very simple reason for that: Trapped ions work with single ions as their qubits. I cannot miniaturise an ion anymore. It’s one ion, and that’s it. This approach from classical computing doesn’t work for quantum computing. It’s not just about the miniaturisation. It has to be another concept.

This other concept is that quantum technologies self-improve by improving our knowledge about these systems and thereby trying to get this circle of technology running. This could then be the driver for a quantum Moore’s law that has a continuous improvement of the quantum computers in it. And here, quantum simulation is one of the main players because only with quantum simulation do I get this better understanding of these devices and these materials and so on. That would, of course, be a whole new industry. But that is speculative.

Konstantinos Karagiannis: I always laugh when I see in a movie that someone shrinks down to the quantum level — what, exactly, are you made of right now? There’s nothing smaller. I don’t know what your little particles are made of right at this moment.

Benedikt Fauseweh: Yeah. This was Ant-Man. I saw that movie, and I thought, “Maybe not so much, but it’s a nice try.”

Konstantinos Karagiannis: But it’s a good point that because we can’t miniaturise these things, we’re going to need other ways to double.

This is tied in with your work with the space agency: People are familiar with the quantum many-body problem. Maybe if they’ve experienced tensor networks, it comes up there and in other places. But they’ve also heard of a different kind of many-body problem with actual giant bodies: the classical three-body problem. That’s getting a lot of headlines lately because of the Netflix show based on the amazing books.

I have to ask you because people have probably been wondering a thousand times during this podcast: Can any of this be applied to the Alpha Centauri system? You’ve got a few stars and a planet. Can this improve any of our understanding of something like that?

Benedikt Fauseweh: It’s great that you asked that because I have one comment about your question and one comment about the series, because I think the series is great. Let me first answer your question. Indeed, there is a connection there: so-called few-body quantum systems. These are like the planets orbiting the stars in that it’s not so many bodies; it’s just a few particles. But already, this few-body problem, if it’s not in a crystal but in continuous space, is very hard to solve with classical HPC methods.

My good colleagues in DLR at the Institute for Quantum Technologies also look into this and look into few-body problems, which are really related: The quantum mechanical three-body problem is also very complicated. It’s not solvable analytically, just like the three-body problem of classical mechanics.

The other thing is, of course, that if we look at cosmology, if we look at high-energy physics, if we look back at what happens if we are close to the big bang, quantum plays a very big role. The laws of quantum mechanics play a very important role in the sense that we have very dense matter and it’s interacting. It’s a many-body problem we’re looking at here. Doing simulations in these kinds of environments is very difficult. It’s the same thing if we look at the atomic nucleus, trying to derive quantum states in quantum chromodynamics, which describes the atomic nucleus. It’s a quantum field theory, and it’s a strongly interacting many-body problem.

There are all kinds of connections, from the smallest — the atomic nucleus — to the largest: What if I have the big bang and very dense matter there? This is also a quantum problem I need to solve. I know a lot of good colleagues coming from high-energy physics who want to answer these questions, and they try to do this with a quantum computer. It could have an effect on our knowledge about the universe as well, which would, of course, be great. This is something I’m looking forward to.

The comment I have on the series is, the series is great. It just depicts one thing as really strange. As you know, in this Netflix show, the particle accelerators are doing something completely strange. The physicists are all, like, “Now I’m losing my job,” and so on. They’re all super worried. The thing is, if this happened in real life, the reactions would be the exact opposite, because what a physicist would say at that moment is, “I don’t understand this. This is great. This is something new. Oh, my God. I want to understand this. And new stuff is happening. Oh, my God!” That’s the greatest thing that can happen to a scientist — getting data they don’t understand. That’s something to learn. That’s the moment where I say, “Great — now the work starts! Now I can think about this and why this happens” and so on.

Konstantinos Karagiannis: It would be a dead end because they were messing with devices.

Benedikt Fauseweh: Yeah, in the show, that’s portrayed as them trying to stop us. But in reality, they would help us indirectly by showing us what’s possible, blowing our minds and pushing us to try new things we hadn’t even thought about. In that sense, they would help us in real life if we got such crazy new physics to examine.

Konstantinos Karagiannis: My final thought on the show is that instead of building games to train people, simulate what’s going on, flee their world and spend 400 years in space, maybe they should have just built a quantum computer. That’s really simplifying it, but you never know. It might have helped them understand the perturbations better. It’s funny that it’s never mentioned.

Thank you so much for digging into all this work with us here, and I’ll be linking everything in the show notes so people can check out your work, and I hope to talk to you again one day in the future.

Benedikt Fauseweh: Thank you so much. This was very nice.

Konstantinos Karagiannis: Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.

The quantum many-body problem is the challenge of simulating quantum systems with a large number of particles that interact with each other. As the number of particles increases, the computational overhead of simulating such a system increases exponentially. As Richard Feynman noted in 1981, to simulate such quantum systems, we need a quantum device, and we’ve done so with analog and digital approaches.
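To make that exponential overhead concrete, here’s a quick back-of-the-envelope sketch (my illustration, not from the episode): the full state vector of n interacting two-level particles needs 2^n complex amplitudes, so classical memory requirements explode long before you reach interesting system sizes.

```python
# Illustrative sketch: memory needed to store the full state vector of
# an n-particle quantum system on a classical computer grows as 2^n.
def state_vector_bytes(n_particles: int) -> int:
    """Bytes to hold 2**n complex amplitudes (16 bytes each, complex128)."""
    return (2 ** n_particles) * 16

for n in (10, 30, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} particles -> {gib:,.2f} GiB")
```

At 30 particles the state vector already needs about 16 GiB; at 50, roughly 16 million GiB, which is why Feynman argued for a quantum device instead.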

Analog quantum simulation has already shown an advantage over classical computers: a physical device built from trapped ions, cold atoms or other elements is tuned so that it directly represents the quantum system being simulated. However, there are still limitations to analog approaches. Digital quantum simulation aims to handle systems that don’t map onto an analog device, evolving them over time in discrete steps implemented with gates. This is closer to what you typically picture when you think of a quantum computer.

Digital quantum methods include variational and nonvariational methods. Variational methods are robust against noise in the NISQ era and use a combination of classical and quantum systems in a feedback loop to minimise a system’s energy and get an optimal answer. Nonvariational methods rely on discretising the time evolution of a Hamiltonian and can handle a system’s real-time dynamics. Of course, we’re still limited by qubit counts, circuit depth and noise, but regular listeners know we’re making a lot of progress in all these areas.
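The nonvariational idea of discretising time evolution can be sketched in a few lines (a toy illustration of first-order Trotterization, my own example rather than anything from the episode): split the Hamiltonian into easy-to-exponentiate pieces and alternate many small steps of each, which is exactly what the gate sequence on a digital device implements.

```python
import numpy as np

# Toy first-order Trotterization for a single spin with H = a*X + b*Z.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def expi(pauli, theta):
    """exp(-i*theta*P) for a Pauli matrix P, using P @ P = I."""
    return np.cos(theta) * I - 1j * np.sin(theta) * pauli

a, b, t = 0.7, 0.4, 1.0
H = a * X + b * Z

# Exact evolution exp(-i*H*t) via eigendecomposition of the Hermitian H.
evals, evecs = np.linalg.eigh(H)
U_exact = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

# Trotterized evolution: alternate small X and Z rotations (the "gates").
n_steps = 1000
dt = t / n_steps
step = expi(X, a * dt) @ expi(Z, b * dt)
U_trotter = np.linalg.matrix_power(step, n_steps)

error = np.max(np.abs(U_trotter - U_exact))
print(error)  # shrinks as n_steps grows
```

Because X and Z don’t commute, each step incurs a small error, but the total error falls off as the steps shrink — the trade-off being that more steps mean deeper circuits, which brings back the noise and circuit-depth limits mentioned above.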

What can we do with these simulations as they improve? Quantum simulation can improve quantum sensors by simulating entangled states and enhancing their performance. In cosmology and high-energy physics, quantum simulation can help us understand the universe and solve complex many-body problems. Perhaps most important to QIS, quantum simulation may play a crucial role in advancing our knowledge and driving the development of better quantum technology. And as we half-joked on this episode, a quantum computer might have been all the aliens in The Three-Body Problem needed to save their world.

That does it for this episode. Thanks to Benedikt Fauseweh for joining to discuss his work, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.