Podcast | AI-Powered Digital Twins for Quantum Hardware — with Quantum Elements

The "Constellation Platform" is revolutionizing the path to fault-tolerant quantum computing. Rather than relying on traditional state-vector simulations that hit a wall at 50 qubits, this platform from Quantum Elements uses a breakthrough method, stochastic compression, to create realistic digital twins of quantum hardware at scale. This approach allows developers to simulate the complex time evolution of a system, including specific noise models such as crosstalk and decoherence, without the high cost of running thousands of shots on physical QPUs.

Host Konstantinos Karagiannis sits down with Izhar Medalsy, co-founder and CEO of Quantum Elements, to explore the cutting-edge intersection of AI and quantum engineering, highlighting a "quantum copilot" in Constellation powered by Claude. This AI-native platform doesn't just help write context-specific code; it also acts as a virtual supervisor that can troubleshoot failed experiments by comparing real-world results with first-principles simulations. Whether you are a professional looking to optimize algorithms through advanced error suppression or a researcher seeking a hardware-agnostic layer to run code across different vendors, listen and learn how digital virtualization might play a key role.

Guest: Izhar Medalsy from Quantum Elements

Quantum computing capabilities are exploding, creating disruption and opportunities, but many technology and business leaders don't understand the impact quantum will have on their business. Protiviti is helping organizations get post-quantum ready.
In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.

"We really went through the efforts of making this platform AI-native. Imagine you have a result that you're not sure what it is. You can upload the result to our quantum copilot and it will give you guidance on the reasons that this experiment didn't work and ways to fix it. We have the ground truth for your system because we're able to simulate it from first principles."

Konstantinos Karagiannis: We've had quantum simulators for quite a while, but what if we could create actual digital twins of QPUs? Quantum Elements found a way to do just that — potentially letting developers save thousands of dollars by tweaking their circuits on such twins. We're talking about simulating the complex time evolution of a system, including specific noise models like crosstalk and decoherence. Built-in access to AI can even help with troubleshooting. Find out if digital virtualization of real quantum computers is what the industry has been missing in this episode of The Post-Quantum World.

I'm your host, Konstantinos Karagiannis, leading Quantum Computing Services at Protiviti, where we're helping companies prepare for the benefits and threats of this exploding field. I hope you'll join each episode as we explore the technology and business impacts of this post-quantum era.

Our guest today is the co-founder and CEO of Quantum Elements, Izhar Medalsy. Welcome to the show.

Izhar Medalsy: Thanks for having me, Konstantinos.

Konstantinos Karagiannis: People listening are probably wondering, "Is this a chemistry thing? Why is it called Quantum Elements?" Let's set the record straight — give us the elevator pitch for what your company does.

Izhar Medalsy: I think that's a great question.
Fundamentally, we are providing the elements that allow us — and the industry — to build quantum computers at scale. There are many elements and components required to make those quantum computers work, and we're tackling them by starting from optimizing the hardware all the way to making quantum algorithms work. As a software company, we're delivering the components that optimize and facilitate quantum computers at scale with higher performance.

Konstantinos Karagiannis: AI is very much a part of this throughout, and we're going to be touching on how different machine learning techniques — not just isolated techniques — are helping your product exist. It's called the Constellation Platform, right? If someone were to try to visualize it, they might think of it as a way of simulating real hardware and improving the development of code on real hardware. That way, when you actually run it on physical hardware, it's better and more cost-effective. There are a lot of benefits to it, and we'll get to some of those. So, first of all, what kind of simulation are we talking about? Is this a state-vector simulation like we've seen in the past, or is this something different?

Izhar Medalsy: It's very different. Think about having at your disposal a virtual quantum computer at scale that takes into account all the governing parameters of the hardware, including the factors that make this hardware work in a non-ideal way — meaning the noise and the environment. As we know, quantum systems are very finicky and obviously suffer from issues related to crosstalk, noise, coherent noise, etc. The ability to consider all of that at scale and make it available to the end user for any modality provides you with an engine that is, on one hand, very flexible, but also extremely realistic.

In other words, you get a digital twin of your quantum hardware at scale.
This now allows you to ask questions not only about the state of your system today, but also about how the system will evolve over time and how to make that system — and the applications you're going to run on it — perform at their best. That's really, in a nutshell, the engine you have behind the scenes. On one end, you have the physical representation of your system. On the other, you have the ability to build this kind of virtual machine at scale, with a large enough number of qubits to address all the different layers that make up a working quantum computer.

Konstantinos Karagiannis: So, if you were to run something on your engine, you can work out the kinks for orders of magnitude less money than running it over and over again — taking real shots and running up a bill depending on what hardware you're using. The idea is not just to develop your code better, but also to predict the kinds of things that could go wrong and prevent them from happening. Does that improve the accuracy of what you develop as a circuit?

Izhar Medalsy: Exactly that. One of the things that we know is very limiting in a quantum process is the fact that you can only ask a question once. So, once you measure the state of your system, you disrupt it in a way that is unrecoverable. When you have a realistic digital twin, you now have the ability to eliminate and target the different governing principles and noise models that are part of what makes your hardware your hardware — your specific system. By doing this, you're able to pinpoint areas of interest and address them in a very surgical and accurate way.

In that regard, you can think about this machine as providing you with the ability to look inside and understand what the limiting factors are in the performance of the machine. On the other hand, it allows you to experiment with things that would take much longer and cost much more if you were to do them on real hardware.
For instance, what if you want to design a different chip or configuration layout? What if you want to examine long-range interactions between different qubits? Or what if you would like to explore new mid-circuit error suppression strategies at scale? All of that is relatively hard on hardware simply because the hardware is what it is. But if you're able to dissect and address different components individually, you have the freedom to pinpoint where the issues are, address them, and then proceed from there.

If you were to work on this engine as an end user, you would have to pick a target — like which processor you want to simulate, which companies you have in mind, and how many basic machines you can choose from. Obviously, you need to focus on where the majority of usage is. Currently, we're mainly focusing on superconducting qubits and expanding to ion traps and neutral atoms. You can start by simulating, let's say, an IBM machine or a Rigetti machine, or you can look at an effective model — meaning you look at the qubit's T1/T2 (the decoherence times, or how long those qubits remain in a functional state) and add your own noise models. From that point on, you really have a virtual machine — a system that you can start submitting circuits to, and you can even look at how to develop quantum error correction strategies on this specific system.

To your point, we have a few vendors that we're working with. We also have control electronics manufacturers that are part of this ecosystem we've built, and we keep adding more modalities and more companies to the Constellation Platform.

Konstantinos Karagiannis: So, I want listeners to understand a little bit about what kind of simulation this is. As I've said numerous times on the show about the whole state-vector concept, 50 qubits is the limit — the universe doesn't let us cross that.
When it comes to state-vector simulations of qubits, it was just recently verified on a supercomputer that you would need two of them to go to 51 qubits, for example. So, what are you doing that's different? How are you approaching this? If you go all the way back to Feynman, the idea is that you need a quantum machine to simulate reality. Now, it sounds like you're using a classical machine to simulate the "quantumness" of a quantum machine, which is trying to simulate reality. So, it's a bit of a Russian nesting doll. If you could just explain that a little bit for listeners.

Izhar Medalsy: You captured it very nicely. When we look at the landscape of quantum computers and consider how to accelerate the development and maturity of this industry, it's clear that we need tools that allow us to understand and virtualize systems before we fabricate them. We need the freedom to experiment with different parameters, connectivity schemes, and quantum error correction strategies, among others. In other words, if we don't adopt the tools that made classical computers what they are today, we will be rate-limited in how quickly we can achieve functional, fault-tolerant quantum computers.

Our mission began with understanding this limiting factor and directing all our efforts toward solving it. To do that, you need to do exactly what you described: examine the quantum system in front of you, derive it from first principles, and run it on a classical computer — meaning, solve those quantum equations to describe your quantum hardware on classical systems, leveraging large-scale computing infrastructure.

Now, the computational cost, as you pointed out, increases significantly depending on the approach. If you take a brute-force approach — solving the density matrix and everything your system comprises, including environmental influences — you're looking at a complexity of four to the power of n.
So, every qubit you add increases the computational cost by a factor of four. If you use a trajectory-based system, the complexity is two to the power of n.

We're using a new method that allows us to focus on the areas in the dimensionality space that actually need to be solved. This reduces the parameter space to about 1.2 to the power of n, and sometimes even below that — resulting in a significantly lower computational cost. With this capability, we're able to strike the right balance between simulation accuracy and scale, enabling us to break the 50-qubit barrier you mentioned.

In fact, we've demonstrated that we can go far beyond, reaching distance-7 quantum error correction scale. This is a project we're working on with one of our partners, and it allows us to address all the relevant questions of interest — from hardware to the most advanced quantum error correction strategies — using this new computational method. All of this stems from decades of research by Professor Daniel Lidar, one of my co-founders and a world expert in developing solvers that enable quantum systems to be solved at scale.

Konstantinos Karagiannis: We had Quantum Rings on quite a while back, and they were able to reproduce the Google 2019 advantage experiment using tensor networks and some approximations, running on a laptop for two and a half days. This is not tensor networks, obviously. You're using things like — well, I admit, I have a little inside knowledge here because I got to see a demo of this — so this is real, listeners; I actually saw it working. You're using something called stochastic compression, essentially, to minimize the loss of information while also reducing the data set required. That's one of the shortcuts taken. All this is also taking into account running it on the cloud and utilizing as many GPUs or whatever hardware resources you have available. Is there a real practical limit to how many physical qubits you're able to simulate?

Izhar Medalsy: Yes, there is.
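To put the scaling numbers from a moment ago in perspective, here is some quick back-of-the-envelope arithmetic. This is purely illustrative, using the exponents quoted in the conversation; the actual cost model of the platform's stochastic compression is not public.

```python
# Rough state-space sizes for the three scalings mentioned in the episode:
# full density matrix ~ 4^n, quantum trajectories ~ 2^n, and the
# compressed approach quoted at roughly 1.2^n.
for n in (30, 50, 100):
    print(f"n={n:3d}  4^n={4.0**n:.2e}  2^n={2.0**n:.2e}  1.2^n={1.2**n:.2e}")
```

At n = 50, a trajectory-based simulation already tracks on the order of 10^15 amplitudes, while 1.2^50 is under 10^4; even at n = 100, 1.2^n stays below 10^8. That gap is why a roughly 1.2^n method can cross the 50-qubit wall on classical hardware while brute-force approaches cannot.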
The idea here is not that we're here to replace quantum computers; to your point, we are here to provide the tools to accelerate the development of quantum computers. We think that you can probably push beyond 100 qubits, but at this scale, you're looking at something very interesting.

From the work that Google and others have done, we know you need in the range of a few tens of physical qubits to get one logical qubit. Once you have the ability to simulate a large enough ensemble of physical qubits that represent one logical qubit, you can do something very interesting: you can take the noise traces of your physical qubits into your logical qubits. From that point on, you're actually scaling the logical qubits. You have those logical qubits that you can now simulate, and they retain the memory of how you composed them from physical qubits. So now you can look at, say, 20 or 30 logical qubits with some noise traces. That's also a large enough system to ask most of the questions that are of interest when you're looking at what the industry is now gearing towards — hardware-aware quantum error correction strategies and algorithms.

One of the things we know is going to stay with us, because of the nature of these quantum systems, is that they will remain noisy. Even if we reduce the noise over time, the fundamental nature of quantum systems is that they're unstable and are statistical analog systems.
The ability to bring all the traces of the hardware into your simulation environment and provide that as a tool — one that allows you to take the application and see how to best execute it on specific hardware — is something we see as very valuable, and obviously, our customers do as well.

Konstantinos Karagiannis: So, you anticipate evolving this platform within the next year, let's say, to have end users utilizing the number of logical qubits that are appearing on these machines, correct?

Izhar Medalsy: Yes.

Konstantinos Karagiannis: If IBM says, "Hey, use 20 logical qubits today," your aim is to simulate that perfectly so you can still achieve the same benefits, create a circuit, and anticipate how it will actually perform in the real world before investing money to do so. Is that the idea?

Izhar Medalsy: Exactly. And, you know, this segues into AI; obviously, one of the factors hindering the development of AI to accelerate quantum is hardware accessibility. If you have at your disposal an extremely realistic simulator that allows you to see — and this is a very important detail — the time evolution of your quantum state, you gain significant insight. You mentioned previously tensor-based, gate-based simulators; they look at timestamps, not at the evolution of your system as your hardware evolves. These simulators certainly have their use cases, but if you want a system that truly represents your hardware, you need to observe the time evolution of your system.

If you have access to an infinite amount of these virtual hardware resources, you can ask questions in a very intelligent way. Now, you have a platform that addresses the industry's need for data. If we consider the next stage of quantum computing's evolution as generating the data that will propel the industry forward, one of the most effective ways to do that is through digital twins.
These digital twins allow you to build context-specific data tailored to the problem of interest.

If your focus is on creating the best physical gates for your qubits, obtaining a realistic representation of how those qubits work and interact — with the world and with other qubits — will accelerate development, because you are no longer dependent on the cycles of fabricating those qubits in the lab. On the other hand, if you are interested in new or exotic quantum error correction techniques, such as QLDPC or other emerging methods, you can devise your own connectivity schemes and run them at scale, many times, to train models and address those questions.

This is where we see complementary capabilities accelerating hardware and full-stack systems — through the ability to pinpoint the layer of interest and provide very realistic simulation tools, much like classical systems do today to accelerate the development of GPUs and next-generation CPUs, etc.

Konstantinos Karagiannis: So, you hinted earlier that with simulation, you can do something that you can't do in real life — namely, you can pause computation and observe what's happening in the middle. I assume some of that is simply a benefit of how your code runs. But would this also help in the development or improvement of these physical machines, like, for example, those produced by IBM?

Izhar Medalsy: Yeah. So, we do see that one of our customers is using those realistic simulations to do exactly that. For example, take a process that is very common in superconducting qubits: crosstalk. This means you have two qubits, and unfortunately, when you're performing an operation on one of them, the other one senses it. They might all be on the same line, or crosstalk occurs simply due to the physical proximity of one qubit to another.

How do you simulate something like that without going to the fab and repeating the process again and again until you can understand the best working conditions for those qubits?
If you're able to control the crosstalk and the working parameters of those systems, you can create a feedback loop where you find the best pulses for optimal working conditions and develop the best strategies to eliminate crosstalk. By applying optimization algorithms that we've developed, you can then proceed to fabricate the next generation of your QPU, which will have better performance than the current one.

Konstantinos Karagiannis: Yeah, I got to see the work you did with Shor's algorithm to avoid crosstalk. If you look at what the IBM circuit does on its own, it's not quite getting the two numbers correctly, but your solution was able to get them right 99% of the time. Of course, they're small numbers for now, but now we know how to factor 21, thanks to you. Haha.

Izhar Medalsy: So, it's interesting — you know, 21 is still kind of the benchmark. But I can share with you that we're working on factoring larger numbers and actually doing it to the full extent of Shor's algorithm, which still hasn't been done so far.

But going back to your point on optimization, the idea is that you can now think about connecting the dots between the application layer and the hardware layer in a very informed way because you can control all the parameters from top to bottom.

One of the things we were able to demonstrate to you in our previous conversation is the fact that not only are we giving you those very realistic simulation tools, we're also giving you tools to address noise at every step in the stack.

For instance, one of the technologies that Professor Daniel Lidar is very well known for is error suppression. Error suppression is the ability to insert mid-circuit pulses to address issues that arise from the fact that qubits don't like to sit idle. When qubits sit idle in a quantum algorithm, they start to get out of phase and drift.
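The idle-qubit drift Izhar describes, and the refocusing pulses that counter it, are the textbook spin-echo and dynamical-decoupling idea. A toy Monte-Carlo sketch makes the effect concrete; this is an editorial illustration under a simple quasi-static noise model, not Quantum Elements' implementation, and the detuning spread `sigma` and idle time `t` are made-up units.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
shots, t, sigma = 20000, 1.0, 4.0  # hypothetical units for idle time / noise spread

# Each "shot" sees a random but constant frequency detuning (quasi-static noise),
# a common simple model for low-frequency dephasing of an idle qubit.
delta = rng.normal(0.0, sigma, shots)

# Idle qubit prepared in |+>: each shot accumulates phase delta * t, so shots
# drift out of phase and the averaged coherence |<e^{i*phi}>| decays.
free = np.exp(1j * delta * t)

# One echo (pi) pulse at t/2 flips the qubit, so the phase accumulated in the
# second half cancels the first half exactly for noise static within a shot.
echo = np.exp(1j * (delta * t / 2 - delta * t / 2))

print(f"coherence without echo: {abs(free.mean()):.3f}")  # decays toward 0
print(f"coherence with echo:    {abs(echo.mean()):.3f}")  # fully refocused
```

Real error-suppression sequences generalize this single echo into trains of pulses chosen against the hardware's measured noise spectrum, which is where a realistic noise model of the specific device matters.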
But if you provide pulses that keep them in check while they're idle, you're able to significantly increase the performance of the circuit.

The ability to see how those techniques work — like we showed in the simulation — and then implement them on the hardware significantly reduces the time it takes to achieve this benefit, and obviously the cost, but it also allows you to iterate and find your own unique strategies based on your specific needs.

Konstantinos Karagiannis: Yeah, they're all important aspects when it comes to scaling. Error correction is like everyone voting on whether the answer is 0 or 1, and error suppression is saying, "Hey, you get back here, we need you to vote." That's exactly how I think of it, and that kind of ties it up.

So, there's a lot of AI for quantum going on here — a lot of this is powered by that. But what end users might also see is some of the AI programming interface. I guess you're using Claude to allow for some level of, "Hey, help me out, I'm having a problem here." So, can you just talk users through what they might see and how they might experience that?

Izhar Medalsy: Because we had the privilege of building our system in the age of OpenAI and Claude, etcetera, we really went through the effort of making this platform AI-native and allowing users to benefit from this quantum copilot at every step of the process and the research they're involved in.

For instance, imagine you're sitting in front of the quantum hardware, and your job is to make sure that it's working at its best performance level. You run a few calibration steps and now you have a result that you're not sure about.
You can pick up the phone and call your supervisor, hoping they'll know the answer, or you can upload the result to our quantum copilot, and it will give you guidance on why this experiment didn't work and ways to fix it.

Obviously, the next level is that we will write the code for you that is context-specific, because we are so intimately connected to the hardware. We know what your QPU is, what your control electronics are. We have this digital twin — and that's an AI term. We have the ground truth for your system because we're able to simulate it from first principles.

Now you have a very informed system that communicates with the AI and with the end users and gives you this guidance. On the other hand, if you're interested in running circuits or applications and want to understand the best way to improve the performance of a specific circuit, we can suggest, through this copilot, the best error suppression or error mitigation technique for your needs.

So really, the ability to use those models — whether it's Claude or others — in a context-specific environment saves time and allows more users to enter this field and start working in the quantum environment.

Konstantinos Karagiannis: So that's copilot with a lowercase "c," right? You don't actually mean Microsoft's Copilot.

Izhar Medalsy: No, no.

Konstantinos Karagiannis: Yeah, it gets confusing.

Izhar Medalsy: It is. They kind of own this name, but it's a very nice name, so you're kind of tempted to use it.

Konstantinos Karagiannis: So, it's Claude wearing a leather cap and goggles, haha.

Izhar Medalsy: Exactly.

Konstantinos Karagiannis: So, you add it to the context base, right? For Claude, to make it more aware of what's happening in this quantum world here.

Izhar Medalsy: Yes, yes. So, one aspect is the context; the other one is the ability to fact-check some of the results with your digital twin.
That demonstrates how having at your disposal a virtual machine representing your hardware — your specific hardware or configuration — and combining it with LLMs really works very nicely together.

Konstantinos Karagiannis: Just so listeners have a realistic expectation, the idea here is that this is a tool for professionals in the industry today who are doing some kind of quantum coding. They want to be able to save time and money, rather than running the machines first, and get better results. Hopefully, and eventually, maybe this will also lower the barrier to entry for people who want to learn how to quantum code, given time. Is that the idea? Will there be approaches for welcoming in people who've never even run a program or a circuit before?

Izhar Medalsy: Absolutely. That's one of our goals. It's clear that the industry is consuming more and more talent. Obviously, quantum talent is scarce, and using those best-in-breed LLMs, etcetera, can help. But it's a journey. I don't want to create the false expectation that someone who is completely outside the field of quantum can migrate overnight. However, the ability to communicate in plain English and articulate the problem in front of you, rather than immediately needing to code it, is something we can enable today. That, on its own, already allows us to lower the barrier for different users to explore and work on the system.

I'll give you a very concrete example. We visited one of the largest labs in Japan for superconducting qubits, and you can see on the rack three different control electronics manufacturers. We know all of them and are good friends with all of them, but each one of those control electronics manufacturers — these are the boxes that control the real hardware — has its own software environment. If you write code in one environment, it doesn't translate to the other. That, on its own, is a really large barrier for people experimenting with those systems.
So, the ability to easily translate or move from one platform to another is one of the big benefits you can think about.

You can take it to the next level, where you're saying, "I've developed this code for IonQ. Can I just submit it on IBM?" Well, the reality today is that it's not that easy, but the ability to use LLMs and some context-aware translational layer makes this process much easier.

Konstantinos Karagiannis: Yeah, that will be huge in the future. If you want to try coding once and running it on multiple machines, I could definitely see a major use for that. There are benchmarks and other tools that let you see which machine performs better. But how cool would it be to just say, "Okay, I wrote this circuit and now I want to run it on five different machines"? That would be incredibly helpful.

Izhar Medalsy: We can take it even to the next level and ask ourselves, "I have this application — what is the best machine to run it on? Can you compare them? First, show me virtually which is the best platform, and then I'll be in a more educated position when investing my time and money on the real hardware." This approach is a very complementary and synergistic way of working with the right hardware based on the problem at hand.

Konstantinos Karagiannis: Yeah, that's like a snapshot-in-time benchmark, and it lets you be very agnostic.

Izhar Medalsy: Exactly.

Konstantinos Karagiannis: I love that. That's great. And for folks listening who are coding and want to give it a try, the show notes will include your site, and they'll be able to sign up for early access, right? Demo, and — yeah — the cost savings are pretty significant. I mean, if you run shots for an hour, that's thousands of bucks. And this could be, I guess, like tens of dollars.
So, yeah.

Izhar Medalsy: And you know, another thing about digital twins and realistic hardware simulations: on real hardware, you need to run thousands of shots because you need a statistically significant number of experiments to represent where the system is converging. But if you're solving from first principles in one simulation, you have the full informational space of your solution — you have what is known as the density matrix of your system. So, one experiment is enough. This also saves time.

Konstantinos Karagiannis: Oh, that's great. That's great.

Izhar, thanks so much for coming on. Looking forward to seeing how this evolves over 2026 and what it enables coders to do.

Izhar Medalsy: Thank you for having me. It was a pleasure.

Konstantinos Karagiannis: That does it for this episode. Thanks to Izhar Medalsy for joining to discuss Quantum Elements, and thank you for listening.

If you enjoyed the show, please subscribe to Protiviti's The Post-Quantum World, and maybe leave a review to help others find us.

Be sure to follow me on all socials @KonstantHacker — that's "Constant" with a "K" — Hacker. You'll find links there to what we're doing in quantum computing services at Protiviti. You can also DM me questions or suggestions for what you'd like to hear on the show.

For more information on our quantum services, check out protiviti.com or follow Protiviti on X and LinkedIn. Until next time, be kind and stay quantum curious.