Transcript | Under-Display Cameras Developed via Quantum-Inspired— with OTI Lumionics

Quantum-inspired approaches are powering real use cases for businesses today while we wait for more powerful quantum computers in the future. Some of these techniques result in physical products you can hold in your hand. Find out how OTI Lumionics is using this type of quantum computing to bring under-display cameras and other technologies to market today. Join Host Konstantinos Karagiannis for a chat with Scott Genin, VP of Materials Discovery.

Guest: Scott Genin from OTI Lumionics

Konstantinos Karagiannis:

Quantum-inspired approaches are powering real use cases while we wait for more powerful quantum computers. Find out how one company is using them to bring under-display cameras and other technologies to market today in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.
Our guest today is the VP of materials discovery at OTI Lumionics, Scott Genin. Welcome to the show.

 

Scott Genin:

Thanks for having me on.

 

Konstantinos Karagiannis:

This is going to be a slightly different episode because it looks like we’ve got some quantum that led to some real-world product ideas. I know your company actually built some things. There are going to be some cool avenues to explore here today. But first, I’d love to hear about how you ended up in your role and what aspects of your journey led to you being involved in quantum too, particularly.

 

Scott Genin:

Prior to joining OTI Lumionics, I was working in the biotech space, biopharma, where I was effectively doing experimental design and statistical processing and, in some cases, automation, like setting up machines to do automated processes — in our case, media formulation and growth conditions for various stem cells and other stem cell-based derivatives. But a lot of it was, how do you automate machines to set up hundreds of thousands of experiments in one shot? And then how do you do the statistical analysis? We never called it machine learning back then. It was pretty much just the NIST Handbook — using the NIST Handbook and those guides to determine the optimal conditions. I’m sure in some cases that flies as AI and machine learning now. I still don’t consider it that.

But after doing that for a while, I realised, “You don’t need that much mathematics and statistics.” I started looking around to see whether there were other, better fits for me. I got reconnected to Michael Helander, whom I had met during my Ph.D. I had just started my Ph.D. at the University of Toronto, and he was finishing up his postdoc. We got reconnected, and he said, “Why don’t you come and join OTI to do some types of process controls?” Eventually, I decided to join. At the start, it was just standard quantum chemistry simulations, standard molecular dynamics simulations, and design of experiments to make new chemicals. We were very successful using those initial methods.

But then the opportunity came up through the Creative Destruction Lab to start working with a quantum computer. Michael Helander came into the office one day when there were just, like, 16 of us, and he said, “Would a quantum computer help you do quantum chemistry better?” And I was, like, “I don’t know. Let’s go find out.” That’s the starting point of our journey, and it’s now been almost a six-year journey doing quantum chemistry for quantum computers — whether we can extract any sort of value from that, whether we can mimic a quantum computer just for doing quantum chemistry. But it was that moment, and thanks to the Creative Destruction Lab for coordinating getting some preliminary quantum resources.

 

Konstantinos Karagiannis:

The answer to “Can a quantum computer help?” is always: it depends on what quantum computer you’re talking about and where we are in the timeline right now. We should zoom out and then zoom back in on the little pieces. Overall, how would you describe OTI Lumionics to listeners?

 

Scott Genin:

OTI Lumionics is a materials company. We have a large staff and significant resources dedicated to testing and developing new materials and to maintaining pilot production lines and advanced laboratory equipment. We predominantly focus on the display and semiconductor industries. Our chemicals are shipped to companies that make OLED panels. But we also have a lot of expertise in chemical synthesis for these materials and in chemical simulations.

But the company is not exclusively a materials company. We do make software. It is pretty much all for internal consumption, at least at the start. Sometimes our customers will ask for it, and we negotiate with them on that. But our first priority is to demonstrate a materials discovery process for industrially relevant, production-ready chemicals, which can only be done when you can competently do in-house testing at a pilot scale. Otherwise, you’re just proposing hypothetical materials that probably won’t work in reality.

 

Konstantinos Karagiannis:

As I said earlier, that’s why I wanted to have you on — to get an analysis of this approach, what it’s like to use these techniques and still put something out into the world that’s not necessarily quantum as the result. Our goal is to have this permeate all industries one day and impact real use cases. We’re going to dig into what you do with actual quantum software, but when you talk about displays, at a high level, what kinds of technologies are they — like cameras behind the display glass, that kind of thing? Is that what you’re working on?

 

Scott Genin:

Our mainstay product line is the cathode-patterning material. It allows the cathode to self-assemble in such a way that you can create, at the micro-feature level, little areas where the cathode is not present. The cathode is one of the biggest contributors to light absorption. If you wanted an under-display camera, to get rid of the notch or the punch hole, you’d still have to contend with the fact that the cathode on the OLED screen is going to absorb most of the light.

And this is an even bigger problem for infrared sensors. The cathode absorbs pretty much 90% of infrared wavelengths. So if you wanted your face ID to work with an under-panel sensor, it’s only ever going to work with cathode-patterning materials, because the other cathode options are not transparent there. They all absorb in the IR range. Switching to alternative cathodes is not an option.

That’s predominantly what our mainstay products are. We also dabble in developing simulation tools and exploring esoteric phosphorescent and fluorescent materials. Sometimes they have OLED applications. Sometimes they have strange medical applications as well. Those are predominantly fairly small R&D projects, though. The mainstay product we ship is cathode-patterning materials.

 

Konstantinos Karagiannis:

This is a real thing in the world, and everyone hates notches and punch holes. I mean, I know I do. I can’t stand it. You’re watching something. It’s, like, “Look, there’s a nice black bar in the middle of it forever.” How does quantum figure in? Now let’s start to zoom in and talk about what types of advances and processes you’re implementing to improve these materials in the real world.

 

Scott Genin:

As you can imagine, with a material whose interaction with light is so important, the material has to be transparent, and it has to have a very particular refractive index. Those are two properties that are directly related to the electronic structure of a chemical, which is a fancy way to say: where are the electrons? What energy do they have? What’s the energy of each electron relative to the others?

This is fundamentally a quantum phenomenon. Electrons behave as if they are wave functions, and they are simulated as such. And this is where quantum computing starts to become quite relevant. Quantum chemistry is a field that’s quite old, dating to the 1930s. But the inherent challenge with quantum chemistry is that some of these molecules are quite large, and they can have a metal center or non-carbon elements — second-row or third-row elements. These start to introduce complexity into the electronic structure calculations.

If you were to follow the standard prescription from the conventional standpoint, it would be: let’s use density functional theory on this. Density functional theory (DFT) is a very powerful tool. But when we start talking about molecules that deviate from standard practice or standard phenomena, it suddenly becomes: I’ve switched out a couple of atoms here and a couple of atoms there, but now I need to change my DFT functional. All the computational guiding principles you’ve gotten from training or developing models that use one DFT functional — now that you’ve had to switch, those models have to be thrown out. They’re no longer reliable.

And that’s one of the caveats with density functional theory: because of this problem where you have to select the DFT functional, it’s not reliable. You find study after study, paper after paper, where research groups post their favorite density functional and cherry-pick examples their functional is good at. From the industrial side, it’s, like, “I can’t just cherry-pick my examples. I need a true ab initio method.”

With true ab initio methods, the only one that effectively works is something called DLPNO coupled-cluster singles and doubles, which is a fancy name for a reduced-scaling ab initio method. In our case, it doesn’t work, because the core assumptions that allow coupled-cluster theories to work don’t apply to our molecules. You end up with somewhat unreliable results. The question is, can we use a quantum computer? Hypothetically, it should be better at solving these quantum chemistry problems at scale. Or, more accurately, can we use a quantum computer, and the theory that underpins quantum chemistry on a quantum computer, to do more accurate calculations?

 

Konstantinos Karagiannis:

With the computer you chose, if we’re talking about optimising the placement here, it sounds like you’re talking about annealing.

 

Scott Genin:

We use an in-house simulator because it gives us scale. Quantum chemistry can translate onto an annealer. We’ve done a lot of research around that. The main reason the annealer was quite powerful initially comes down to the standard methods for doing quantum chemistry on a quantum computer — I know we’re getting into the nitty-gritty, so if there are any questions, feel free to ask — which are quantum phase estimation and the variational quantum eigensolver. These are the two core algorithms. You can also get into machine learning, but machine learning has its own problems, where you now have to get reams and reams of data, which you don’t have, and then you’re just going to dump statistical noise onto a model that already has bias.

Now we’re back at square one with DFT, and if you’re back at square one with DFT, you might as well just use DFT. The conundrum with quantum phase estimation is that the quantum computer must maintain coherence on the order of days. And this isn’t my research that predicts this. This is research that comes out of Microsoft and many of the quantum computing companies. You can look at some of their publications in high-profile journals where they’re saying you need to maintain coherence on the order of hours and days. And that’s not something an actual quantum computer has done.

With the variational quantum eigensolver, you have a sampling problem. You have to keep repeating the measurement process on a circuit to measure the individual components of the Hamiltonian. While this is a variational process, which is good — it’s what we like, because variational processes have a bias we understand and can account for — the inherent problem, which people don’t like to mention, is that as the qubit count increases, the number of samples you have to take increases exponentially.
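The sampling overhead Scott describes can be sketched with a toy example. This is an illustrative sketch, not OTI’s code: a one-qubit Hamiltonian with two Pauli terms, where each term needs its own measurement basis and its own batch of shots, so the statistical error shrinks only like 1/sqrt(shots). The coefficients, angle, and shot counts are all made up.

```python
# Toy VQE-style estimation: <H> = cz*<Z> + cx*<X> on one qubit,
# for the state |psi> = cos(t/2)|0> + sin(t/2)|1>.
import math
import random

random.seed(0)
theta, cz, cx = 0.7, 0.5, -0.3

# Analytic expectation: <Z> = cos(theta), <X> = sin(theta).
exact = cz * math.cos(theta) + cx * math.sin(theta)

def sampled_expectation(shots):
    # Each Pauli term is measured in its own basis with its own shots.
    p_z = math.cos(theta / 2) ** 2   # P(outcome +1) in the Z basis
    p_x = (1 + math.sin(theta)) / 2  # P(outcome +1) in the X basis
    ez = sum(1 if random.random() < p_z else -1 for _ in range(shots)) / shots
    ex = sum(1 if random.random() < p_x else -1 for _ in range(shots)) / shots
    return cz * ez + cx * ex

for shots in (100, 10_000):
    est = sampled_expectation(shots)
    print(f"{shots:>6} shots: estimate {est:+.4f}, exact {exact:+.4f}")
```

Real molecular Hamiltonians have thousands to millions of such terms, each needing repeated measurement, which is where the sampling cost explodes.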

All of a sudden, yes, we’ve run systems like water and hydrogen. We were one of the first groups to do an eight-qubit simulation — technically larger than what IBM simulated on their quantum computer. About a year afterward, we were able to use Rigetti’s quantum computer with minimal interference or adjustments, and we were able to simulate water using eight qubits — bigger than their beryllium simulation. It’s still tiny, in retrospect, and it has already been surpassed by Google’s simulation of a 20-qubit hydrogen chain. But with VQE, the runtime increases dramatically.

But one of the things D-Wave is very good at is doing that variational sampling almost instantaneously, with minimal overhead. You can get an exact answer if you can translate your problem into an Ising problem, and we can translate our problem into an Ising problem. Eventually, we figured out how to mimic this on a classical computer so we can directly compute expectation values without having to sample.
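As a hedged illustration of what translating a problem “into the Ising problem” means, here is a minimal Ising energy minimised by brute force; an annealer searches the same kind of landscape, only vastly larger. The fields `h` and couplings `J` below are invented numbers, not a real chemistry Hamiltonian.

```python
# Toy Ising model: E(s) = sum_i h_i*s_i + sum_{i<j} J_ij*s_i*s_j, s_i in {-1,+1}.
from itertools import product

h = {0: 0.5, 1: -0.2, 2: 0.3}                 # local fields (made up)
J = {(0, 1): -1.0, (1, 2): 0.8, (0, 2): 0.2}  # pairwise couplings (made up)

def energy(spins):
    e = sum(h[i] * spins[i] for i in h)
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

# Brute force is fine at 3 spins; an annealer minimises the same energy function
# over exponentially many spin configurations.
best = min(product([-1, 1], repeat=3), key=energy)
print("ground state:", best, "energy:", round(energy(best), 3))
```

The whole point of the mapping is that once a quantity of interest is expressed as this energy function, minimising it (on an annealer or classically) reads off the answer.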

But it’s only applicable to our approach to quantum chemistry, because of the patented theory we developed — qubit coupled-cluster theory, which has spawned its own branch of quantum chemistry theory. Because the gate sets have a certain form, they’re quite easily simulatable on a classical computer. They have peculiarities in how we can decompose them and represent them in classical memory that allow us to directly calculate variational expectation values without having to truly sample anything. That’s one of the core advantages we’ve developed over the course of these six years.

 

Konstantinos Karagiannis:

Normally, when you benchmark annealing, you use, let’s say, digital annealers, you do simulation on a classical machine, and then usually D-Wave as the standard for the annealer. Sometimes you toss in their hybrid solver, although with the hybrid solver, we don’t know what percentage is quantum. Did you look at some of the other choices too before settling on classical emulation, or did you try digital annealing to see what kind of performance you get out of that?

 

Scott Genin:

We’ve tried everything. When you have your own in-house simulator, you save a lot of money. Obviously, I’m accountable to our company financially and to our investors, and I’m getting the same number from our simulator as from what the digital annealer or D-Wave can produce. People like to beat up on these annealers, saying classical computers can do it, but fundamentally, they do produce an answer — and in some cases, they can produce it faster than we can. But on a cost basis, there’s not much competition at that point. We have ours completely on demand whenever we want it. We don’t sit in a queue. We pay fractions of pennies to get an answer. At that point, economically, it makes sense for us to keep it in-house.

 

Konstantinos Karagiannis:

The 24/7 thing — that kills a lot of use cases. Fraud detection, for example. They’re, like, “We don’t want to wait online or run it once a week. We want to have this thing going.”

Can you tell us about your simulated annealer? It’d be interesting to hear how it works.

 

Scott Genin:

It’s not technically a simulated annealer. It’s a quantum emulator. We have it all geared to do quantum chemistry on a quantum computer, so it uses the same mathematical principles a quantum computer would use. The argument against emulating quantum computers is that the amount of RAM required scales exponentially, as 2^n. Through our encoding methods, we’ve been able to bring that down to n^4, which is much better than 2^n. We are doing simulations up to 250 qubits — now, these machines do have a lot of RAM. They are quite big machines. On a desktop, you can simulate about 70, 80, 90 qubits. When you start reaching 250 qubits, you need at least about 1 terabyte of RAM just to make sure the simulation runs to completion.
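The 2^n-versus-n^4 gap can be sanity-checked with a few lines of arithmetic. This is a back-of-envelope sketch assuming 16 bytes per complex-valued entry; the absolute figures depend on encoding details the episode doesn’t cover, so the point is only the exponential-versus-polynomial scaling, not OTI’s actual memory footprint.

```python
# Rough memory estimate: full statevector (2^n amplitudes) vs. a polynomial
# n^4-entry encoding, at an assumed 16 bytes (complex double) per entry.
def gib(entries):
    return entries * 16 / 2**30  # bytes -> GiB

for n in (30, 70, 250):
    full = 2 ** n     # amplitudes in a full statevector
    reduced = n ** 4  # entries under an n^4-style encoding
    print(f"n={n:>3}: full ~{gib(full):.3e} GiB, n^4 ~{gib(reduced):.3f} GiB")
```

At n = 30 the full statevector already needs about 16 GiB, and by n = 70 it is hopeless on any hardware, while the polynomial encoding stays in the tens of GiB even at 250 qubits — consistent in spirit with the desktop-versus-big-machine split described above.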

But our development path has now taken a turn where we don’t necessarily see ourselves, at least in the two- to five-year term, competing against quantum computers doing quantum chemistry. We’re now starting to compete against conventional methods like coupled-cluster singles and doubles, DMRG and DMET. That’s where we see our true competition — these very advanced methods. In our case, we have a method that is quantum-inspired. It has all the advantages of technically being able to run on a quantum computer, and we can run it classically, at a small cost relative to what a theoretically perfect quantum computer would ever be able to do.

But we now see this as testing against very well-established quantum chemistry benchmarks, and we’re able to outperform methods like DMRG. Our method is a well-rounded method, and it holds up against DMRG in particular, even though researchers have posted that, in theory, quantum phase estimation for a sparse Hamiltonian would never technically outperform DMRG.

The theory works out — the analysis is correct. The question you would then have to throw back is: is every molecule a sparse Hamiltonian? The answer is no. If you deal with real molecules, like the OLED molecules we translate into Hamiltonians, they are not sparse at all. They’re quite complex, because once you’re trying to get a quantitative value as opposed to a qualitative one, or you have a molecule that’s not a perfect linear chain of atoms, you need a well-rounded, high-quality method that doesn’t have implicit assumptions built in. That is where our qubit coupled-cluster method combined with our simulator handles things very well. It’s able to plow through any type of system.

 

Konstantinos Karagiannis:

When you’re thinking about scaling this in the future for other projects, how do you decide how much to invest in it going forward, and when do you predict a tipping point where it should move to quantum hardware? Do you have a plan — like, next year you’re going to double its horsepower or something in anticipation of a remaining gap before quantum is ready to step in?

 

Scott Genin:

The code runs in parallel. The roadmap is: can we get this onto GPUs? We need very specialised GPUs to run this, because you need very high data transfer but low latency between the GPU itself and the RAM, since GPUs don’t have enough onboard RAM. We’re still talking about needing hundreds of gigabytes of RAM, and there are not too many GPUs out there that have a terabyte of RAM. But they can be connected to RAM with low latency in accessing it. That is where we’re headed. Hopefully, we can get not just a 2x speedup but perhaps a 5x or 10x speedup in our computations. From what we published back in 2021 to where we are today in 2023, we have about a 10x speedup overall on our regular benchmarking systems.

I’m not as certain anymore about when a good quantum computer will come out. I don’t think quantum computers doing variational quantum eigensolver themselves will be competitive. I still think quantum phase estimation does pose hypothetically the best alternative to what we’re doing. When quantum computers get better, people will be able to explore more what quantum phase estimation can deliver. Quantum phase estimation still has a lot of potential in terms of advancing the state of the art.

The other thing we invest in is trying to develop completely alternative methods for doing quantum chemistry on a quantum computer beyond the variational quantum eigensolver. But in order to get to where people think quantum phase estimation will start working for quantum chemistry, the current best estimates are that you need 2,844 logical qubits that are all fully connected, gate operations on the scale of 5–10 nanoseconds, and, effectively, true fault tolerance, not just error correction. This thing can’t just have its errors corrected. It needs to be perfect, or near perfect.

And it’s because of the way the Hamiltonians you generate for quantum chemistry are encoded onto a phase estimation algorithm: unless your qubitisation or Trotter step count is going to infinity, you’re going to have some error. You’ve already taken a trade-off, and that’s OK — you can quantify that trade-off — but these circuits are not small. These circuits are incredibly long. It’s cool that we’re seeing quantum computers at 400 qubits, but for quantum chemistry, it’s not enough, especially since the variational quantum eigensolver will not get us to the end. You need quantum phase estimation or something completely new.

Until we see something convincing that’s completely new, quantum phase estimation is going to be the best hypothetical roadmap. And at this point, Google says that in 2029, we’ll finally have a quantum computer with good fault tolerance and error correction. Maybe 2029 is when we would expect something. But there’s another six years between now and 2029, and I’m not just going to sit around and wait for that.

 

Konstantinos Karagiannis:

You’re making a couple of points I like to make lately when I talk to people about this. Quantum-inspired right now is the way. If you want to do actual ROI right now, it’s definitely the way. And it’s exciting when you can apply it to a real-world product like you guys did, which is terrific.

There are competitors to Nvidia coming out now. They’re starting to play in that space of alternate paradigms, doing more on chip, close to the GPU. It might be that your next design takes into account one of these alternative pieces of hardware — not that I’m exactly sure what’s in yours right now. You’ll be benchmarking and deciding as you go along. Is this platform easily adaptable to a future project you want to do? If you have anything on a roadmap, are you coming out with other types of products that are going to take advantage of these quantum-inspired approaches?

 

Scott Genin:

We received a government grant to explore whether this could be used to design nonplatinum fuel cell catalysts. DFT does an OK job, but again, because these systems have large swings in which atoms are binding to each other, DFT functionals have to constantly be substituted out. This is an area we’re actively exploring, because iQCC is more neutral in how it treats chemical systems. We’re looking at whether it could be used to design a new or cheaper fuel cell catalyst.

 

Konstantinos Karagiannis:

I want listeners to realise that the next time they look at a phone or tablet and it doesn’t have a hole or a notch, and one day, potentially, when they use a different fuel cell, it could have been quantum-inspired computing that got them there. It’s definitely worth noting that this stuff is real right now, especially while we’re waiting for machines to reach fault tolerance. Were there any other things you wanted to leave our listeners with about the company or your plans?

 

Scott Genin:

Quantum-inspired is a very useful exercise. Even though we’re not using a real quantum computer to do our computations, just by going through the exercise of trying to push the scientific boundaries, we’ve learned a lot about conventional quantum chemistry, stuff we probably wouldn’t have learned unless we did these exercises. In quantum chemistry, there’s a bunch of internal parameters you can select to build your chemical system — things like basis-set selection. We’ve learned more about how basis sets are influenced by correlation methods, which is fancy lingo to say electrons interacting with each other directly. How do basis sets profoundly affect that? More so than you would probably learn in a course or even just by doing density functional theory. It’s an important journey and process.

But I’d also like to emphasise that in order to do what we’ve done, where we use a quantum-inspired method to develop new materials, you have to be very good at materials discovery. People will always ask me, “When will a quantum computer ever help design a material?” And I say, “It’s either going to be now, which is today, or probably about a year ago, or it’s going to be infinitely far out into the future.” Depending on whether humans become a spacefaring civilisation, that could end a couple of billion years from now, or it could end even further out than that. But fundamentally, if you can’t design a material normally with computational tools, a quantum computer is not going to magically bail you out. And that’s an important emphasis.

The reason we have to be very good and disciplined at materials discovery — it’s one of the core strengths OTI has — is that we are a for-profit company. We’re not subsidised by the Canadian government to the extent that some other chemical companies are, where they can just take a thousand chemists and throw them at a problem.

A lot of chemical design at some of the biggest chemical companies in the world is still: take a thousand chemists and throw them at the problem. They go and synthesise 100,000 compounds and cross their fingers that one of them works, whereas those companies could be using conventional computers today to design compounds. In most cases, they don’t, because there are a lot of corporate and structural problems and a lack of understanding of what materials discovery is. A lot of people think the quantum computer is going to bail them out of doing actual materials discovery, or the best one is, “AI is going to bail me out of thinking about how to do materials discovery.”

It’s like what the CEO of Schrödinger said: “Machine learning and AI is old news in materials discovery.” Robotics, AI, machine learning, it’s old in our field. It’s surprisingly old. Seven, eight years ago, I was setting up robotics to do stem cell — technically, materials discovery for stem cells. Roche sells a wonderful and very vibrant array of all these wonderful robotics that will do, effectively, materials discovery for you. It’s old hat.

For a lot of people banking on materials-discovery startups: materials discovery is a vibrant field because it’s constantly changing. But the only true demonstration of materials discovery is getting something into a customer’s hands. Having an algorithm that coughs up little doodles on a piece of paper is not materials discovery. It’s just doodles. It’s important to emphasise that for us, the quantum computer has been so helpful in understanding the materials-discovery process, understanding what those simulations do and what they mean, so we can get a chemical into someone’s hands.

 

Konstantinos Karagiannis:

When we have a fault-tolerant quantum computer, all your learnings along this journey are completely transferable. You’re already there at the cutting edge with that.

I enjoyed this chat. I wanted our listeners to hear what your customers are getting out of this and how this is turning into real-world products. These techniques are real.

Thanks so much for joining.

 

Scott Genin:

Thank you for having me on.

 

Konstantinos Karagiannis:

Now it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap.

OTI Lumionics develops advanced materials with a focus on display technologies. But it’s how they’re doing it that’s of interest here. They use quantum and quantum-inspired techniques to make their materials a reality. Today, the company’s main product is cathode-patterning material, or CPM, which enables cathodes to self-assemble. At the micro level, this method allows for the creation of tiny areas where the cathode is not present. As a result, light of different wavelengths can pass through from the outside in, enabling not only under-display cameras but also sensors that support the IR needed for the face-unlock features of phones.

Achieving this type of material is a tricky process, a blend of achieving transparency in spots and maintaining a particular refractive index. These properties are related directly to the electronic structure of the chemical. Quantum chemistry is used to solve for the location of electrons and their energy and the energy relative to each electron. The system doing that difficult math is a quantum emulator put together in-house at OTI Lumionics. The company is also working on potential new, hopefully cheaper, fuel cell catalysts.
As I’ve said before, quantum-inspired algorithms and hardware are having a big impact today and providing real ROI while we wait for fault-tolerant quantum computing.

That does it for this episode. Thanks to Scott Genin for joining to discuss OTI Lumionics, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.
