Quantum computing is easy to access thanks to services like Amazon Braket. But how long does it take to apply one of these systems to a real-world use case? Sometimes months. Amazon partner Quantum Computing Inc. (QCI) has launched its Qatalyst API to shorten development time for some types of quantum coding … to under a week! Listen in to hear how Amazon is preparing for the future of high-performance quantum computing, and how QCI is ensuring we have the code to put those systems to amazing use.
Guest Speakers:
- Richard Moulds, General Manager – Amazon Braket, AWS
- Bob Liscouski, CEO, Quantum Computing Inc.

Quantum computing capabilities are exploding, causing disruption and opportunities, but many technology and business leaders don’t understand the impact quantum will have on their business. Protiviti is helping organizations get post-quantum ready. In our bi-weekly podcast series, The Post-Quantum World, Protiviti Associate Director and host Konstantinos Karagiannis is joined by quantum computing experts to discuss hot topics in quantum computing, including the business impact, benefits and threats of this exciting new capability.
Transcript

Amazon Braket provides an easy-to-use ecosystem for accessing quantum computers on the cloud, but it could take months to develop a working use case. Today, we’ll look at an Amazon partner, Quantum Computing Inc., and their Qatalyst API, which is designed to shorten some of those quantum coding tasks to under a week. Find out how in this episode of The Post-Quantum World.
I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era.
Today, we’re going to be doing something a little different: For the first time, I’m going to have two guests on, so we’ll see how this plays out. I’d like to welcome Richard Moulds, who’s the general manager of Amazon Braket, and Bob Liscouski, who is the CEO of Quantum Computing Inc., or QCI, as we’ll be calling them. Thanks for joining, guys.



We’ll get started with Richard. I want to talk a little bit about Amazon Braket. I’m sure my listeners know what it is, but would you give us a high-level overview of the Braket ecosystem and where you see it going within a year or so?


Currently, people can access D-Wave, Rigetti and IonQ through you as the most popular offerings. Do you have any other hardware surprises coming along?

Over time, watch this space: Our goal is absolutely to add different hardware providers to the platform, and it’s a big community. There are dozens of companies and research labs building quantum hardware. They’re interested in working with Braket because they don’t want the hassle of building web front ends, managing customer interaction and running customer support teams. Generally speaking, these companies are, as they should be, populated by physicists building really cool stuff, so we help them avoid a lot of the heavy lifting of trying to run a solid public service and allow them to focus on what they do best, which is building hardware.
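For readers who have not used Braket, here is a minimal sketch of what that developer experience looks like with the Braket Python SDK; the local simulator stands in for any of the managed QPUs, and the episode itself does not walk through code.

```python
# Minimal Amazon Braket SDK sketch: build a Bell-state circuit and run it.
# The free local simulator is used here; swapping in a managed QPU would
# mean constructing an AwsDevice with that device's ARN instead.
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)        # entangle qubits 0 and 1
device = LocalSimulator()                # stand-in for a cloud QPU
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)         # e.g. Counter({'00': 508, '11': 492})
```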

Yes, and in a few moments, we’ll be talking about a software partner. Some customers right now, including in financial services, aren’t doing many cloud workloads. As strange as it is, people think that everyone’s in the cloud, but I guess there are still some sensitive things that they try to keep in-house. Are you guys already giving some thought to how they can access quantum if they don’t already have the buy-in in the cloud space?


Yes, and from what we’re hearing, because they’re so concerned about security and IP, we’ve already started talking to customers about whether we could do a proof of concept: Can we tokenize the data so it’s meaningless when it goes up there, which makes them more comfortable? But at the end of the day, the algorithm itself could be at risk — maybe they came up with a unique way of solving a problem that they don’t even want to send to the cloud. Is there anything you tell customers about security there, because you guys are way ahead on post-quantum crypto, implementing things like hybrid handshakes that combine BIKE or SIKE with ECC? Do you play up that angle to give them assurances?

Security is, of course, a broad topic. It’s all about links in the chain: The weakest link can bring down everything if you’re not careful. As I said, this has been a huge part of AWS. In fact, before I joined the quantum initiative at AWS, I was on the security side, managing the encryption capabilities in AWS. Clearly, we focus on how you get information to the cloud safely, and some of the post-quantum algorithmic work you mentioned is all about addressing that problem: dealing with the threat that a quantum computer might one day be able to expose the secret keys associated with public-key cryptography, which is the primary mechanism by which SSL/TLS connections to the cloud work.
That’s a big initiative. Then there are, obviously, initiatives to protect the physical infrastructure of the AWS environment, and then, in many ways more importantly, building in the internal controls and mechanisms to minimize the touchpoints that AWS staff have with customer data. We’ve invested a lot of time and trouble in compliance reporting. For example, you’ll find a SOC 2 report that we publish every six months, independently audited, where we define for each service in AWS the degree to which any internal staff have access to customer data, whether that’s customer-identifiable information or intellectual property. We’ve made big strides in terms of locking down access even within our own infrastructure, so we’re not just thinking about attack from outside. We’re obviously focused on best practices in terms of guarding access to information inside. International initiatives like the GDPR are really pushing the bar up in some of those regards.
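To picture the hybrid-handshake idea raised a moment ago, here is a small conceptual sketch (not AWS’s actual TLS implementation, which lives inside its TLS libraries): the session key is derived from both a classical ECDH secret and a post-quantum KEM secret, so breaking either scheme alone is not enough. The KEM secret below is a labeled placeholder rather than a real BIKE or SIKE encapsulation.

```python
# Conceptual sketch of a hybrid key exchange (illustrative only).
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import x25519
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical ECDH (X25519) shared secret between two parties.
client_key = x25519.X25519PrivateKey.generate()
server_key = x25519.X25519PrivateKey.generate()
ecdh_secret = client_key.exchange(server_key.public_key())

# Placeholder for a post-quantum KEM shared secret (e.g., from BIKE or SIKE);
# a real handshake would obtain this from the KEM's encapsulation step.
pq_kem_secret = b"\x00" * 32

# The session key depends on both secrets: recovering it requires
# breaking the classical and the post-quantum scheme together.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid-handshake-demo",
).derive(ecdh_secret + pq_kem_secret)
print(session_key.hex())
```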

Yes, and this is probably a good point to segue over to one of the partners whose property you’re protecting, so I’d love to talk to Bob for a moment here. If you want, give us a little background on QCI.


I was really interested in your Qatalyst approach — of course, with a Q, right? It’s funny — you guys threw me a curveball with this, because, for the longest time, I would describe the quantum ecosystem as “everything’s written from scratch, and everything takes six months to develop.” Then, with you, it’s four days. Do you want to talk a little bit about that four-day process that people can experience and how they can actually get their code running faster?


Yes, and we’re going to see the first advantage with a hybrid approach. That’s what’s going to happen within this year, probably. We’re getting eerily close, especially on some of the D-Wave hybrid solutions. When you guys are running a QUBO, your back-end target is obviously going to be a D-Wave, but you also have the ability to hit the other machines.

We do, yes. Through Braket, we can access the D-Wave, the IonQ and the Rigetti machines and run those problems, but we can also separately run on IBM, and we’re adding others as well. Our goal is to be as hardware-agnostic as we possibly can.
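As background on the QUBO formulation mentioned here: a QUBO asks for the binary vector x that minimizes x^T Q x. The toy sketch below (illustrative only, not the Qatalyst API) brute-forces a three-variable instance; in practice the same matrix Q would be handed to an annealer or a hybrid solver.

```python
# Minimal QUBO sketch: minimize x^T Q x over binary x by brute force.
from itertools import product

# Q as a dict of (i, j) -> coefficient for a three-variable toy problem.
Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,
     (0, 1): 2.0, (1, 2): 2.0}

def qubo_energy(x, Q):
    """Energy of a binary assignment x under the QUBO matrix Q."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Enumerate all 2^3 assignments and keep the lowest-energy one.
best = min(product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))
```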

If someone wanted to access Qatalyst, would they go to Braket and then not be able to access IBM, or would they still be able to hop over?

Not through Braket. IBM is not running through the AWS cloud, but they could use Qatalyst to access the IBM machine separately, so that’s available to them. For others, however Amazon decides to bring on other QPUs, we’ll continue to make those available through Qatalyst or separately.

Yes, that makes sense. Have you guys started publishing success results — papers geared more toward showcasing your interaction with these kinds of problems?

The papers that we’ve been able to publish, and have others publish, have been based upon performance metrics, to show that there has been a performance benefit. We’re in the process now of collecting the use cases and demonstrating the efficacy of the approach through those use cases, but we’re not at the point of publishing yet. Candidly, some of them, depending upon the client, we may not publish, but we’re clearly interested in showing the validity of our techniques and the various use cases, including supply chain logistics and risk analysis, and, staying with the supply chain, some stress testing and risk analysis on supply chains. That covers a really wide area, so there are a lot of good places for us to spend some time.

Are any of your customers already willing to shout out to the world that they’re using this and having success, or are they being hush-hush?

At this point, I think we’re not at that end result yet, and we take a very conservative approach, so we’re not trying to hype anything. We’re not trying to get out in front of either a client or any successes we have. We want to make sure we do it right, and at the right time, we’ll be able to publish or to make an announcement that we’ve achieved some benefit there.

That makes sense. When these customers are interfacing and using this, how do they then deal with the data that comes back, the answers? It looks like you guys have built a whole suite around also delivering the results.

Interestingly, there are a couple of different types of clients that we’re talking with and engaged with. They range from the end users — the subject-matter experts — to the consultants who are experts themselves and are looking to bring that capability to their clients. The results themselves are pretty straightforward — nothing magical about it. It’s just a matter of providing them the data, and, again, I don’t want to get too far out in front, but this is very much a work in progress for us. As we work with clients, there’s not going to be a one-size-fits-all solution for either the front end, where they’ll be putting data in, or the back end, where they’ll be getting data back out again. It’s a good question. I just don’t think there’s one answer yet.

As usual, there’s a stack — a whole stack — and you guys are trying to abstract away as much of that as possible, to make it just as simple. So, if someone did want to go in and manipulate the pulse level or something on a qubit, is that always going to be locked out, or will there be some kind of advanced interface available?

Well, for us right now, we’re not dealing at the machine level, so there’s nothing in the short-term road map that would allow that. There’s a lot of good software out there that’s doing that already, and we’re just taking advantage of it.

Yes. The users will probably just break it, anyway.

Probably.

“It doesn’t work anymore.”


Are you guys also providing any kind of training resources for companies that want to code in this environment — tutorials and things like that?

We are. Right now, it’s at a human-touch level. We’re not doing anything online or downloadable, but we’re curating the process to make sure we understand the client. Again, we have to appreciate that it’s relatively early-stage for us, so we like to have the client interaction to ensure that we understand how clients are looking at the data and how they’re interfacing with it, so we can refine our approach. We are services-oriented as we build out the model here, although we’re a software company — you can never do without services. You never want to do without services. You always want to have that client feedback and understand what works well and what doesn’t. But for the time being, the training we provide is, of course, being delivered by our technicians, by our services team.

Yes, and because there’s a little bit of that at least, and probably some proficiency to be gained, I wonder if, then, companies could start using your offering to provide services to someone else, like a go-between.

Yes, absolutely. Like I mentioned earlier, part of our marketing goal is to work with other consulting shops — euphemistically, the Big Eight, now the Big Four — that have that client contact, so it’s great to be able to work with them, to train them up and service them. Then, of course, they bring the expertise to the client.

Yes. I feel we’ll be talking after this podcast. There’s a very distinct possibility.

It is. It’s great.

We’re always looking for more tools to apply. I’m interested in checking this out more.
Here’s a question for both of you guys. How close do you feel we are to advantage, with both what you’re seeing on the hardware side for Braket and this fine-tuned approach with APIs? Do you have any experiments you’re keeping an eye on that are getting eerily close? In a past episode, I talked to Sam Mugel from Multiverse, and they were able to show portfolio optimization at dramatically increased speed — we’re talking like 1,000 to 1 — but it wasn’t as accurate, so there’s that trade-off of speed and accuracy and everything. Do you have any feelings on what you’re seeing for advantage?

On the hardware side, I don’t think we have complete visibility into what’s going on in the hardware space, to the point where I could really answer that question confidently, so I’ll talk about the software side. The gains, as we’ve alluded to earlier — the word advantage might be a little too presumptuous, or maybe too aspirational, but clearly, “some degree of advantage” is probably an appropriate way to put what I think we’ll be able to achieve with software. Clearly, that’s our goal. Again, I don’t want to give out too much information on things that we’re doing, but I would just say that we’re very focused on that. I think there’ll be opportunity to achieve much higher degrees of performance with software approaches than with purely hardware ones.

This might be a question more for Richard: Do you see anything being put in place for handling bottlenecks or access to machines? There are still so few of them, and as they get faster and people get excited, you’re going to have everybody knocking on your door to access these things. Have you started giving thought to how you’re going to time-slice those and make them actually usable?

Yes. I think it’s important to jump on that issue. These are very scarce resources. Most of the folks that are building quantum hardware have one or two machines, and oftentimes, customers are sharing those machines with the actual physicists that are trying to build them. This is still very much an iterative phase on very limited hardware.
From the outset, we decided to build Braket as an actual production-ready platform. This is not just a showcase for one particular machine. We wanted customers (and customers told us they wanted this) to envisage what a production environment might actually look like: how you would control access to these devices, how you would manage queues, what a commercial pricing model might look like on these systems and how you might deal with issues like QA, quality assurance. These are flawed devices right now. They sometimes generate the wrong answer, and we should be honest about that; it’s important that we recognize that and flag it to customers, and that we can manage availability. That was very much at the forefront, so that as the hardware gets useful — gains an advantage, as you say — there’s a commercial infrastructure and a delivery platform that’s able to scale quickly, because as soon as that advantage is proven, there’s going to be a rush to consume this type of technology.
Obviously, when somebody in financial services or somebody in material science or somebody in pharmaceutical does something interesting, then the rest of that entire industry will be extremely interested to follow suit. I think we go from a research phase to a phase where demand might outstrip supply quite quickly.
Then the industry has to pivot. Today, it’s all about building one great machine, or the next-generation machine, or the generation after that, and claiming an improvement in fidelity or claiming greater connectivity, or whatever it might be. Quite quickly, the industry has to figure out how to build a supply chain that might have to physically build and manage a thousand of these devices and deliver them as a real infrastructure.
Yes, we’re trying to get ahead of that a little bit and think about how we can envisage a production quantum computing platform, and I think that plays to some of the requirements that firms like QCI have. They’re trying to build a business. They don’t want to run their business on a collection of showcase technologies. They want to run their business on a solid AWS platform. So, from day one, we decided this had to be a first-class citizen of AWS and, wherever possible, had to comply with the mandates, the operational-excellence bars we set for ourselves and the security bars we talked about earlier that apply to any other AWS service. Yes, we’re very much thinking about that.
You said earlier it’s all about hybrid for the next few years. It’s not just about quantum computing. It’s easy to fixate on the quantum hardware, but there are a lot of other components to this machine. Trying to combine high-performance classical compute, new development tools — you mentioned PennyLane, which is the open source project for trying to orchestrate better hybrid workloads — and trying to coordinate that resource with quantum computers, where quantum computers are available selectively, whereas classical resources are obviously highly elastic and highly scalable. Trying to orchestrate those different types of resources with different types of operational profiles is not straightforward, so our goal is to make that as simple as possible for customers so that, fundamentally, the technology can get out of the way and they can innovate as quickly as possible on these platforms and, therefore, get us to quantum advantage hopefully as soon as possible.
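As a concrete picture of the hybrid orchestration Richard describes, here is a minimal PennyLane sketch in which a classical optimizer tunes a single circuit parameter; on Braket, the device would come from the PennyLane-Braket plugin rather than the local simulator used here for illustration.

```python
# Minimal hybrid classical/quantum loop with PennyLane (illustrative).
# A classical optimizer adjusts a circuit parameter based on a measured
# expectation value returned by the quantum device.
import pennylane as qml
from pennylane import numpy as np

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)
    return qml.expval(qml.PauliZ(0))

opt = qml.GradientDescentOptimizer(stepsize=0.4)
theta = np.array(0.1, requires_grad=True)
for step in range(50):
    theta = opt.step(circuit, theta)   # classical update, quantum evaluation

print(theta, circuit(theta))           # theta approaches pi, <Z> approaches -1
```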


Absolutely. That’s the lifeblood of AWS. Most of the other services are obviously growing incredibly fast, and that’s their life. Operational excellence is a mantra there in terms of managing scalability and latencies, and it obviously carries into what we do as well. It’s changing really quickly as we evolve the deployment model for these different types of hardware, and every one is different. They have their own different levels of operational maturity, and the devices have very different characteristics in terms of performance.
Even tasks like compilation, for example, are handled differently for different types of machines. The compilation itself can be a problem. It’s about dealing with these workflow components, managing queues and just trying to make things predictable. What we hear from customers over and over is that they just want a predictable experience. When they’re ready to submit their problem, they want to know when it’s going to get submitted and when it’s going to get returned, and have some degree of visibility and control over that. It’s very much an operational experience.
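That submit-and-wait workflow looks roughly like the following sketch against the Braket SDK; the managed simulator ARN is used for illustration, and a real run requires AWS credentials and account configuration.

```python
# Sketch of the asynchronous task workflow on Braket (illustrative only).
from braket.aws import AwsDevice
from braket.circuits import Circuit

# Managed state-vector simulator used as the target for this example.
device = AwsDevice("arn:aws:braket:::device/quantum-simulator/amazon/sv1")
bell = Circuit().h(0).cnot(0, 1)

task = device.run(bell, shots=100)   # returns immediately with a task handle
print(task.id, task.state())         # e.g. CREATED / QUEUED / RUNNING

result = task.result()               # blocks until the task completes
print(result.measurement_counts)
```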

Because these machines aren’t even in your data centers, right? They’re being housed and taken care of and coddled by their makers, right?

Yes, in the normal sense of the word, they are — they’re typically surrounded by people in white coats, and the calibration cycle of these devices is different from technology to technology, but that is changing. I think the hardware providers that we work with today do an amazing job. Not only are they trying to knock down huge scientific frontiers from an engineering and research perspective, but they are also working very hard on the operational aspects of their machines. They’re trying to minimize downtime, trying to manage the availability windows they can offer and trying to be more responsive.
When customers hit a problem on Braket — a lack of availability, say — we have worked together really hard to get those machines back online and make them available as quickly as possible. When a new device comes online and becomes available, it’s vital that we get it into the hands of researchers as quickly as possible. Switching from device to device, maintaining resiliency, conveying status to individual customers, relaying the current calibration data from every given machine and making it available to customers, so that they know how long ago that machine was calibrated and what the current calibration data is.
That stuff is extremely important to researchers. These machines are essentially unique. Even though they may use the same technology, every chip varies from every other chip. These are basically analog systems, and it’s important that developers know explicitly which chip they’re playing with and when that chip was last calibrated.
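A rough sketch of how a developer might inspect that device metadata through the Braket SDK follows; the properties schema varies by hardware provider, so the code prints the document rather than assuming specific calibration fields.

```python
# Sketch: list available devices and inspect their metadata via the Braket SDK.
from braket.aws import AwsDevice

# Enumerate devices visible to the account and print basic status.
for device in AwsDevice.get_devices():
    print(device.name, device.provider_name, device.status)

# For a specific QPU, the properties object carries the provider's published
# specs and calibration data (the exact schema differs per hardware provider).
# qpu = AwsDevice("arn:aws:braket:::device/qpu/<provider>/<device>")
# print(qpu.properties)
```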

When we do get to this point where one of these machines is doing something so great that everyone wants to do it — let’s say it’s running something on Qatalyst — will there be any kind of sharing, or is it instantly hidden away in private the minute a customer starts using Qatalyst, for example? I’m still wishing for a little more of a sharing ethos, to not stifle the industry, but I know that’s naive to hope for in the long term. Is there any shared library or something for people’s successes, so that you at least have the opportunity to share with other potential users of Qatalyst or anything like that?

Well, to your point, Konstantinos, I think the team that we have comes out of a space where sharing these types of advances is part of their DNA, and we would very much like to be able to accomplish that while, of course, maintaining the proprietary nature of what we’re doing.
I wouldn’t say it’s out of the question. People who gravitate toward quantum computing have a research background. They very much have a desire to share the benefits of that research with the group if they can. I would support that. Like I said, it’s within the boundaries of needing to retain the proprietary ownership of that information; if something doesn’t meet that test, then we could go beyond it. That’s our DNA as well. We’d like to be able to do that.



That may be the case, really. The fintech industry is probably going to be the most closed, and maybe the pharma industry as well, but fintech clearly is spending a significant amount of money to get that competitive advantage. I can’t blame them — they’re going to be investing a lot of effort into it — but generally speaking, as Richard points out, this will be an industry that will be geared toward sharing.


If you’re interested in QCI, in Quantum Computing Inc., you can feel free to contact us. You can go right to our webpage. It’s Quantumcomputinginc.com. We have [email protected], and we’re happy to respond to any inquiries there.

Yes, for Braket, it’s aws.amazon.com/braket.

Thanks again.


That does it for this episode. Thanks to Richard Moulds and Bob Liscouski for joining today to discuss how Amazon Braket is bringing technologies like QCI’s Qatalyst to the masses on the cloud, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World and maybe leave a review to help others find us. Be sure to follow me on Twitter and Instagram @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti. You can also find information on our quantum services at Protiviti.com, or follow Protiviti Tech on Twitter and LinkedIn.
Until next time, be kind, and stay quantum curious.