Transcript | Quantum-Safe Cryptographic Security with Suvi Lampila of SSH

Cracking RSA in web traffic is primarily what people think of when they hear about the quantum threat to cryptography. But there are lots of protocols in peril, especially in a typical corporate environment. Join host Konstantinos Karagiannis for a chat with Suvi Lampila from SSH to find out how the security giant is working on securing data flows you may not have considered.

Guest: Suvi Lampila, SSH Communications Security

K. Karagiannis: Cracking RSA is primarily what people think of when they hear about the quantum threat to cryptography. But there are lots of protocols in peril. Find out how security giant SSH is working on securing data flows you might not have considered in this episode of The Post-Quantum World. I’m your host, Konstantinos Karagiannis. I lead Quantum Computing Services at Protiviti, where we’re helping companies prepare for the benefits and threats of this exploding field. I hope you’ll join each episode as we explore the technology and business impacts of this post-quantum era. Our guest today is an SSH Fellow at SSH Communications Security, Suvi Lampila. Welcome to the show.

Suvi Lampila: Thank you for having me.

K. Karagiannis: You’ve been with SSH for over 21 years, so if ever there was a time to start with someone’s background, I’d say it’s in this conversation.

Suvi Lampila: I joined in 2001, and pretty soon, I moved into Secure Shell support, where I got exposed to some of the largest organisations in the world, trying to help them get their things in order. Then I’ve done quite a few professional-services gigs, so I’ve worked on the customer side, sorting things out — a practical approach to many of these things. And then, a few years ago, SSH was instrumental in getting the PQC of Finland started.
That’s a consortium of several cybersecurity companies, the government and academia, trying to figure out, what do we do with this quantum-threat thing — how do we tackle it with post-quantum cryptography? It was during this time that I got exposed to more of the quantum side of things. There was a quantum computing course run by a professor […] University that I took part in with some of my colleagues. And there, when we were playing with Qiskit, with the IBM small-scale quantum computers, it dawned on me that any teenager anywhere in the world could be doing exactly the same stuff I was doing right then. And it shifted my thinking, because until that point, I had thought that the threat would always come from some sort of nation-state, or large organisations like Microsoft or Google, which would have the resources to actually do something with quantum computers. But then I started thinking that it could be just anyone who has that one brilliant idea, and that’s all it takes to wreck everything — if they can come up with ways to improve Shor’s algorithm, because only the second step is done on quantum computers; there’s a whole lot of stuff that is done on classical computers. You make improvements on any of those sides, or come up with some fantastic algorithm for quantum computers that cannot necessarily be tested right now. But you get that idea, and then it could very quickly shift this thing — and then we would need to move faster than we anticipated. Fast-forward — now, I’ve been working in various product teams that we have at SSH, trying to guide our solutions to be quantum-safe going forward. And we have done quite a bit of work in that area already.

K. Karagiannis: That’s a great point you bring up: Could someone come up with some algorithm or some change or some efficiency that makes things work better?
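To illustrate Suvi’s point about Shor’s algorithm: only the order-finding step needs a quantum computer, and everything around it is classical number theory. A minimal Python sketch of that classical post-processing, with the order supplied by hand for a toy modulus (function name and toy values are our own, for illustration):

```python
from math import gcd

def factor_from_order(N, a, r):
    """Classical post-processing of Shor's algorithm: given the order r
    of a modulo N (the one step that needs a quantum computer), try to
    split N into nontrivial factors."""
    if r % 2:               # need an even order; otherwise retry with another a
        return None
    x = pow(a, r // 2, N)
    if x == N - 1:          # trivial square root of 1; retry with another a
        return None
    p = gcd(x - 1, N)       # at least one of gcd(x±1, N) is a nontrivial factor
    if 1 < p < N:
        return p, N // p
    return None

# Toy run: for N = 15 and a = 7, the order of 7 mod 15 is 4.
print(factor_from_order(15, 7, 4))  # → (3, 5)
```

Improving any of the classical pieces — or the quantum order-finding itself — shifts the whole attack, which is exactly the worry described above.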
And, of course, I always worry about, will machines accelerate faster than we think, or interconnect in other ways, so that we get the higher qubit counts and lower errors that will be needed? I was curious — what do you think about that Chinese paper?

Suvi Lampila: Well, I heard about the news when I was jetlagged in Singapore, and I was, like, “Great, I have to start explaining this to our customers.” And then, after some conversations with some of my colleagues, we came to the conclusion that, no, I should write a blog post about the RSA algorithm, saying that it’s a “The rumors of my death have been greatly exaggerated” type of thing. But the day will come when that news will actually be the case. And I want our customers — we have, like, over 5,000 of them, over 40% of the Fortune 500 — to be in the position that when that news comes in, they’re able to pick up the phone, listen to the scared person at the other end and confidently say, “It doesn’t concern us, because we moved to PQC long before this affected us.” RSA is such a great example. It’s used in that Financial Times article, for instance, to explain quantum computers. But it’s not only RSA. We have to worry right now more about Diffie-Hellman and elliptic-curve Diffie-Hellman — the key-agreement side of things. We need to get that sorted out. And that’s one of the gripes I have with post-quantum as a term, because it gives people the illusion that we have all this time, because it’s something that happens after. And I’m there trying to say, “No, your secrets exist right now. You have to do something right now” if they have any value in the years after this time, when you’ve been transmitting them across the internet with our current encryption protocols. One of the best things I heard was when we had our customer advisory board in London a few months back, and we met with some of the large financial customers and their technical people.
They have this morning prayer, which goes, “Please don’t break RSA before I retire.” And I love that, but I hope that I don’t have to wait until my retirement. But it’s a possibility, being in this business for decades, and I don’t know how many decades I have left.

K. Karagiannis: It’s harvest now, decrypt later, obviously. […] theorem comes into play here: Your secrets have a shelf life. Then there’s the amount of time it takes you to migrate to post-quantum cryptography. That’s a lot of years. And we’re going to have a machine before then.

Suvi Lampila: I have no scientific evidence to back this idea up, but my gut feeling says that long before we get to the stage where the quantum computers — and I like to talk about usable qubits, because sometimes dirty qubits will do the trick — long before we have the perfect quantum algorithm that will break, for instance, RSA, we will have something that will break the current encryption, something that will weaken it to the point where we’re going to be scrambling. So stop worrying about the high-fidelity gates, the number of qubits, whatever kind of qubit volumes you want to phrase it in, and start worrying about what will happen when that happens. Is it going to be an existential crisis for the organisation, like it is for financial institutions, or is it going to be just a mere embarrassment that your secret data gets exposed?

K. Karagiannis: That’s a great point. There is that gut feeling that something could come in between. We’re already dealing with this. We have hybrid quantum-classical approaches solving problems in innovative ways. We have quantum-inspired approaches solving problems on classical hardware in innovative ways. Why couldn’t there one day be something — maybe even that Chinese paper, evolved?
Suvi Lampila: And if you think about the Chinese paper, it was pointed out at some point that they built on some prior research, and then somebody else is going to build on this, and one day, somebody’s going to claim the prize for breaking RSA, because there is a cash prize if you are able to do that.

K. Karagiannis: Yeah. And I’d love to put a link to your blog in the show notes so folks can check that out if they want to dig deeper.

Suvi Lampila: It’s on the company side of the blog that I talk most about these sorts of things. My own personal blog is about the Eurovision Song Contest and other nonquantum stuff.

K. Karagiannis: Quantum stuff is going to threaten that too, because it’ll enable better AI, which will write new songs for Eurovision. And the winner will be a chatbot. Let’s take a step back. What does a fellow do at SSH?

Suvi Lampila: I love how my CEO introduces me at some of these things. He often explains that he’s the CEO of the company: “And this is Suvi, our fellow, and Suvi does what she wants.” And I like that, even though that’s not exactly the description of what I do. But there are only a handful of us. There’s […], the senior fellow who invented the Secure Shell protocol. Then there’s […], another senior fellow who’s still active in the IETF, getting the IPsec side of things in order. For instance, in the Internet Key Exchange, that’s the furthest along in standardisation. We will have, quite soon, a framework where you’re able to combine any number of classical algorithms with any number of PQC algorithms to fortify the key exchange. For instance, you could handle a situation where you have to follow different jurisdictions’ recommendations. You could take FrodoKEM, which the German BSI recommends, combine it with the American- or NIST-recommended CRYSTALS-Kyber, and then throw in elliptic-curve Diffie-Hellman for good measure. And everybody is happy in their own jurisdiction. So that’s possible to do in IPsec.
That’s what those guys did. And then there’s me. I’m the third fellow. Is it a title or a job or a position? I don’t know how you want to define it, but I got this just a few years ago, so it’s been a new thing for me. And there hadn’t been a fellow at the company for 20 years, so it’s something I take very seriously, that I was appointed. I’ve mostly worked on our Tectia product for the past few years, which is the Secure Shell implementation that we have that is used by large governments. It does all sorts of things that OpenSSH doesn’t quite do. There are X.509 version 3 certificates in it — there have been for quite some time — and we are able to do more streaming types of transfers and things. I’ve been there trying to figure out, what sort of combinations of these PQC algorithms should we put in there, and which parameter sets? And, obviously, we have a team, so it’s not just me making all these decisions on my own. That has been one side of it, and the other side has been doing these sorts of podcasts, going around the world speaking at various things, and getting our customers up and running with their PQC migration programmes and such. It has been very interesting. I need to figure it out on my own. Nobody else tells me what to do, and then, at the same time, I get to blame only myself if I have a boring job. It’s a little bit of coding here and there and helping customers on one side, and planning the future.

K. Karagiannis: Is the coding for rolling out something in a unique environment for a customer, that kind of thing? Or is it developing a new module?

Suvi Lampila: It varies. I’ve done front-end stuff, mostly. For instance, we have this trusted-mode OCSP, something on the PKI side that we had to put in place when we had to update — because PKI is kind of a curse word.
If you go into some sort of customer setup where we have to bring up PKI, I get sighs at the other end, because they immediately think it’s going to be a horrible process — a project from hell; it has a bad rep. And we are partly to blame, because some of the PKI standards also originated from SSH. Obviously, we’re mostly known for Secure Shell, but we have contributed quite heavily to PKI standards and IETF things on the IPsec side. Many of the problems stem from how PKI has been deployed in the past. You can’t always blame the PKI itself — sometimes it’s just the deployment. We have a whole bunch of features in our solutions that try to tackle those issues — for instance, large CRLs. Trusted-mode OCSP is one way of tackling the 150-megabyte CRLs of some large PKI deployments. One of the things I was doing at this past RSA, having conversations with other industries and other organisations there, was asking, “Could we please try to fix some of the issues that exist with PKI deployments?” I’m hoping that while we have to touch that thing in any case when we move to PQC, maybe we could do things a little bit better. And I had some fun responses. Some people called me idealistic, and others said, “No way are we going to be able to convince these large organisations to do things differently” — like it’s Mission: Impossible.
And others were, “We’re all on board — let’s do something about it.” But then you have to understand that these problems with public key cryptography have to be tackled in various stages. Right now, we have to worry about the key-exchange side of things, because that has that retroactive harvest now […] attack on it. But then, when it comes to signature algorithms — authentication and certificates — sufficiently long RSA keys, ECDSA keys or Edwards-curve keys are good until we have that cryptographically relevant quantum computer. We need to move. I’ve been part of those things where we have gone from SHA-1 to SHA-2, and I know how long a process that has been with some of the largest organisations in the world. We need to start moving, obviously, but there’s a little bit of a hiccup there, which is that the key-exchange side was something that we at SSH were able to solve and address a year ago. We could combine elliptic-curve Diffie-Hellman, for instance, which is […]-certified, with CRYSTALS-Kyber — the preferred PQC algorithm that NIST is going to standardise. That was easy to do, because the key-derivation function is […]-certified, so we are able to keep the compliance while we do this hybrid mode for the key exchange. And those conversations with various organisations are, like, “Yes, the industry is pushing” — and we’re not the only ones doing this, by the way. Industry is pushing for this hybrid approach for the key exchange because we don’t have that much confidence in the PQC algorithms. Let’s face it — they haven’t been around long enough. So we want to make sure that if something is discovered, we’re not, at least, worse off than we were before. So it’s going to be at least as strong as each of those components.

K. Karagiannis: Because you still have the other mechanism.

Suvi Lampila: Yeah.
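The hybrid construction described here — both shared secrets fed through one key-derivation function — can be sketched in a few lines. This is an illustrative HKDF-style combiner, not the actual Tectia or IETF wire format; the function name and context label are invented for the example:

```python
import hashlib
import hmac
import os

def hybrid_session_key(classical_secret: bytes, pqc_secret: bytes,
                       context: bytes = b"hybrid-kex-demo") -> bytes:
    """Combine a classical (e.g. ECDH) shared secret with a PQC KEM
    (e.g. Kyber) shared secret via an HKDF-style extract/expand.
    An attacker must break BOTH key exchanges to recover the output,
    so the result is at least as strong as the stronger component."""
    ikm = classical_secret + pqc_secret                              # concatenate inputs
    prk = hmac.new(b"\x00" * 32, ikm, hashlib.sha256).digest()       # extract
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest() # expand (one block)

# Stand-ins for real ECDH / KEM outputs (random here, illustration only):
key = hybrid_session_key(os.urandom(32), os.urandom(32))
print(len(key))  # → 32
```

The design point is that a well-studied, certified KDF does the combining, so compliance is preserved even though one input comes from a young PQC algorithm.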
And then, even going forward, it’s probably a more difficult task to crack this combination of elliptic-curve Diffie-Hellman with a PQC algorithm than it would be with just the PQC algorithm alone. But then we come to the other side of the public key cryptography thing, which is the signature algorithms. There, at least, I haven’t discovered any feasible, practical ways of doing hybrid signature algorithms, because signature algorithms need to stand on their own merit. This is a bit more of a hairy situation. It also requires more industrywide effort, because we cannot solve it by ourselves. It involves all the CAs — and not just the public CAs, but all the governmental and organisational, corporate-CA types of implementations and such. So that is going to be a difficult thing. And even though, already now, in Tectia, our Secure Shell implementation, you could do elliptic-curve authentication, for instance, with certificates, and you could, at the same time, require RSA-based certificates, it’s not very user-friendly if you have to plug in a couple of different smart cards or other hardware-security-module devices to use it. I’m not convinced yet that there is a beautiful way of doing this hybrid thing. We might have to go to CRYSTALS-Dilithium as a stand-alone thing at the point where we actually have those standards. But we couldn’t wait for the standards when it comes to key exchange, because we needed to protect the secrets that our customers had already last year, or even before. In some sense, we’re already way too late in securing that side.

K. Karagiannis: Yeah, let’s double-click on some of this and dive into how you’re implementing this in the real world. It’s through NQX?

Suvi Lampila: NQX is our IPsec-based encryptor product. It is able to do site-to-site. You could potentially also get your laptop hooked into this encryption device, but in most use cases, it’s for site-to-site.
You put everything in it: Ethernet frames, IP, the whole nine yards — everything goes through that. And it gives you peace of mind. If you don’t want to figure out so much, like, how you actually use TLS and what have you in your application-level protocols, you just put everything into a quantum-safe tunnel, and you’re good to go. But then there’s also our Tectia, which, like any Secure Shell implementation, is able to do port forwarding, so you can create tunnels and secure not just remote access or SFTP file transfers — you can take any TCP application and tunnel it through Secure Shell. We have made some enhancements to how we can do dynamic or transparent tunneling with Tectia. For instance, one of the use cases that we have — which is kind of a funny story in itself — when I started in 2001, I was told, “Don’t worry about IBM mainframes, or mainframes in general, because they’re a dying breed — dinosaurs, going away.” And then, what do you know? A few months ago, we released the upgrade to our Tectia on IBM mainframes, because even though IBM is throwing bucketloads of money into the quantum-computing side of things, making sure that they will have the superconducting quantum computers, there didn’t exist anything to secure the current communications on IBM mainframes. So we’re happy to do that for our customers, obviously. And how we do that is, we capture what would be insecure connections — or, for instance, you could do the same thing with connections protected by TLS 1.2, which are not going to be quantum-safe. You could capture that traffic and put it into a quantum-safe tunnel, and be happy with it while you figure out how you’re actually going to upgrade from, for instance, TLS 1.2 to 1.3, which is a prerequisite before you’re able to put the PQC algorithms or the hybrid algorithms in.
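The generic tunneling idea — wrapping a legacy TCP protocol inside a quantum-safe Secure Shell channel — can be sketched with standard port forwarding. OpenSSH syntax is shown here for familiarity (Tectia’s client accepts an equivalent option); the host names and ports are hypothetical:

```shell
# Carry a legacy database protocol inside an encrypted SSH tunnel.
# Clients connect to localhost:15432; the SSH channel carries the
# traffic to legacy-db.internal:5432 behind the gateway.
ssh -N -L 15432:legacy-db.internal:5432 admin@gateway.example.com
```

The legacy application needs no changes: it simply connects to the local forwarded port while the quantum-safe (or quantum-safe-capable) SSH channel does the protection in transit.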
As a company, while we push the standards forward and want things to move on that side, we still have the legacy to deal with. And that is also something that quite often slips people’s minds — when we have something new and shiny in the future, you forget we have all this other stuff that is still used today. And if I’ve learned anything in my 20 years, it’s that you have these grand plans for how you’re going to sunset some sort of system — this happens, and we’re going to upgrade it — and then it doesn’t quite go the way you planned, or in the time frame you had. My advice to everyone is, plan some sort of quantum-safe solution that you’re able to use during this time when you’re transitioning. That data is still incredibly valuable even if you’re handling it with some sort of ancient system. There’s a reason you’re using something old. You don’t use it just because you want to use something old. You use it because there is some business need behind it, and you should be thinking about what those critical things are. I’m so glad that the U.S. government has done its bit here, because pen has been put to paper in the White House on several occasions, and that has changed the trajectory of how we get pushed to do things — not just the industry, but how the customer side gets pushed to do things.

K. Karagiannis: I did a whole episode on the White House memorandum — the NSM-10 document.

Suvi Lampila: Yes, I did listen to that. Thank you for the education for the masses on that one.

K. Karagiannis: I figured that that’s going to hit. Industry regulators are going to just cut and paste that. They’re not going to make up their own new knowledge on the subject. The money was spent, the research was done. This is what the private sector can expect.

Suvi Lampila: Yeah, certain verticals. And that’s also a unique position at SSH — we get to see into all of these: the financials, the governments, the healthcare and all that.
And that’s one of my personal things that I feel so torn about — the fact that the healthcare industry is not moving, because the data they’re protecting — your DNA, your health records — will never expire. That information should be kept secret until the end of days. In Finland, even when a person is deceased, you’re not supposed to get their health data — not even your immediate family member’s data is supposed to be released — so it’s supposed to be kept secret. But healthcare will not move until somebody whips them into it. Whereas the financials in our customer base, we’re seeing that they have their ducks in a row — they’re moving. Some of them are further ahead on this than others, but they’re all acknowledging the problem. They’re moving, but it also comes down to how they are regulated, because it doesn’t matter how they lose the primary account numbers of their customers — if they lose them, that’s it. They incur incredible fines for that. And then we come to the point where, when there are viable production-ready solutions out there in the market, you cannot hide behind “We didn’t have standards yet” or “We didn’t know better,” because we’re already at that stage. You’re able to do this with that hybrid approach: combine something that is required by the compliance, do it safely with a well-studied key-derivation function, put the PQC part in there and get the key-exchange side of things sorted.

K. Karagiannis: You mentioned earlier that you’re part of a consortium over there. How do all those other groups view NIST? I view it as, like, is this going to be a major moment that everyone else is just going to follow? Or are other countries and other consortiums going to be, like, “That’s cute, but we’re going this way”?

Suvi Lampila: Everybody will follow what NIST is doing, but they will not necessarily adopt what NIST recommends.
For instance, BSI, the German standardisation body, and the French have chosen, for instance, FrodoKEM. That didn’t make it past the third round, but FrodoKEM wasn’t disqualified from the competition because of concerns about its security. I think it was set aside because they felt that CRYSTALS-Kyber, being the fastest of the lattice algorithms, was more suited to the use cases they were thinking of. And then the Germans just went with a more conservative approach, because FrodoKEM is not based on a structured lattice. Did history play a part here — that they want to be on the safer side, thinking of what has happened in the past with some of these encryption things? It also comes down to, how do we know that a cryptographically relevant quantum computer exists? There are parties in this world that will not tell us if it exists. There’s also that sort of thing — when NIST makes recommendations, are they informed by something that they’re privy to, some information? How does it work? When we have to make those decisions, what kind of options do we offer for our customers in Tectia? We chose the level-five parameter set, because we didn’t feel that the level ones were future-proof enough. Level three at this stage doesn’t make sense, possibly, not for these online protocols that we mostly deal with — it might change with the standards, but most likely not. When we made those decisions, it felt really nice when CNSA 2.0 came out last September — after the releases that we had made — and said that in the future, for all classification levels, level five is required, which means that we target the 256-bit security level. That makes things easier on the practical side, because we don’t have to think now about what level of environment we’re dealing with and which algorithms we choose. It reduces the complexity.
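For reference, the NIST security categories mentioned here are defined relative to attacks on well-understood classical primitives. A quick summary as a Python mapping (purely illustrative; the variable name is our own):

```python
# NIST PQC security categories and the classical attacks they are
# benchmarked against (levels 2 and 4 are collision-search benchmarks):
NIST_LEVELS = {
    1: "at least as hard as key search on AES-128",
    2: "at least as hard as collision search on SHA-256",
    3: "at least as hard as key search on AES-192",
    4: "at least as hard as collision search on SHA-384",
    5: "at least as hard as key search on AES-256",
}

# Level five is the 256-bit target discussed above:
print(NIST_LEVELS[5])
```

Choosing level five across the board, as described, trades some performance for one uniform, maximal target.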
It reduces the chances of human error when there’s pretty much only the one choice. And for those customers who are a bit iffy about NIST, the option they can use with our products is elliptic-curve Diffie-Hellman with Edwards curves combined with FrodoKEM, for instance. Also, on the Secure Shell side of things, we’ve implemented the OpenSSH choice — Streamlined NTRU Prime combined with Edwards curves. It’s not the level-five parameter set, but it’s there for interoperability reasons. And then, when we went ahead, we had to also implement at the time — because we didn’t know what NIST was going to choose — FireSaber, the strongest parameter set of Saber, combined with elliptic-curve Diffie-Hellman. But that was removed from the defaults last August, after NIST came out with its round-three selection. You have these options, but AES, which is the standard everywhere in the world — its […] standard — didn’t start its life as American. None of these PQC algorithms started their life as American in any way. They’re very much an international effort. Many Finns have worked on various of these algorithms. AES started its life as Belgian. It was Rijndael, and it was more a Belgian algorithm than anything else. NIST standardised it — nobody questions it anymore like that. It’s a NIST-chosen thing. And by the way, there’s something that I find odd in many ways: When we talk about these things, we should make a clear distinction that even though Grover’s algorithm on the quantum computing side attacks symmetric algorithms as well, AES — as far as we’re aware, unless there’s some sort of new attack vector discovered in the future — is going to be quantum-safe for the foreseeable future. We don’t have any reason at this stage to change the symmetric algorithm.

K. Karagiannis: It’ll be weakened — AES 256, for all intents and purposes, becomes AES 128.
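The “AES 256 becomes AES 128” rule of thumb comes straight from Grover’s quadratic speedup over brute-force key search; a toy illustration (the helper name is our own):

```python
def grover_effective_bits(key_bits: int) -> int:
    """Grover's algorithm searches N = 2^k keys in roughly sqrt(N) = 2^(k/2)
    quantum queries, so a k-bit symmetric key offers about k/2 bits of
    security against an ideal quantum attacker."""
    return key_bits // 2

print(grover_effective_bits(128))  # → 64
print(grover_effective_bits(256))  # → 128, still far out of practical reach
```

This is why doubling the symmetric key size — not replacing the algorithm — is the standard quantum mitigation for AES.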
Suvi Lampila: And it’ll be long after you and I — and our children and their children, and so forth.

K. Karagiannis: It just weakens it — it doesn’t defeat it.

Suvi Lampila: Yeah, and then you have to think about the use cases for it. For instance, in these online protocols like Secure Shell and IPsec, we […] keys frequently. For instance, in Tectia, it happens every gigabyte, or every hour. While that is not quantum-safe — I like this term that I heard when I was in Singapore — it’s quantum-annoying. It makes it more difficult to crack when you’re constantly changing the session keys within a session. Now, TLS can sometimes be a little more tricky, because with TLS you can sometimes keep using the same session. If you do a restart on TLS, that’s a slightly different thing. And then, of course, when you talk about data-at-rest situations, I wouldn’t use AES 128 for that at this stage. I would definitely use AES 256 — which probably, hopefully, everyone is already doing. And then, of course, there are a whole bunch of other problems. I’m more focused on those transport-protocol types of things. But of course, as a company, we do other things as well. We have email-type encryption — secure stuff for that — and we do document-signing things. And sometimes, those signature algorithms come up. As an example, how horrible would it be if you could no longer prove that the deed to your house was signed by you? But we have mechanisms for that. That was one of the topics mentioned at this year’s RSA — you can use some sort of ledger to prove that it was actually signed in the year 2022 and not by a cryptographically relevant quantum computer in the future. But then you have to understand where they’re used. In these transport protocols, when it comes to authentication, the challenge is sent, and it’s signed right there and then. So it has to be a PQC algorithm for those use cases — hopefully, sooner rather than later.

K. Karagiannis: You brought up a lot of points that people just are not considering. The subject matter is usually just RSA. That’s all you ever hear. It’s just that one thing.

Suvi Lampila: It’s easy to explain, because you can explain it from the historical point of view: In the past, we could just double the key size of RSA, and we were good to go. And then we’re, like, “What about the mod-p groups of every single VPN?” — and Secure Shell is using that stuff too. Those are the standards, and they suffer a similar fate to RSA going forward.

K. Karagiannis: People need to understand that this is going to be about, we’re going to have new standards, we’re going to have new tools, we’re going to have to look at what legacy things still need to be implemented, where the crown jewels are. This is like an assessment and a walk-through and an implementation of protecting crown jewels — critical things. It’s not just, like, “We plug in one little thing, and we’re good.” This is going to be a major thing people have to start addressing, and have their hands held, possibly.

Suvi Lampila: Exactly. I often tell people, remember how painful it was going from SHA-1 to SHA-2, and how long it took? One of the reasons we were able to get this key exchange into production-quality software already is that we have done this before in Tectia. In 2011, we implemented the SHA-2 variants in Tectia, ahead of the standards. We were able to do that because the Secure Shell protocol has been designed so that you can easily do these @ssh.com extensions without breaking interoperability with everybody else. TLS wasn’t planned with that in mind. Only in the latest versions do they actually send these bogus protocol messages to make sure that the other side will not trip over its feet when it encounters something it doesn’t understand. We did a bunch of stuff with SSH1 that we’re not proud of.
We’re the first to admit that there was a man-in-the-middle attack with it, and a few other things. We fixed a lot of things already at the beginning with the SSH2 protocol that TLS 1.3 is only now addressing. We were able to do this because we did SHA-2 in 2011. Then our implementation eventually became the standard in Secure Shell, and our customers had been compliant for years before it became the standard. Now we’re hoping the same thing happens with these post-quantum hybrid key-exchange algorithms in Secure Shell. It was 2011 when our customers were able to use SHA-2 algorithms. Last year, we thought it was a good time to finally get rid of SHA-1 — to disable it in the use cases where it matters. There’s only one use case in these transport protocols where you’re still allowed to use SHA-1, and that’s HMAC, because that is not based on the collision resistance of the hash. And even that is going to be removed in the future, because it’s too complicated for customers to understand where you can and cannot use SHA-1. We removed SHA-1 from the defaults last year, thinking that we had given our customers over 10 years to migrate, and they thought that they were already past the migration from SHA-1 to SHA-2. And guess what? Lights lit up like a Christmas tree in some of the organisations, because even when they thought they had done everything they needed to do to get rid of SHA-1, they had not. And for some of them, it came as a bit of a surprise that they still had something in some corner of the organisation using SHA-1 algorithms on the Secure Shell side of things. I’m telling them that the good news is that we’re going to get to do a whole lot of work together again. And the bad news is that we have to do it faster —

K. Karagiannis: And it’s way more insidious and everywhere than that SHA situation, right? This is everything. It’s going to be a big problem.
And I’m glad you brought up a lot of these points today, because people aren’t thinking about all the facets of it, and I appreciate that. With SSH, of course, it’s one type of tunnel, but obviously, as you point out, for point-to-point, you can do a wrapper and have everything go through it. So there’s a lot at work here, and when I think of companies that want to start protecting critical data now, these types of solutions are useful, because point-to-point might be where they’re worried, or they’re just worried about their admin logging into a server and that kind of information being obtained. So, these are more of the tools in the toolbox for getting us there sooner. Do you want to leave us with any thoughts about what you feel is coming next month, after this airs? We are expecting more surprises from NIST. And then, of course, in 2024, we’re expecting the standard. Do you have any last thoughts on what you see coming down the pike?

Suvi Lampila: I don’t expect too-drastic changes. I would be very surprised if there were a big news bomb. It’ll just be a continuation of what has already passed. Don’t quote me on that, because whenever people make predictions, things often don’t end quite the way you predict. But when transitioning to PQC, when you’re thinking of what you’re going to do next and how you’re going to solve some business problem, have that in the conversation. Ask your suppliers, “How are we going to get there?” Even if their solutions don’t have it right now, they should have at least some plan for how they’re going to get there. We think that PQC, post-quantum cryptography, is the most economically efficient way of getting us ready in time, because it’s available now. It’s not some promise of something that will come at some point.
What is more important, even if we had all the resources to run optic fibre all over the planet, with satellite connections and beautiful weather so those satellite connections could actually be used, you would still lose that end-to-end security. And I don’t want to go back to the days when I would have to forgo my mobile phone and use a fixed-line telephone again over optic fibre, or stand in a doorway trying to get line of sight to a base station with my mobile phone. I’m quite happy with this Finnish invention the way it is. Getting PQC in there is not the most glamorous thing, especially compared with all the other stuff you have on your podcast: quantum key distribution and other fantastic phenomena. But it is a robust approach that can be deployed with proven, in-use technologies, efficiently and safely, now, when it actually matters.

K. Karagiannis: QKD is fancy, but it is point-to-point. It’s super limited. Suvi, thank you so much for coming on. I appreciate this. This is going to be one of our longer episodes because there’s just so much to say. I thought this was just great. Thank you.

Suvi Lampila: Thank you. This was really fun, and I hope to get to see you at some point in real life as well. I’m sure we’ll bump into each other at some of these events that we go to.

K. Karagiannis: Now, it’s time for Coherence, the quantum executive summary, where I take a moment to highlight some of the business impacts we discussed today in case things got too nerdy at times. Let’s recap. It’s hard to say exactly when quantum computers will pose a threat to cryptography. We might get a sufficient number of error-corrected qubits sooner than expected, or we may see new encryption-cracking approaches that require fewer qubits. Whatever happens, we know the threat is approaching, and regulators will force the migration to post-quantum cryptography long before the threat arrives.
SSH Communications Security, named for the famous Secure Shell protocol, has been protecting all types of data in motion since 1995. In anticipation of NIST’s new cipher standards coming in 2024, SSH has already started using a hybrid approach in which existing ciphers are combined with post-quantum ones. Should the classic cipher fall to quantum computing, the post-quantum cipher will prevent decryption. And if the new PQC cipher has a flaw, for now, we’re no worse off because of the existing cipher’s encryption. SSH has developed a number of PQC solutions that also protect against classical attacks. A few notable ones include NQX, a quantum-resilient encryption solution for transporting Ethernet and IP traffic across any network, private or public; Tectia Quantum-Safe Edition, an application-layer encryption solution for securing data in transit in TCP/IP networks, including file transfer, application-to-application, machine-to-machine and secure remote access; Universal SSH Key Manager, or UKM, which can enforce the usage of quantum-safe KEX algorithms; and SSHerlock, a post-quantum resilience discovery and audit tool that aids in transitioning existing Secure Shell estates to quantum-safe security. Even though we’re all awaiting the NIST standards, SSH is in Finland, and Suvi points out how cryptography is not always of U.S. origin. It will be interesting to see how the world works together to start addressing the quantum threat, especially after the standards come out next year. That does it for this episode. Thanks to Suvi Lampila for joining to discuss SSH’s approach to post-quantum cryptography, and thank you for listening. If you enjoyed the show, please subscribe to Protiviti’s The Post-Quantum World, and leave a review to help others find us. Be sure to follow me on all socials @KonstantHacker. You’ll find links there to what we’re doing in Quantum Computing Services at Protiviti.
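The hybrid approach described in the recap can be sketched in a few lines: the session key is derived from both shared secrets at once, so an attacker must break both the classical exchange and the PQC exchange to recover it. This is a minimal illustration of the idea using only the Python standard library; the function name, the use of SHA-512, and the placeholder secrets are assumptions for the sketch, not SSH’s actual implementation.

```python
import hashlib
import secrets

def hybrid_kdf(classical_secret: bytes, pqc_secret: bytes, session_id: bytes) -> bytes:
    """Derive a session key by hashing both shared secrets together.

    Breaking only the classical exchange (e.g. with Shor's algorithm)
    or finding a flaw in only the PQC scheme leaves the attacker one
    unknown input short, so the derived key stays secret.
    """
    return hashlib.sha512(classical_secret + pqc_secret + session_id).digest()

# Placeholder outputs of the two key exchanges (random stand-ins,
# not real ECDH or KEM results):
classical = secrets.token_bytes(32)  # e.g. an X25519 shared secret
pqc = secrets.token_bytes(32)        # e.g. a lattice-KEM shared secret
session_key = hybrid_kdf(classical, pqc, b"session-1")
```

Both peers compute the same derivation from the same two secrets, so no extra round trips are needed beyond running the two exchanges side by side.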
You can also DM me questions or suggestions for what you’d like to hear on the show. For more information on our quantum services, check out Protiviti.com, or follow @ProtivitiTech on Twitter and LinkedIn. Until next time, be kind, and stay quantum-curious.