The Cryptographers’ Panel

Each year the founders and leaders of cryptography take to the RSA Conference keynote stage to debate and discuss the most pressing issues facing the cybersecurity industry and our increasingly digital society. And each year, the stakes seem to be higher. Join this year’s panel to learn what’s top of mind for 2022.

Video Transcript

>> ANNOUNCER:  Please welcome Chief Scientist and CEO of Aura Labs, Zulfikar Ramzan.


   >> ZULFIKAR RAMZAN:  Welcome to RSA Conference 2022. It's hard to believe that two years have passed since we were last on this stage, and while so much has happened and so much is different, there is at least one constant that has remained throughout my time at RSA. I wore the same shoes.


   Converse, you can thank me later and I will take the check when you want.


   As you know, this year's theme is transform. And I'm super excited, I'm proud, and I’m honored to welcome to the stage four individuals whose foundational contributions have truly and unequivocally transformed our industry.


   Please join me in welcoming Adi Shamir, the S in RSA, Whitfield Diffie, Dawn Song, and Moni Naor.


   So, I want to start off with a topic that we actually discussed this time last year, and that's the non-fungible token, or NFT, which I’m sure many of you are familiar with. It's a super hot area right now, and it's been through ebbs and flows, and I want to start with this panel's perspective on where we are with respect to NFTs. And I thought no better person could start this conversation than Moni Naor, who, in 1993, together with Cynthia Dwork, invented the idea of the proof of work protocol. That's been a foundational element in many of these technologies.
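The idea Dwork and Naor introduced, a computation that is moderately expensive to perform but cheap to check, is most often illustrated today with a hash puzzle. A minimal sketch (SHA-256 and the function names here are my choices for illustration; their original paper used different pricing functions):

```python
import hashlib

def prove(challenge: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that SHA-256(challenge || nonce) starts with
    `difficulty_bits` zero bits: ~2**difficulty_bits hashes to find on
    average, a single hash to check -- the asymmetry proof of work needs."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """One hash suffices to check the proof."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = prove(b"stamp this email", 16)   # ~65,000 hashes on average
assert verify(b"stamp this email", nonce, 16)
```

Raising `difficulty_bits` by one doubles the prover's expected work while the verifier's cost stays at one hash.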


   So, maybe starting with you, Moni, can you give us a perspective on where we are with NFTs and where do you think it is going to go?


   >> MONI NAOR:  Well, I'm not in the business of making predictions, but NFTs are very easy to make fun of, and you can really be cynical about them. And, of course, we are all aware of what has happened to their value in the months since, I guess, you last met here.


   But I think the interesting issue with NFTs is where you actually store whatever it is you are buying. You can't store it on the chain because it's too big, so the question is how do you develop preservation techniques to preserve the actual object you are buying? You want to develop a whole ecosystem for preserving, let's say, images or what have you.


   >> ADI SHAMIR:  I want to add that the numbers are not lying. Since last year, since the peak, sales of NFTs have gone down by 92%. This can be exemplified by one of the most famous cases, where a high-tech entrepreneur in the US bought Jack Dorsey's first Tweet on Twitter. It had five immortal words, "just setting up my twttr," and he paid $2.9 dollars for this NFT.


   >> WHITFIELD DIFFIE:  $2.9 million.


   >> ADI SHAMIR:  Sorry?


   >> WHITFIELD DIFFIE:  $2.9 million.


   >> ADI SHAMIR:  $2.9 million, yes. So, the word "just" is worth about $600,000. And about a year later, a few months ago, he tried to sell it. He put a floor price of $48 million, and the highest bid he got was $14,000. So, you can see how much it has cooled.


   >> ZULFIKAR RAMZAN:  Phenomenal. Like I said, we’ve seen that, you know, the non-fungible tokens have maybe not panned out as we thought they would. I want to switch to fungible tokens, and of course, the biggest example of that are cryptocurrencies.


   Now, a couple of years ago we had a very spirited debate because, as many of you know, crypto stands for cryptography. But the people out there trading in cryptocurrencies co-opted our term and started using crypto to mean something else.


   With the dwindling prices of these cryptocurrencies, I think we cryptographers have a chance to reclaim our original term.


   I wasn't expecting applause on that line. I guess many of you have had really bad luck with your crypto portfolios.


   But I want to switch to Dawn for a moment.


   >> WHITFIELD DIFFIE:  I was told that somebody showed up at the Crypto Conference in Santa Barbara two years ago and was furious and wanted his money back because this was a misrepresentation. This conference wasn't about crypto.


   They pointed out, well, it was the 40th anniversary of the conference.


   >> ZULFIKAR RAMZAN:  Yes. Forty years this past year. So, I mean, so, Dawn, I know that you spent a lot of time thinking about cryptocurrencies and blockchain and thinking about some of the foundational issues. What do you see as some of the challenges in this space and how do you see it evolving?


   >> DAWN SONG:  Yeah. That's a great question.


   Blockchain, web3, and cryptocurrency, I think, is a really fascinating area, and there are so many open challenges that at Berkeley we recently started a new center, the Center for Responsible, Decentralized Intelligence, to study the different research challenges in blockchain and web3. Here I can give a few examples.


   So, first is scalability: in particular, how can we develop scalable infrastructure with decentralized trust? To give you one example, I think many people have heard of Ethereum, the first smart contracts platform. It has many scalability issues. As just one data point, Yuga Labs recently sold NFTs for their digital land deeds, and in a very short period of time, they sold NFTs worth $300 million. However, more than $180 million worth of Eth was burned just to make those transactions, because of the congestion, the high gas fees, and so on.


   So, that's just one example: when you are just trying to sell some digital land deeds, the chain is not sufficient to support the transaction load.


   So, we definitely need a lot of new work in how to build scalable infrastructure, but at the same time, still support decentralized trust. So, that's number one.


   Another example is security. Of course, this is a security conference, and security in blockchain, cryptocurrency, and web3 is super important as well.


   So, at RSA, people talk a lot about attacks and how much financial losses have been caused by these attacks, and in the crypto world, when you have these losses, they are actually really real.


   So, for example, even just this year, with only half the year gone, the financial losses caused by attacks in the blockchain/crypto world have already been over a billion dollars. The Ronin Bridge attack alone caused over $600 million of loss, just stolen by attackers, and that's just one example.


   And this really requires new technologies for providing better security: better security for the infrastructure, for applications and application developers, and also for users, and technologies that enable provable guarantees about the security of these systems.


   And then, just to briefly mention another example: privacy. In particular, even though people think Bitcoin and Ethereum are private, they actually are not; they only provide pseudonymity, and there is still a lot of information that can be gleaned from the chain. So the question is how we can develop better privacy while still enabling compliance and auditability, to enable what we call auditable privacy.


   >> ZULFIKAR RAMZAN:  That’s phenomenal. And you know, it’s interesting. I’m sure, Moni, you never predicted, when you wrote the paper with Cynthia, that that one paper would have a bigger impact on global warming than probably anything else imaginable. So, I think –


   >> MONI NAOR:  That's true, though if you recall that paper, we suggested shortcuts for the computation, so that actually doing the proof of work would be the exception, not the rule.


   >> ZULFIKAR RAMZAN:  So, I think it's a great example where we are seeing struggles with design in general in these types of systems. And, you know, if I could kind of pivot a bit, I think another area where we see design issues come up specifically is around mobile.


   When you think about mobile and its form factor, things like passwords and biometrics come up. How do you start to think about this landscape in a meaningful way?


   I know, Whit, you spent a lot of time thinking about this particular area. I would love to get your perspective on mobile and passwords and biometrics.


   >> WHITFIELD DIFFIE:  Well, I – I think I may be the world's last big fan of passwords. I can probably get them cheap as people sell off.


   I think security officers like other things, like biometrics, because that gives them more control of people and that's what they’re into.


   And I think there are lots of aspects to passwords, but the essential point is that you are proving you have a right to access something by what you know. Now, maybe by sending it over the wire to somebody to compare, or things like that, you can prove that.
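The comparison Diffie mentions is usually done server-side against a salted, deliberately slow hash, so the password itself is never stored. A minimal sketch (the function names and iteration count are illustrative choices of mine, not a vetted configuration):

```python
import hashlib
import hmac
import os

def enroll(password: str) -> tuple[bytes, bytes]:
    """Store only a random salt and a slow salted hash, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = enroll("correct horse battery staple")
assert check("correct horse battery staple", salt, digest)
assert not check("a wrong guess", salt, digest)
```

The salt defeats precomputed tables, and the iteration count makes each offline guess expensive, which is what makes "what you know" workable even if the verifier's database leaks.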


   But passwords are about the only thing that seems to me to have two properties. One is that they are the root of getting cryptographic security in what you want. The other is that they are maybe the only thing with any hope of a Fifth Amendment right not to tell it to somebody, though the government is working as hard as it can on coming down on that.


   And in the other direction, I lack confidence in biometrics, in the sense that I think something might suddenly be discovered that causes a huge failure. Of course, if you've got somebody in line at an airport and you are looking at their eyes, they probably can't do anything about that. But if you are trying to examine their irises while they are looking at their computers at home, maybe somebody can come in with an artificial eye and take over the computer. So, passwords' last stand.


   >> MONI NAOR:  I am also a fan of passwords. I don’t know if you’re a fan, but I don't see anything that will replace them.


   >> ZULFIKAR RAMZAN:  I think that's what I love about this panel. I mean, you kind of were keeping it real and going back to the foundational elements and the fundamentals. And I think when you look at design of any of these complex systems, there is a famous adage that if something cannot possibly go wrong, it will. And so, I think here we have learned that over and over again and I think we are going to see it again in the context of design and implementation.


   Maybe, Adi, I can turn it over to you. One of the biggest challenges we see with cryptography in any of these technologies is how you implement them and make sure that implementation is correct and whether that's intentional or otherwise. And I would love to get your perspective on that area and how you see that developing especially over the past year.


   >> ADI SHAMIR:  So, let's look at the issue of zero-day attacks. There was a report published by Microsoft which stated that the number of zero-day attacks has doubled since last year, so the numbers are increasing rather than decreasing in spite of all the effort.


   One of the areas which is most affected by such zero-days is the area of mobile security, and you know that this is a very hot battleground where some hacking companies which are trying to sell their products to governments and to law enforcement agencies are trying to find new zero-days, deploy them, and within weeks to months, they are discovered and blocked, and then they have to come up with new ones.


   One of the two leading companies in this area used to be Hacking Team, based in Italy. They went out of business in 2020, two years ago. The other big player in this field was NSO, which produced the Pegasus spyware. They used to be a billion-dollar company with sales of about $250 million per year. But the US government put them on its blacklist a few months ago, and according to the Financial Times, they haven't had a single sale since. So, they are in big trouble and might also fail.


   So, these two biggest companies seem to be out of business, but there is no vacuum in this field. There are other smaller companies which are just going to move to the front, in my opinion. The problem has not been solved in any way.


   >> ZULFIKAR RAMZAN:  So, I think what also comes up in this context is, you know, you have the typical implementation issues, but in cryptography there is also another fundamental issue lurking: if someone is able to build a quantum computer at scale, they could use it to compromise the security of a number of well-known algorithms, like the Diffie–Hellman algorithm and RSA in particular. I know that at least two of you on this stage have a certain dog in that race that you would like to see an outcome around.


   You know, Adi, tell us, where do you see the current state of quantum computing and how is that going to evolve? And, you know, if you are bold enough to make a prediction, I will put you on the spot. A lot of people want to know.


   >> ADI SHAMIR:  So, I am following this issue very, very closely. You know that for the last couple of years, the state of the art in building quantum computers was devices with between 50 and 70 qubits and no error correction. And these could be used essentially for publicity stunts, like claiming quantum supremacy, where Google used this kind of computation and claimed that no one else could have done it on a regular computer in less than an enormous amount of time.


   Now, at the end of 2021, IBM was due to show a device with 127 qubits, according to the white paper they published a few years ago. And indeed, in December 2021, the President of IBM came out claiming that they had constructed a quantum computer with 127 qubits.


   Since then, almost half a year has passed, and I have been unable to get any information about the status of this machine. They haven't published whether it's successful or unsuccessful, or what its capabilities are beyond that one number, the number of qubits.


   So, it's a big question. One could interpret it as if they have been very successful and are gearing up to make big announcements about its capabilities. If you want to be pessimistic, you can say they are running into trouble with this computer. I have no idea what the truth is. Everyone is very tight-lipped.


   Another big development was at Microsoft. Microsoft had basically bet the house in quantum computing on a technology using topological qubits, based on Majorana quasiparticles, and for about ten years they were unable to build even one such qubit. A few months ago, there was a mini breakthrough in which they could demonstrate the existence of Majorana quasiparticles, which wasn't certain up to that moment, but they still don't have their first real qubit. So, everyone is struggling in this field.


   >> ZULFIKAR RAMZAN:  Which I think is good news for both the RSA algorithm and Diffie–Hellman for now, so we are not quite out of the woods maybe in the long-term.


   There is a lot of work right now on developing algorithms that can resist quantum computation, so-called post-quantum algorithms. Where are we in that struggle? With RSA in particular, it took a good fifteen to twenty years from the time the algorithm was first invented until it became mainstream and accepted and people had confidence in it. How do you see that race playing out in the world of post-quantum algorithms?


   >> ADI SHAMIR:  So, as many in this audience know, NIST started a process of selecting post-quantum cryptosystems and signature schemes. The process started in 2017 and has gone through three major rounds of analysis.


   Initially, there were sixty-nine proposals put forward by researchers around the world. Now, in the third round, it's down to seven finalists: four public key encryption algorithms and three public key signature schemes.


   Now, an interesting development which happened just a month ago was an announcement by Israeli military cryptanalysts showing that three of the seven finalists actually have some security issues: their actual security dropped below the threshold dictated by NIST as the required security level for various key sizes and types of cryptosystems.


   The affected schemes are Kyber, Saber, and Dilithium; Dilithium is a signature scheme. The security dropped by various amounts, up to 2 to the 14th, which means the schemes are about 16,000 times faster to attack than predicted before.
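The arithmetic behind that "16,000 times" figure is just a difference of exponents (the 128-bit baseline below is my illustrative choice; the announced drops varied by scheme and parameter set):

```python
# A scheme claiming 2**128 work whose best attack improves to 2**114
# has become 2**14 times cheaper to break. (128 bits is an illustrative
# baseline, not a figure from the announcement itself.)
claimed_bits = 128
drop_bits = 14
speedup = 2 ** claimed_bits // 2 ** (claimed_bits - drop_bits)
assert speedup == 2 ** drop_bits == 16_384   # "about 16,000 times faster"
```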


   An interesting point I want to make is that, as far as I know, this is the first time that military cryptanalysts anywhere in the world have published a paper in which they improved on the best known attack against major cryptosystems which are about to be standardized.


   And the funny thing is that even though this paper was published on behalf of the Israeli military unit in charge of evaluating the security of cryptosystems, they couldn't publish it in the natural places, because arXiv, for example, does not allow the publication of anonymous papers, and this paper had no names associated with it. The cryptanalysts wanted to keep their anonymity, and arXiv refused to publish a paper without the authors' names.


   >> ZULFIKAR RAMZAN:  That's privacy for you. That's amazing.


   You know, I think this is interesting. We have hit almost all the buzzword bingo squares. We've talked about quantum computing, we have NFTs, we've got Bitcoin and blockchain. We have to complete the buzzword bingo by talking about machine learning, which we haven't covered yet.


   And machine learning, I think, and cybersecurity have an interesting intersection around areas like privacy preserving machine learning and adversarial machine learning. I know, Dawn, a lot of your research, some of which you have actually talked about on this stage before, has been very salient. Can you talk about some of your thoughts in this area and where you see things heading?


   >> DAWN SONG:  Great. Yeah, thanks. I'm really glad that you asked about these two machine learning related areas in the same question, the first one being privacy preserving machine learning, and the second one being adversarial machine learning.


   These are both really important topics; however, I have very different outlooks for each one.


   So first, privacy preserving machine learning. I'm really excited and optimistic about privacy preserving machine learning. Both in academic research and in industry, we have made huge advancements in this area, including algorithmic advancements, developing new cryptographic algorithms and optimizations for homomorphic encryption, so you can compute on encrypted data, and applying them in machine learning to enable privacy preserving machine learning.


   And a lot of these techniques have even been integrated into common machine learning frameworks such as TensorFlow and PyTorch. We are also seeing hardware acceleration for these cryptographic methods.


   At the same time, we are seeing advancements in another type of approach using secure hardware: essentially utilizing some trust assumptions in the hardware mechanism to enable even more practical secure computing that can support even larger scale privacy preserving computation.


   And I'm really excited that even Nvidia's later generations of GPUs will actually come with secure enclaves. I'm super excited about it.


   So, I actually have a prediction, which I talked about in my previous keynote: I predict that in ten years, secure computing and privacy preserving computation will become commonplace, secure enclaves will be prevalent, and hardware acceleration of these cryptographic algorithms will be very widely deployed as well.


   So, I'm really excited and really optimistic that in ten years, we will be at such a different place in privacy preserving machine learning which I think will be a fantastic thing for the whole world.


   And then separately, on adversarial machine learning, I'm actually, on the other hand, very pessimistic. Four years ago, in 2018 at RSA, I was on the Hugh Thompson show, presenting our work on adversarial examples and illustrating that these adversarial examples can even happen in the real world, with physical adversarial examples and so on.


   And actually, an artifact from some of our work has been exhibited at the Science Museum in London as well. That was kind of fun.


   And in the four years since then, there have continuously been hundreds of papers, even on the order of a thousand papers, published in the domain every year. However, as a field, we really have not made as much progress; compared to the first area I mentioned, it is very different. We still don't have a good general defense, and I would say we are very far from one, to the extent that I think in ten years, in twenty years, we will still be battling this challenge.


   I think Adi has also done some great work in this space and he can share his thoughts as well.


   >> ADI SHAMIR:  I have been working in this field for the last couple of years due to its close connection with security. It is clear to all practitioners in this field that deep neural networks are exceptionally fragile with respect to tiny changes.


   There is a classical example where you see an image of a cat, which is, of course, recognized by any deep neural network as a cat. Then you make an imperceptible change to the image, and this well-trained deep neural network, which is so good at recognizing cats, says that it is guacamole.


   Now, how can you mix up guacamole, which is usually green, with a gray cat? There should be a trivial way to separate them.


   And people have been struggling to find ways to stop these kinds of small adversarial perturbations from being effective against deep neural networks. Nicholas Carlini, for example, has had great fun breaking, again and again, most of the proposals made to protect against them.


   It seems to be ingrained in the model of deep neural networks, which work very differently from the human brain. Even a two-year-old shown a picture of a cat and a picture of guacamole will clearly separate them, and they will not be confused by imperceptible changes. But deep neural networks just look at all the given examples and build some strange theory about what it is about an image that makes it a cat, and it's not at all how we think about it.
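The fragility Shamir describes can be sketched with a linear stand-in for the network, using a gradient-sign perturbation in the style of the well-known fast gradient sign method (the classifier, seed, and numbers below are toy choices of mine, not the actual cat/guacamole model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                          # one weight per "pixel"

# Toy linear classifier: label = sign(w . x); +1 plays the role of "cat".
w = rng.standard_normal(n)
x = rng.standard_normal(n)
x += (1.0 - w @ x) * w / (w @ w)    # shift x to a small positive margin of 1.0
assert w @ x > 0                    # classified "cat"

# Gradient-sign perturbation: nudge every pixel by eps against the class.
eps = 0.01                          # imperceptible per-pixel change
x_adv = x - eps * np.sign(w)

# Each pixel moves by at most eps, but the score drops by eps * sum(|w|),
# roughly 80 here, dwarfing the margin of 1.0 -- the label flips.
assert np.max(np.abs(x_adv - x)) <= eps + 1e-12
assert w @ x_adv < 0                # now classified "guacamole"
```

The point of the sketch: a high-dimensional dot product accumulates ten thousand tiny coordinated changes into a large score shift, which is one intuition for why imperceptible perturbations can flip a classifier.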


   I published a model called the Dimpled Manifold Model which tries to explain this previously unexplainable behavior.


   So, why is this very important in security? Because in the last year, people have published many papers showing weaknesses in deep neural networks. For example, Lior Wolf from Tel Aviv University, a well-known researcher in computer vision, has shown that the three best known face recognition programs can be easily fooled by what he called master faces. A master face is an image which looks very ordinary; you can go and find images of those master faces. And they showed that ten normal-looking faces will be wrongly recognized by existing face recognition systems as being the same as 40% of the population on earth.


   So, take a random picture of a person and one of those ten particular master faces, and the deep neural network will say, yes, they are the same person.


   I am now working with a student of mine on the opposite problem, and we use this fragility of deep neural networks to show that by poisoning the training process, I can make myself invulnerable to surveillance by street cameras. So, I can walk down the street and the deep neural network will be so sensitive to tiny changes in my own face that when I walk past the first camera and then the second camera, and they look at me from slightly different directions, that will be enough for the system not to notice that it's the same person.


   So, people can avoid surveillance by street cameras if they can poison the training process in a very, very minute way.


   >> ZULFIKAR RAMZAN:  That's incredible. I think this is why we have to have these conversations, because these technologies sound incredible, you know, in the media, and they have all of this promise, but unless we look at them through a security lens, when they get widely deployed they can create far more problems than they solve, and I think it's important to have that discussion now.


   We can look at these problems through a technological lens. The other lens that people are looking at these problems through, especially related to privacy, is a policy lens. We've seen GDPR really take off in the last couple of years. Now, every time I go to a website, I have to click on which cookies I want to accept or not accept.


   I know, Dawn, you have spent a lot of time thinking about GDPR. You have looked at some studies. You know, where do you see GDPR? Where are we right now and where are we going to head?


   >> DAWN SONG:  That's a very good question. So, GDPR, of course, has made a huge impact in the world, like billions of dollars have been spent in this area, trying to get companies to be GDPR compliant and so on.


   In general, it's a good first step, but, however, then there is a question, given the billions of dollars spent, what is the ROI? What is the return on investment? How much has it really improved the world?


   So, recently we did a meta study. There have actually been hundreds of papers written in the GDPR space doing empirical studies of how the different aspects of GDPR have influenced the real world.


   We looked at all of these hundreds of papers to summarize them, and we have a number of findings. I won't have time to share everything, but at a high level: GDPR is also called consent-based privacy.


   But of course, everyone knows one of the outcomes: everyone has been trained to click, accepting the privacy policies, accepting the cookie policies, and so on. And some studies have shown, unsurprisingly, that you can put up any privacy policy you want, even one saying, oh, this is malicious, your data is going to be sold on the internet and so on, and users will still click accept. Right?


   So, this is very unfortunate. And there have also been studies showing that this type of policy even increases the digital divide. Studies have shown that users who are more familiar with technology and use the internet more have a higher likelihood of knowing about GDPR.


   Unfortunately, users who are not familiar with technology and the internet have a much lower likelihood of knowing about GDPR, of knowing that they have these rights and can exercise them.


   So, in general, again, GDPR is a very good first step, but I think as a next step, we definitely need to have better regulations to help improve privacy overall.


   There are a number of different aspects, I think. So, first is earlier we talked about privacy preserving technologies, like I'm really excited, I'm really optimistic about where we will be in five, ten years, and so on.


   However, even though these privacy regulations reference some privacy technologies like differential privacy, they weren't really well-informed about what the technology can do. What we really need is for regulations to help encourage the adoption and deployment of these privacy technologies, to really make a difference instead of relying only on consent-based methods.


   >> ZULFIKAR RAMZAN:  That’s a great example. There's a people problem, a technology problem, and a policy problem all coming together. I think we have already seen that across the board if you look at what has happened in the last couple of years of COVID and how we have had to adapt, doing things like work from home or adopting new policies around digital contact tracing.


   You know, I know, Whit, you spent a lot of time thinking about the security implications of broader trends. What do you see as some of the big implications around COVID especially with things like work from home and similar topics in that vein?


   >> WHITFIELD DIFFIE:  Well, I think it sort of transcends security. I have for a long time been close to something called the Center for International Security and Cooperation at Stanford, which has the best seminars I have ever attended. I tried to copy everything I could about them in seminars at Sun, and it didn't work.


   But one thing about these seminars is a certain informal sort of Chatham House Rules conversations among a whole bunch of people who are retired military and retired government mixed in with academics.


   And the first time we went online, I realized, of course, that had been lost. The informal sense of security was gone.


   But on the other hand, we were being amply repaid. Our speaker was from Romania. Previously, we could afford to have one or two speakers from the other side of the world every year or two, and now suddenly we have a seminar that's open to anybody of common interest anywhere in the world.


   So, I just think that's here to stay. I think the great frontier, incidentally, is going to be in developing very satisfying hybrid meetings, because people definitely do like to get together in person.


   But side by side with this are what I think of as horror stories about exam proctoring programs, you know, that claim you were cheating if you looked off into space while you were thinking, or something like that.


   So, I think the rights of home users may have to be defended by unions or something against employers who will find more and more ways of making inferences about the employee's actions at home and becoming more and more intrusive.


   >> ADI SHAMIR:  I want to mention something about privacy. There are two different things. One is real privacy and the other is perception of privacy. And this could be seen very clearly in contact tracing applications for COVID-19.


   Many of us were involved in trying to construct privacy preserving contact tracing apps. There was an international consortium of researchers which came up with very nice proposals, and yet, in spite of all the assurances, the public was very worried about downloading and using those applications. Even though these privacy preserving applications were designed not by governments but by academics from all over the world, people felt that somehow their privacy would be violated if they agreed.


   So, this led to what I would call the failure of contact tracing. A very small percentage of the population in various countries downloaded it. And because of the quadratic nature of the problem, you need to find contact between two people, both of whom have the application. So, if only 10% of the population downloads the app, the chance of finding all the COVID contacts becomes very, very small.
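The quadratic effect Shamir describes is easy to quantify: if each person installs the app independently with probability p, a given contact is covered only when both parties have it, i.e. with probability p squared (the adoption rates below are illustrative, not country data):

```python
# A contact between two people is detectable only if BOTH installed the
# app: probability p * p under independent adoption at rate p.
for p in (0.10, 0.30, 0.60, 0.90):
    print(f"adoption {p:.0%} -> contacts covered {p * p:.0%}")
# 10% adoption covers only 1% of contacts; even 60% covers just 36%.
```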


   So, if we want to be honest, it was issues of perceived privacy which doomed the effort to do contact tracing the way it should be. Not that we didn't have good privacy-preserving tools, but we couldn't convince the public that they were privacy preserving.
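   [Editor's note: the quadratic effect Shamir describes can be sketched numerically. The adoption figures below are illustrative, not from the panel; the point is only that a contact is detectable when *both* parties run the app, so coverage falls as the square of adoption.]

```python
# Illustrative sketch: if a fraction p of the population installs a
# contact-tracing app, a contact between two random people is detected
# only when both parties run the app, i.e. with probability p * p.
# This is the "quadratic nature" Shamir refers to.
for p in [0.10, 0.25, 0.50, 0.80]:
    print(f"adoption {p:.0%} -> contacts covered {p * p:.0%}")
# e.g. adoption 10% -> contacts covered 1%
```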


   >> ZULFIKAR RAMZAN:  Yeah. So, I think it's a great example where we see this impact of technology throughout our lives and how it's really impacted how we work, how we play, and how we live.


   Each of you has made phenomenal contributions in cryptography and related fields. How do you see technologies in these areas potentially transforming the world over the next twenty years? This is a question for the entire panel. I will start with you, Adi, and we’ll go all the way down.


  >> ADI SHAMIR:  The theme of this year's RSA Conference is transform. I want to talk about things that are not so transformable. If you look at predictions made about the nature of warfare, most military analysts say that the nature of warfare has changed dramatically, that wars will be won and lost based on cyberattacks and defenses.


   Now, we have a big war happening in Europe involving Russia, which is certainly world-class in cyberattacks, and everyone expected them, after, you know, preparing for a year for this conflict, to deliver a decisive blow to Ukraine.


   Guess what? I, and most everyone else, am underwhelmed by what happened. For example, the number one strategic goal of Russia right now is to stop the flow of heavy weapons from the west, from the western border of Ukraine, to the eastern border where the fighting is going on. And these weapons are transported by railways. Railways are highly computerized. And I would expect that if this is the number one strategic goal, the Russians would play havoc with all of the signaling and control systems of the Ukrainian rail system.


   But two days ago, when the Russians wanted to stop a big shipment, they used five high-precision cruise missiles, Kh-22s, to do it the old-fashioned way: to bomb railway junctions. So, it seems as if warfare hasn't transformed as we expected.


   >> ZULFIKAR RAMZAN:  Amazing. Whit?


   >> WHITFIELD DIFFIE:  Well, it seems to take the military a while to learn to use things.


   >> ADI SHAMIR:  The Russians have been planning this for a long time.


   >> WHITFIELD DIFFIE:  In the next war or two, they should catch on.


   >> DAWN SONG:  Yeah. I'm really excited about the future, the wide adoption of cryptography. What we just talked about, you know, with blockchain, cryptocurrency, and going into Web3, I think is really exciting.


   And the key here really is about self-sovereignty and also data sovereignty, which is a big area that we have been working in. So, with users maintaining more and more control of their own cryptographic keys, the foundation of how users can control different types of assets, it can be cryptocurrency and financial assets, and later on decentralized identity and also your data assets.


   So, earlier I didn't have time to talk about it, but when we talk about privacy-preserving machine learning and also GDPR, the next step is that we can treat these data as data assets that the data owners can control and gain benefit from, by actually combining blockchain and privacy computing technologies and so on.


   So, this can lead us to building a responsible data economy that I'm really excited about.


   >> ZULFIKAR RAMZAN:  That's incredible. Incredible. And Moni, last but not least.


   >> MONI NAOR:  So, one thing I would like to see is better dialogue between policy makers and people who are versed in making rigorous definitions, like cryptographers. Notions like the right to be forgotten, or singling out, progress on their definitions has been made by people like Kobbi Nissim and Alexandra Wood, and Shafi Goldwasser and her colleagues. But these things came after the legal and regulatory framework had been set. I think that the right process is a dialogue between the two communities.


   >> ZULFIKAR RAMZAN:  That's a beautiful closing sentiment because this is why RSA Conference was created, to foster dialogue among different people.


   So, I want to thank each of our panelists today for a phenomenal discussion. The highest-value session, I think, across the entire conference. And I hope you’ll all agree.


   Please join me in thanking our amazing panelists.

Dr. Zulfikar Ramzan


Chief Scientist and CEO, Aura Labs

Whitfield Diffie, ForMemRS


Honorary Fellow, Gonville and Caius College, Cambridge

Moni Naor


Professor of Computer Science, Weizmann Institute of Science

Adi Shamir


Borman Professor of Computer Science, The Weizmann Institute, Israel

Dawn Song


Professor of EECS, Director of Center on Responsible Decentralized Intelligence, UC Berkeley
