What Matters Most

Posted in Presentations

Join renowned expert Bruce Schneier as he challenges convention and explores the latest issues facing our industry. A thought-provoking introductory speech will be followed by extensive open mic time where attendees can ask questions likely to span the range of cloud computing, geopolitics, supply chain, privacy, national security, IoT, AI, and late-breaking cybersecurity issues.

Video Transcript

>> Please welcome Bruce Schneier.

(Music playing)

   >> BRUCE SCHNEIER: Hey, wow.  Hi, everybody.  Nice to see you all again.  It's kind of neat.  It's kind of a little scary.


   >> BRUCE SCHNEIER: So if you read the description of this talk, it was basically meaningless.


   >> BRUCE SCHNEIER: And that was on purpose.  So last year, I had to describe what I was going to talk about.  I had no idea what the world would be like, whether this would actually happen, when it would happen, so I didn't really know what I was going to say.  So thanks for coming, for kind of taking a flier on this. 

I ended up writing a book during the pandemic.  Delivered it to the publisher four days ago.  So it'll be out in January.  And I'm going to talk about some of that.  And I'm writing a book about hacking, but not about computer ‑‑ not about regular hacking, like we would define it, but generalizing the term to broader social systems.  And what I want to talk about here is what happens when AIs start hacking.

   So I think artificial intelligence will hack humanity unlike anything that's come before.  I think AIs will find vulnerabilities in all sorts of social, economic, and political systems, and then exploit them at really an unprecedented speed, scale, and scope.  And that won't just be a difference in degree; it'll be a difference in kind.  And it will culminate in AI systems hacking other AI systems, with us humans being little more than collateral damage.

   All right.  So that's a bit of hyperbole.  Probably my back cover copy.  But none of that requires any far-future science fiction technology.  I'm not postulating a singularity.  I'm not assuming intelligent androids.  I'm actually not even assuming evil intent on the part of anyone.  The hacks I think about don't even require major research breakthroughs in AI.  I mean, they'll improve as AI gets more sophisticated, but we can see shadows of them in operation today.  And the hacking will come naturally as AIs become more advanced at learning, understanding, and problem‑solving.

   And so first let me generalize the term "hacking," and this ends up being the core of my book.  So think about the tax code.  It's not computer code, but it's code.  Right?  It's a series of algorithms with inputs and outputs.  It's supposedly deterministic.  Right?  Those are the tax rules.  The tax code has vulnerabilities.  We call them tax loopholes.  The tax code has exploits.  We call them tax avoidance strategies.  And there's an entire industry of black hat hackers.  We call them tax accountants and tax attorneys.


   >> BRUCE SCHNEIER: So what is a hack?  So here's my definition.  Something that a system permits but is unanticipated and unwanted by its designers; or, alternatively, a clever, unintended exploitation of a system, which, one, subverts the rules of the system; two, at the expense of some other part of the system.

   All right.  So this is a subjective term.  It encompasses a notion of novelty and cleverness.  It's a subversion.  It's an exploitation.  It's unintended and unanticipated.  Hacks follow the rules of a system but subvert its goals or intent.  That's a computer hack, and that's also a tax code hack.

   And all systems of rules can be hacked.  You can find hacks in professional sports, in consumer reward programs, in financial systems, in politics, in lots of economic, political, and social systems, and even against our own cognitive functions. 

So a curved hockey stick is a hack, and we know the name of the hockey player who invented it.  Frequent flyer mileage runs are a hack.  The filibuster is a hack, invented in Roman times.  An old hack, but a hack.  Hedge funds, private equity, they're full of hacks.

   So even the best thought-out sets of rules will be incomplete or inconsistent.  They will have ambiguities.  They will have things the designers haven't thought of.  And as long as there are people who want to subvert the goals of a system, there will be hacks.  That's my postulate.

   So AIs are becoming hackers.  In 2016, DARPA held an AI capture-the-flag event.  Right?  You know this game.  The mainstay of hacker conferences around the world.  This one was done with AIs.  About 100 teams participated.  Lots of qualifying rounds.  Seven finalists faced off at DEF CON 2016, and they held it in public.  There was a stage.  And there were seven computers on the stage, and you would stare at them for, like, ten hours as they defended their own networks and attacked the others'.  An AI called Mayhem out of Carnegie Mellon won.  It is now a commercial product.

   Now, DARPA kind of weirdly never did that event again, but China has been hosting what it calls robot hacking games every year since.  We don't know a lot about what's happening there.  It's run by the military.  But presumably AIs are getting better. 

And, also, AIs are finding vulnerabilities in software.  I mean, they're not that good at it yet, but they're getting better.  We know how this goes; right?  The AIs will improve in capability every year, we humans stay about the same, and eventually the AIs surpass the humans.  There's a lot of ongoing research there.

   But the implications of this go far beyond computer networks, to vulnerabilities in the tax code, to vulnerabilities in financial regulations, to vulnerabilities in all sorts of systems.  And there are two different issues here.  The first is the obvious one, that an AI might be instructed to hack one of these systems.  And we can imagine some organization feeding an AI the world's tax codes or the world's financial regulations with the intent of it creating a bunch of profitable hacks.  That's one issue.

   The second is that an AI might naturally, albeit inadvertently, hack a system.  Both are dangerous, but the second I think is more dangerous, because we might never know what happened.  And this is because of the explainability problem, which I will now explain. 

So "Hitchhiker's Guide to the Galaxy," you remember the book?  There's a race of hyperintelligent pandimensional beings.  They build the universe's most powerful computer, Deep Thought, to answer the ultimate question of life, the universe, and everything.  And the answer is?

   >> AUDIENCE:  42.

   >> BRUCE SCHNEIER: And Deep Thought was unable to explain its answer or even tell you what the question was.  Right?  That's the explainability problem.  Modern AIs are essentially black boxes.  Data goes in one end, an answer comes out the other.  And it can be impossible to understand how the system reached its conclusion even if you're a programmer and look at the code. 

And AIs don't solve problems the way humans do.  Their limitations are different than ours.  They consider more possible solutions than we might.  More importantly, they look at more types of solutions.  They go down paths that we humans don't consider, basically paths more complex than the kinds of things humans generally keep in mind.

   So 2016, an AI program, AlphaGo, won a five‑game match against one of the world's best Go players.  This is actually something that shocked both the AI and the Go-playing worlds.  AlphaGo's most famous move was move 37 in game two.  And it's hard to explain without, like, going deep into Go strategy, but it's a move that no human would have ever made.

   In 2015, a research group fed an AI medical information from about 600,000 patients.  They were testing whether the system could predict diseases.  The result was actually a success.  Deep Patient, as it was called, was able to predict diseases very well, but it provides no explanation of how it reaches a diagnosis. 

And the researchers had no idea how it reaches its diagnoses.  So a doctor can either accept what Deep Patient says or ignore it, but can't query it for more info.

   Now, researchers are working on explainable AI.  And while there will be advances in the field, there seems to be some trade-off between capability and explainability: explanations are a cognitive shortcut used by humans, ideally suited to the way humans make decisions.  They don't really work for the way AIs now make decisions.  At least in the near term, AIs are becoming more opaque and even less explainable. 

All right.  So now I want to talk about reward hacking.  I said that AIs don't solve problems in the same way that people do, and they will invariably stumble upon solutions that we humans might just not have anticipated.  And some of them will subvert the intent of the system.  This is because AIs don't think in terms of implications, context, norms, values, sort of all of the things that we humans do naturally and take for granted. 

And this is reward hacking.  It involves an AI achieving a goal, but in a way its designers neither wanted nor intended.  And, actually, the examples are pretty great.  So there's a two-player soccer simulation where the AI realized that instead of kicking the ball into the goal, it could kick the ball out of bounds.  The other player had to throw the ball back in and leave the goal unattended.  There was a stacking simulation, where the AI realized that instead of stacking a block, it could flip the block upside down and get credit anyway, because the bottom was on the top. 

There was an evolution simulation, and the AI instead of doing things like growing more muscles or longer legs, it actually grew taller so it could fall over a finish line faster than anybody could run.
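Examples like these can be sketched in a few lines of code.  Below is a hypothetical, deliberately oversimplified version of the block-stacking case: the designer wants the block stacked on a platform, but the reward is mis-specified as the height reached by the block's original bottom face, minus an effort cost.  All the action names and numbers are invented for illustration; no real training environment works exactly like this.

```python
# A toy model of reward hacking, loosely based on the block-stacking
# example: the reward measures the height of the face that started out
# as the block's bottom, minus the effort the action costs.
ACTIONS = {
    # action: (height of the block's original bottom face, effort cost)
    "leave_on_floor":    (0.0, 0.0),
    "stack_on_platform": (3.0, 2.0),  # the intended solution
    "flip_in_place":     (2.0, 0.5),  # the hack: cheap, bottom ends up on top
}

def reward(action):
    height, effort = ACTIONS[action]
    return height - effort

# A reward-maximizing "agent" never stacks the block at all.
print(max(ACTIONS, key=reward))  # -> flip_in_place
```

The specification is satisfied and the goal is subverted, which is exactly the pattern of every reward-hacking story in this talk.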


   >> BRUCE SCHNEIER: So these are all hacks.  Right?  You can blame them on poorly specified goals or rewards, and you'd be correct.  You could point out that they all occurred in simulated environments, and you'd also be correct.  We'll talk about that later.  But the problem is more general.  AIs will inadvertently hack systems in ways you won't expect, all the time.

   I saw a story on Twitter of a researcher trying to teach his robotic vacuum cleaner not to bump into things.  Instead of learning not to bump into things, it learned to drive backwards, because there were no bumper sensors back there.


   >> BRUCE SCHNEIER: Right?  Any good AI will naturally find hacks.  If there are problems or inconsistencies or loopholes in the rules and if those properties lead to an acceptable solution as defined by the rules, the AIs will find them.  I mean, we humans would look at what the AI did and laugh and say, well, you know, well, technically it was correct, but you got it wrong.  We would know that it wasn't right.

   So we all learned about this problem as children.  This is the King Midas story.  After the god Dionysus grants Midas a wish, Midas wishes that everything he touches turns to gold, and he ends up miserable and starving when his food, drink, and daughter all turn to gold.  Right?  It's a specification problem.  Midas programmed the wrong goal into the system.

   We also know that genies are very precise about the wording of wishes, and they can be maliciously pedantic when granting them.  But here's the thing, there's no way to outsmart the genie.  Whatever you wish for, he will always be able to grant it in a way that you wish he hadn't.  The genie will always be able to hack your wish. 

And there's an important reason why this is true.  In human language and thought, goals and desires are always underspecified.  We never address all the issues.  We never include all the caveats and exceptions and provisos.  We never close off all the avenues for hacking.  We can't.  Any goal we specify will necessarily be incomplete.

   Now, this is largely okay in human interactions because people understand context, and people act in good faith.  Right?  We're all socialized.  And in becoming so, we kind of learn how to fill in the gaps.  So if I asked you to get me some coffee, you would probably go to the nearest coffee pot and pour me a cup, or maybe walk to the nearest Starbucks and buy me a cup.  You would not bring me a pound of raw beans.  You would not buy me a coffee plantation.  You also wouldn't look for the closest person holding a coffee, rip it out of their hands, and bring it to me.


   >> BRUCE SCHNEIER: Right?  I wouldn't have to specify any of that.  You would just know.  Right?  If I asked you to develop a technology that turned things to gold on touch, you wouldn't design it in a way that starved the person using it.  I wouldn't have to specify that.  You would just know.

   We can't completely specify goals to an AI, and AIs won't be able to completely understand context.

   So in 2015, Volkswagen was caught cheating on an emissions control test.  This is not an AI story.  This is human engineers programming a regular computer to cheat, but it illustrates the problem really well. 

So the engineers programmed the engine to detect emissions control testing and behave differently when being tested, and the cheat remained undetected for almost a decade, because it's hard to figure out what software is doing.
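The structure of that cheat is almost trivially simple.  Here is a hedged sketch of defeat-device logic; the function and signal names are invented for illustration, not Volkswagen's actual code, which of course ran in engine firmware.

```python
# A sketch of defeat-device logic: behave one way on the test stand,
# another way on the road.  Signal names here are illustrative only.
def emissions_mode(steering_angle_deg: float, matches_test_speed_profile: bool) -> str:
    # A stationary steering wheel plus a standardized speed/duration
    # profile is a strong hint that an emissions test is running.
    on_test_stand = steering_angle_deg == 0.0 and matches_test_speed_profile
    return "low_emissions" if on_test_stand else "high_performance"

print(emissions_mode(0.0, True))    # -> low_emissions (passes the test)
print(emissions_mode(15.0, False))  # -> high_performance (on the road)
```

The point of the example: each branch follows the rules it is given; the hack lives entirely in the condition.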

   So if I asked you to design a car's engine software to, one, maximize performance while, two, passing all emissions control tests, you wouldn't design the software to cheat without understanding that you were cheating. 

And that's just not true for an AI.  Right?  The AI will think out of the box because it doesn't have any conception of the box.  It won't understand that the Volkswagen solution is cheating.  Unless the programmers have explicitly specified the goal of not behaving differently while being tested, an AI could come up with the same hack.  The programmers are satisfied, the accountants are ecstatic, and because of the explainability problem, no one realizes what the AI did. 

And, yes, now that we know the Volkswagen story, we can explicitly set the goal to avoid that particular hack.  But there are other hacks that the programmers won't anticipate.  The lesson of the genie is that there will always be hacks that programmers won't anticipate.

   And the worry isn't limited to the obvious hacks.  The greater worry is hacks that aren't so obvious, the ones whose effects are so subtle that you're not going to notice them, like the Volkswagen hack.

   Now, we're already seeing the first generation of this.  I mean, a lot has been written and spoken about at this conference about recommendation engines and how they push people towards extreme content.  And they weren't programmed to do this.  The property naturally emerged.  They learned to push extreme content because that's what people respond to.  And this is important.  It doesn't take a bad actor to create a hack.  Here, a pretty basic automated system found one on its own.

   So nothing I'm saying here is news to AI researchers, and there are people working on ways to defend against goal and reward hacking.  One solution is that you try to teach AIs context.  The general term for this is value alignment.  How do we create AIs that mirror our values? 

You can think about the solution in terms of two extremes.  The first one is to explicitly specify what our values are.  I mean, good luck.  The other is to have AIs observe humans in action and extrapolate our values from that.  Good luck with that too.  But that's kind of where the research is going.

   All right.  So how realistic is anything I've said so far?  The answer is it depends.  The feasibility of any AI hacking depends a lot on the specific system being modeled.  For an AI to even start optimizing a problem, let alone finding a novel solution, all the rules of the environment have to be formalized in a way the computer can understand.  Then goals -- they're known in AI as objective functions -- need to be established, and the AI needs some feedback loop.  Right?  Some way for it to be told its performance so it can improve. 

Sometimes this is a trivial matter.  For a game of Go, this is easy.  Right?  The rules, the objective, the feedback -- did you win or lose -- all precisely specified, and there's nothing outside the system to complicate things.  And that's actually why most of the current examples of goal and reward hacking have come from simulated environments.  Right?  They're artificial and constrained, so AIs can work with them. 
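Those three ingredients -- formalized rules, an objective function, and a feedback loop -- can be shown with a deliberately trivial toy environment.  This is a number-guessing game invented purely for this sketch, not any real AI system:

```python
# The three prerequisites in miniature: (1) formalized rules -- guesses
# are integers, a hidden target is fixed; (2) an objective function;
# (3) a feedback loop that scores each attempt so the optimizer improves.
TARGET = 42  # hidden state of the toy environment

def objective(guess: int) -> int:
    return -abs(guess - TARGET)  # higher is better, 0 is perfect

def feedback_loop(start: int = 7, steps: int = 100) -> int:
    guess = start
    for _ in range(steps):
        # Propose nearby candidates and keep the best one (the feedback).
        best = max((guess + d for d in (-3, -1, 1, 3)), key=objective)
        if objective(best) <= objective(guess):
            break  # no candidate improves: converged
        guess = best
    return guess

print(feedback_loop())  # -> 42
```

For Go, all three pieces are just as precisely specified, which is exactly why games and simulations are where the hacking examples show up first.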

What matters is the amount of ambiguity in a system.  Right?  We can imagine feeding the world's tax laws into an AI because the tax code consists of formulas, but there's a lot of ambiguity in tax law, and that ambiguity is difficult to translate into code, which means that an AI has trouble dealing with it. 

Most human systems are even more ambiguous.  Right?  It's hard to imagine an AI coming up with a real‑world sports hack like curving a hockey stick.  It would have to understand not just the rules of the game, but the physiology of the players, the aerodynamics of the sticks and pucks, and everything else.  And that ambiguity ends up being a near‑term defense against AI hacking.  Right?  It'll be a long time before AIs are capable of modeling and simulating the ways that people work.  And before they're able to come up with novel ways to hack the legislative process.  I mean, you're not going to have AI sports hacks until we have androids actually playing the sport.

   I think probably the first place to look for AI hacking is in financial systems because those rules are designed to be algorithmically tractable.  And we can imagine equipping an AI with all the world's financial laws plus all the world's news and financial information and then giving it the goal of maximum profit.  I mean, my guess is that's not very far off, and the result is going to be a lot of novel hacks. 

But here's the thing about AI.  Advances are discontinuous and they're counterintuitive.  Things that seem to be easy end up being hard.  Things that seem hard end up being easy.  And we don't actually know until a breakthrough occurs.

   When I was a college student in the early '80s, we learned that the game of Go would never be solved by an AI because of its enormous complexity.  Not the rules of the game.  That's trivial.  But the number of possible moves.  And now a computer is a grandmaster.  Some of that was due to advances in the science of AI, but most of it was due to throwing more computers at the problem. 

So while a world filled with AI hackers is still science fiction, it's not stupid science fiction.  So I think it's worth talking about its implications now.

  So hacking is as old as humanity.  And this is a lot of what my book is about.  All right?  We are creative problem solvers.  We are loophole exploiters.  We manipulate systems to serve our interests.  We strive for influence and power and wealth, and hacking has always been a part of that. 

But still, no humans maximize their interests without constraint.  Right?  Even sociopaths are constrained by the complexity of society, their own (Inaudible) [00:21:24] impulses, their concern for reputation or punishment.  They have limited time.  I mean, all of these very human qualities limit hacking. 

Hacking changed as everything became computerized.  Because of their formalism and complexity, computers as a target are uniquely hackable. And, of course, you all know today everything is a computer.  But to date hacking has exclusively been a human activity.  Searching for new hacks requires expertise, time, creativity, and luck. 

When AIs start hacking, everything will change again because they're not going to be constrained in the same ways or have the same limits as people.  They will think like aliens.  And they're going to change hacking speed, scale, and scope. 

Now, speed is easy.  Computers are much faster than people.  A human creative process that might take months or years can be compressed to days, hours, or seconds.  But what happens when you feed an AI the entire U.S. tax code?  Will it figure out, without being told, that it's smart to incorporate in Delaware or to register your ship in Panama?  How many loopholes will it find that we don't know about?  Dozens?  Hundreds?  Thousands?  We actually have no idea.

   And it's not just speed, but scale.  Right?  Once AI systems start discovering hacks, they'll be able to exploit them at a scale we're not ready for.  And we're already seeing the shadows of this.  I'm going to mention AI‑generated text.  Right?  AI text generation bots already exist.  And soon I expect them to be replicated by the millions across social media.  Right?  Engaging on the issues around‑the‑clock, posting billions of messages, overwhelming human discourse.  All right?  What we see as boisterous political debate will be bots arguing as bots.  And they'll influence what we think is normal, what we think others think.  Right?  That's a scale change.

   And the increasing scope of AI systems also makes hacks more dangerous.  AIs are already making important decisions that affect our lives.  Decisions we used to believe were the exclusive purview of human decision makers.  AIs make bail and parole decisions.  They help decide who receives bank loans.  They help screen job candidates, college applicants, people who apply for government services.  All right? 

As AIs get more capable, society will cede more and more important decisions to them.  And that means hacks of those systems will become even more damaging.  And this won't subvert the power structure.  I think in general these hacks will be perpetrated by the powerful against us.  It's not that we are going to discover the hacks to the tax code; it's going to be the investment bankers. 

And while we do have societal systems that deal with hacks, they were developed when hackers were human and reflect the pace of human hackers.  We do not have any system of governance that can deal with hundreds, let alone thousands of newly discovered tax loopholes.  We won't be able to recover from an AI figuring out a bunch of unanticipated but legal hacks of financial systems.  At computer speed, scale, and scope, hacking becomes a problem that we as society can no longer manage with our current tools.

   All right.  So, finally, let's talk about defense.  When AIs are able to discover new software vulnerabilities -- so I'm back on computers now -- that'll be an incredible boon to hackers everywhere.  All right?  They'll be able to use those vulnerabilities, new zero days, to hack computer networks around the world.  That puts us all at risk. 

But that same technology can be used by the defense as well.  So you can imagine a software company deploying a vulnerability-finding AI on its own code.  It identifies all the vulnerabilities, and then it patches them.  Right?  It could patch all, or at least all of the automatically discoverable, vulnerabilities in its products before releasing them.  You could even imagine this being built into the software development tools.  It's part of the compiler.  It happens automatically.  We can imagine a future where software vulnerabilities are a thing of the past.  Kind of weird, but that's what would happen.

  Now, the transition period is dangerous.  All right?  The new stuff, the new code is secure.  The legacy code is all vulnerable.  So the attackers have an advantage in the old stuff.  Defenders have a strong advantage in the new stuff.  And over the long run, an AI technology that finds software vulnerabilities favors the defense.

   Now, it's the same when we turn to hacking broader social systems.  Sure, an AI might find hundreds of vulnerabilities in the existing tax code, but the same technology can be used to evaluate potential vulnerabilities in a new tax law or tax ruling. 

And you can imagine a bill being tested this way.  It could be a legislator, the press, a watchdog organization that takes the text of a bill and finds all the exploitable vulnerabilities.  It doesn't mean the tax loopholes get closed.  There's a lot of politics here.  But it does mean they become public.  They become part of the debate.  Now, again, the transition period is dangerous, but while AI hacking can be employed by both the offense and the defense, in the end it favors the defense.

   Now, ensuring that the defense prevails also requires building resilient governing structures.  We need to be able to quickly and effectively respond to hacks.  It actually does us no good if it takes years to update the tax code.  All right?  A legislative hack can quickly become so entrenched that it can't politically be patched.  Right?  Modern software we patch all the time, and we know why we have to do that.  We need to figure out how to have that same kind of agility in society's rules and laws.

   I think this is actually a hard problem of modern governance, and it's well beyond the scope of hacking, of this talk, and of everything I do.  And it's not really a substantially different problem from building governing structures that can operate at the speed and complexity of the information age, and that is something that I think we as a society have to solve. 

But in all of this, the overarching solution here is people.  If you think about it, what I've been describing is the interplay between humans and computer systems.  Right?  The risks inherent when computers start doing the work of humans.  And this is actually also a more general problem than AI hackers, and another one that technologists and futurists are writing about. 

And I think in general, while it's easy to let technology lead us into the future, like we've been doing in the past, we're much better off as a society if we decide as people what technology's role in our future should be.  And this is something we need to start figuring out now, before hacking starts taking over the world.

   So thank you.


   >> BRUCE SCHNEIER: All right.  So that book is coming out in January, which is exciting.  I'm going to take questions.  Supposedly there are mics.  I see a mic.  There's a mic there.  There are three mics I've been told.  I see one of them.  I see a second. 

All right.  So if people want to come for questions.  I will tell you a story.  This is like one of the first talks I've given post‑pandemic.  The first one, it's like a year and a half since I gave a public talk.  I have my notes, because I use notes.  I get on stage, I drop them down in front of me, I look down and say, wow, that type is small.


  >> BRUCE SCHNEIER: In the year and a half, my vision got worse, and I didn't notice it, so that was kind of a surprise.  But I use bigger fonts now and everything is okay.


   >> BRUCE SCHNEIER: So I will ‑‑ I'm going to start this way and go down the line.  Please.

   >> Hi, Professor.  You know, very inspiring, what you shared today, and I appreciate it.  I look forward to reading your book.  I just want to ask if you can touch a little bit on the relevance of encryption to what you just covered about AI and the future of security.  Like, you know, people talk about huge computers, very strong encryption.  Could you say what your thoughts are on the importance of encryption, given what you just told us about AI?  Thank you.

   >> BRUCE SCHNEIER: So encryption has nothing to do with this book.  Like I don't even mention the word at all, let alone cryptography or crypto, which, actually, blockchain is totally a hack of the banking system.  But I didn't use that as an example because, like, too much backstory to fill in. 

I mean, I still think that encryption is an essential tool for privacy, which is an essential property in modern society.  So that hasn't changed.  It doesn't happen to be what I'm writing about in this particular book at the moment.  So things haven't changed.  I just don't talk about it all the time because we've kind of said it all.  It always comes back again. 


   >> So your conjecture that people can't hack the same way that AIs can, isn't that a matter of ‑‑ isn't that a generality?  Because I've found people who can hack in the way that an AI ‑‑ that you conjecture an AI would.

   >> BRUCE SCHNEIER: But we don't know how an AI hacks.  So it comes from, you know, what is it, that rule of 7 plus or minus 2, that humans tend to keep a certain number of things in mind.  And AIs will not be constrained by human thought.  I mean, look at AlphaGo and AlphaZero.  If you're a Go player, look at some of the games.  AlphaGo plays like a freakin' alien.  It makes a move here, a move there, a move there, like no human ever would.  And then, like, slowly it coalesces into a strategy.  So it is a fundamentally nonhuman mode of thought in this extremely constrained environment.  And that's the model I think is more relevant.  So I don't think we can say that there are some people who hack the way AIs do, because we actually don't know how AIs hack yet.

   >> Well, so what I mean by that is I've encountered people who come to the same conclusions like the ones you demonstrate, and that's what I think ‑‑ and they can't explain how they got there either.

   >> BRUCE SCHNEIER: Okay.  So I'm not discounting intuition.  So, yes.  But ‑‑ so, again, so I think we humans will stay the same, as we have since, you know, 200,000 BC, and AIs will get better, like they always do.  So there will be a point where there'll be a difference.  I mean, you might be right today.  I'm not willing to bet that long term.

   >> Thank you.

   >> BRUCE SCHNEIER: There was somebody over there, but they must have moved over there.  So yes.

   >> The game of Go is interesting, but as you're talking, I'm wondering ‑‑ I'm thinking as an example, music.  If you fed an AI all the rules of music, western, eastern, and every other, and all of the history of jazz music, I would not expect an AI to come up with a Charlie Byrd solo.

   >> BRUCE SCHNEIER: So we're doing that.  There are music AIs.  You could feed them, like, all of Mozart, and they produce, like, new mediocre Mozart sonatas.


   >> Right.

   >> BRUCE SCHNEIER: And they produce new mediocre jazz.

   >> Right.  So the question is:  Where does creativity come into it, to the point where human creativity outdoes AI?

   >> BRUCE SCHNEIER: And we don't know.  So right now it does.  Right now the best of humans is better than the best of AIs.  The best of AIs is better than the worst of humans.  We are a pretty big bell curve.  A pretty wide bell curve.  And I don't know.  AIs get better.  You know, if we come back in 20 years, will we have AIs producing good Mozart sonatas or great Mozart sonatas?

   >> They get better, but they don't get more creative.

   >> BRUCE SCHNEIER: They ‑‑ I don't know that.

   >> Being able to ‑‑

   >> BRUCE SCHNEIER: I don't think we know that.

   >> Being able to hack isn't creativity.

   >> BRUCE SCHNEIER: Agreed.  Right.  Yeah.  No, I get it.  There's a video where you can watch an AI learning how to play the game of Breakout.  And you watch it learn.  It's, like, told nothing, and it eventually figures out the rules.  You can see the moment the AI figures out the strategy of having the ball bounce behind the back row.  If you've played Breakout, you know this.  Right?  It figures that out. 

That's the sort of thing I'm looking at.  You are right right now.  I'm not convinced you're going to be right for all time.  You might be, but I'm not buying it. 


   >> All right.  So our values as humans evolve over time; right?  What's not acceptable today might be acceptable tomorrow.  Right?  I'm trying to see how AI, with its agility and speed, could bring us to evolve faster.  And if it does, would that create more polarized --

   >> BRUCE SCHNEIER: Yeah, I don't know.  So in my book I make the case that hacking isn't necessarily bad.  Hacking is one of the ways systems evolve.  You know, the ‑‑ oh, I don't know, the trespass law is a hack of English common law.  I mean, there's a point where an old law is hacked to create this new right.  And that's a good thing. 

So there are lots of examples, especially when you have hacking as a way to overcome moribund bureaucratic laws.  And so lots of examples of that.  So it's not necessarily bad.  So, yes, hacking is a form of system evolution.  It is a way that systems evolve to changing circumstance.  So I think that could be positive. 

But we've got to separate the good hacks -- you know, the fiat currency is totally a hack.  Right?  It was hacked to finance a war in France -- from the bad ones, which is, like, Uber hacking every single employment regulation that a city has.



   >> Hi.  So you talked about AI hacking going forward and being able to figure out potentially what someone's medical disorder was.  And I'm wondering, does it take the step further of here is the solution to it?  And do you think that that could or should be applied for finding vulnerabilities in code?

   >> BRUCE SCHNEIER: I mean, it's all very context dependent.  In code, it's often easy.  If you see a buffer overflow, the solution is to, you know, fix the source code.  And we know how to do that pretty exactly.  For medical conditions, I think a lot of it is we know what the treatments are once we can name the disease.  For some things the solution is going to be hard.  There's no one answer to that that I can see.

   >> Do you think it will become necessary for us to, I guess, stay up to date with all of the hacks that AI could find in code?

   >> BRUCE SCHNEIER: Yeah, I don't know if we have to.  Someone has to; right?  But do we have to?  I mean, the nice thing about society is we outsource expertise to various experts, and we don't have to worry about it.  But I think we have to pay attention to what AIs are finding and how they're finding it. 

And you look at the explainability literature, and there are times when you don't care.  If an AI is figuring out where to, you know, drill for oil, if it's right most of the time, you don't care how it figured it out.  But if it's making parole decisions, it's actually part of your rights as a citizen to get an explanation.  And "the AI said so" is a violation of your rights.  So there it's not good enough. 

For medical, you know, if AIs are good at reading chest X‑rays -- and they turned out to be actually really good at reading chest X‑rays -- you know, if they're right, I don't care how they know.  If they're accurate, I'm happy.  But if it's deciding who gets into college, we don't even know what accurate means there, let alone be willing to trust an AI with fairness.  So it's all very context dependent.  And I think that actually makes this interesting.

   >> Thank you.


   >> Thank you.  I'm curious on your thoughts on this, which is we keep connecting systems together, systems that were often developed to be independent.  And what keeps me awake at night is the notion that we're going to start creating supersystems and feedback loops that nobody anticipated.  You know, a group of people connect A to B.  Another group of people connect B to C.  And then somebody connects C to A without anyone having overarching visibility.

   >> BRUCE SCHNEIER: And then it's SolarWinds, yeah.

   >> Right.


   >> So I'm curious on your thoughts about that and how the AI hacking fits into that.

   >> BRUCE SCHNEIER: So I think that actually is ‑‑ I think it is a problem.  When I think about sort of the major problems we have right now, that kind of connection with other systems is a big one.  And it's what you said, it's -- you know, it's suddenly interconnected DVRs and CCTV cameras have a vulnerability which allows someone to drop a domain name service, and that drops a bunch of popular websites. 

And remember the story -- was it 2016? There was a Vegas ‑‑ I think it was the Sands Casino, and they were hacked through ‑‑ God, I'm not making this up ‑‑ their Internet‑connected fish tank.  Right?  Someone connects the fish tank to the Internet, and suddenly their payments network is vulnerable. 

So, yes, that's ‑‑ I think it's a huge issue.  And, you know, how that ‑‑ I mean, I don't know if that directly is affected by AIs, but certainly that's the kind of thing that's exacerbated by hacking at scale.  So, yeah, I think that is something definitely to stay awake about. 

And we're doing ‑‑ we're just connecting systems without a lot of thinking, which means the vulnerabilities just cascade.  Because, remember, it's not just complexity that's bad; it's nonlinear, tightly coupled systems.  All right?  This is Charles Perrow.  I mean, those are the things you look at.  Complex systems that are linear and loosely coupled are fine, but nonlinear, tightly coupled systems, those are the worst. 


   >> Yes, Prof.  In the end, if you removed the metaphor, the AI is still the humans behind it.  All right?  So it's what the humans want to do that the AI is definitely doing.

   >> BRUCE SCHNEIER: So I think that's right.  I mean, all AIs ‑‑ and I think we have this problem in society.  Helen Nissenbaum writes about this, the many hands problem, that when a ‑‑ you know, a team of 100 developers create the software which does the bad thing, we don't know who to blame because there's a group of 100 people.  It's not a person. 

But, yes, right, all AIs will be programmed.  They're controlled by people.  When I think about these AI hackers, I think of, you know, Goldman Sachs running them.  Right?  And I think of it being a partnership.  It's not that the AIs find the hacks and make some money on their own.  All right?  The AI comes up with the hack, shows it to a human, and the human says, ah, it's not that good, or, that's a good one, I'm going to fix it this way, or, that was a great one.  Right? 

So I think you're right, you can't lose sight of the fact that AIs are not ‑‑ they're autonomous in tactics, but in strategy and organization and control, there are always humans behind them, and that is really important to remember.  So, yes, thank you for saying that.  I agree with you.

   >> So, Prof, if that is the case, why are we addressing the AI?  Why are we not addressing the human?

   >> BRUCE SCHNEIER: Because we live in the United States, and we just can't actually pass laws that do things that the money doesn't want.


   >> BRUCE SCHNEIER: I mean; right?  I mean that's not ‑‑

(Applause & Cheers)

   >> BRUCE SCHNEIER: That's not meant to be an applause line.  That's just reality.  We are basically ungovernable, and we just have to understand that. 

Yes.  Say something optimistic.


   >> Ah, ha‑ha.  I probably maybe have the answer to this one.  So it seems to me when you talk about, you know, putting AI against AI and eventually the defensive kind of AI will win out, doesn't that assume that all AIs are created equal, or is this really going to become a race between my AI is better than your AI?  And ‑‑

   >> BRUCE SCHNEIER: I don't know.  I tend not to agree with the, you know, U.S. versus China AI arms race framing that a lot of people use.  This is not ‑‑ it's not the '50s.  It's not the '60s.  This research happens mostly in the open.  It's not a matter of whoever has more surveillance data wins.  It's the type of data. 

So I think there is going to be these AI battles, but I don't think of it as this grand strategy.  And will all AIs be equal?  Will there be some kind of runaway AI that develops ‑‑ that improves itself and you get the singularity?  I'm really not ‑‑ I'm not making a (Inaudible) [00:43:34] of opinions on that.  I think it's very far off.  I really think in the very near term it's specialized AIs doing this kind of problem‑solving, like, you know, getting the Roomba not to bump into things, I mean, that level, or getting the AI text generation bot to produce Tweets that people re‑Tweet.  Right?  And the feedback is how many re‑Tweets you got.  And the AI just gets better at producing Tweets.  I mean, those are the sort of things I'm looking at.  Very specialized.  And, you know, AI is largely a marketing term.

   >> Hmm, agreed.

   >> BRUCE SCHNEIER: Right?  You know, the classic definition of AI is something that doesn't work.  So once it works, it's no longer AI.  It makes it depressing to be an AI researcher, but, you know, these are just, like, linear regression models.  These are not complex, right, androids, like on television, that are doing this.  Yeah. 


   >> Okay.  So following on the premise that AI ‑‑ or whatever these linear models might be ‑‑ are better at hacking our rules, when do you think we can expect the rule ‑‑ sorry ‑‑ the rules to be driven or built by models so they become unhackable?  What I mean is, tax law is computer generated, law is computer generated.  Everything synthetic.

   >> BRUCE SCHNEIER: Yeah.  You know, that's a good question; right?  You know, when do models produce the rules?  Now, I'm not a fan of ‑‑ well, you know I'm not a fan of blockchain, but I'm not a fan of those sorts of algorithmically tractable laws like Ethereum allows.  I mean, I think there is security value in ambiguity in human systems.  There's security value in systems with adjudication and arbitration and systems of governance ‑‑ these are not defects to be coded out of human systems, but they are valuable.  They're part of our security apparatus. 

So I don't know when you ‑‑ it's an interesting question.  And what I'm thinking about is: When do we as a society, cede power to an algorithm?  When do we decide ‑‑ so I'm going to give ‑‑ I'm going to give a parallel.  I think it's a really ‑‑ it's a little far afield, but I think it's really interesting. 

So you and I disagree who should be in power in government.  Now, we could get our armies together and fight it out, but instead we are going to create an algorithm to solve our problem.  We will have something called an election, and we will all vote, and then whoever gets the most votes gets to be in power even though one of us doesn't like that. 

Now, ignore what's happening in the U.S. right now.  Stay abstract with me.  All right?  That is a mechanism where society decided to cede control to an abstract system.  We let the system run.  What the system decides, we agree to. 

So when would we as society say, you know, this computer here, it's going to set the interest rates.  We all disagree what they should be.  Or the tax rates or the employment rate or, you know, or any of those, like, traditional levers that the government has over the economy. 

It seems to me at some point we would decide that instead of fighting politically, we're going to trust the system.  And ‑‑ but how that happens, I have no idea.  I didn't even begin to answer your question.  I know that.  You better ask it again.

   >> No.  It's interesting.  It seems that vulnerabilities ‑‑ or deficiencies ‑‑ in systems are what allow progress and things to move forward.

   >> BRUCE SCHNEIER: Yeah, I think that's right.  I think that's true that vulnerabilities in systems gives us the leeway to reinterpret things without going to war over it, and that's valuable.

   >> And when you employ AI to make the systems closer to perfect ‑‑

  >> BRUCE SCHNEIER: Then that would be ‑‑ it's more rigid, and the only way to change it is to overthrow the AI.  Yes, that's bad.

   >> Thank you.

   >> BRUCE SCHNEIER: No, welcome to my dystopia.



  >> Okay.  That leads into my question a little bit.

   >> BRUCE SCHNEIER: All right.  Okay.  It works.

   >> Do you foresee governments ceding control of weapons systems to AI, both weapons of mass destruction and conventional weapons?  And how would AI hacks potentially play a role in it?

   >> BRUCE SCHNEIER: So it depends at what level.  Right?  So a land mine is a weapon system where we've ceded control to an automatic process.  Right?  It's old.  So any of those kinds of traps.  You put a land mine, you don't know who you're going to get.  You don't know when you're going to get them.  The Aegis system of ship defense has a fully automatic mode.  You flip it, and it fires a wall of lead at whatever is in front of it.  Right?  Control is ceded. 

There are drone delivery systems which have automatic target finding.  Now, that's very tactical ceding of control.  It's not strategic.  But you can imagine us, you know, slowly making that bigger.  Right now, military doctrine in the U.S. is that there must be a human in the loop.  But I think a lot of that is illusory.  Right?  If the AI says, here's a target, push here if you want to fire the missile, and the military officer just pushes, they're kind of not.  They're in the loop, but not really. 

So this is something that I think needs a lot of thought.  And, you know, unfortunately, this is war.  You know, when the barbaric, horrible things start happening is when your side starts losing. 

So you can easily imagine a war where, you know, these things are ratcheted up out of desperation.  And I don't think that's good.  I mean, I'm not a fan of robot armies.  I think, you know, I think one of the ‑‑ one of the main human backstops to war is that, like, we die.  But if you have war where we have robots fighting in our name, it just changes the calculus of going to war. 

You see that now with the drone wars where the Air Force officers are at Nellis Air Force Base in Nevada, and the drones are over Afghanistan, and, you know, nobody is in harm's way, so it's easy.  And I think that changes the psychology, and that's not necessarily good. 

All right.  I am out of time.  I'm sorry.  But I have to get off stage.  I will be ‑‑ I'm going to go in the back, and I'll be out there.  So come and say hi.  And nice to see you all.  Glad you're here.


(Music playing)

Bruce Schneier


Security Technologist, Researcher, and Lecturer, Harvard Kennedy School
