Beware AI Landmines: Legal and Policy Considerations Revisited


Posted in Podcasts

In 2021, artificial intelligence emerged as a viable technology, warranting a conversation about the legal and policy considerations underlying modern society. We'll look back at the ethical, legal, and policy considerations discussed in May 2021 and ask: Where are we now? What more needs to be done to maximize successful implementation and minimize potential risk?

Podcast Transcript

Introduction:
You're listening to the RSA Conference Podcast where the world talks security.

Kacy Zurkus:
Hello listeners, and thank you for tuning in. We have a great podcast lined up for today, discussing ethics and AI with my guest, Behnam Dayanim, who delivered a session called AI Legal and Policy Considerations and Landmines to Avoid at our RSA Conference in 2021. Today, we're looking at what's changed or is continuing to evolve in AI legislation and policies. Before we do that, it's my pleasure to let you know that here at RSAC we host podcasts twice a month, and I encourage you to subscribe, rate, and review us on your preferred podcast app so that you can be notified when new tracks are posted. And now I'd like to ask Behnam to take a moment to introduce himself so that we can dive into today's topic. Behn.

Behnam Dayanim:
Great. Thank you so much, Kacy. It's great to be here. And as you indicated, I had the privilege of speaking about AI issues at the last RSA Conference, which was virtual last year, and I'm looking forward to addressing the issue again today with you, and then also in person at the RSA Conference in San Francisco in June. Very excited for that. I am an attorney with the law firm of Paul Hastings in Washington, DC, where I chair the privacy and cybersecurity practice as well as the advertising and gaming practice, and I also work extensively in FinTech and related regulatory areas, in all of which I have occasion to deal with artificial intelligence issues. So it's a great topic to discuss, and I'm looking forward to talking about it with you today.

Kacy Zurkus:
Yeah, I'm excited to talk to you about it too. And you are such a great supporter of RSA Conference, and we are definitely so appreciative of all that you do. I appreciated your session from 2021 and the layered definitions of AI that you provided. One was that AI systems may solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action, and that, in general, the more human-like the system is within the context of its tasks, the more it can be said to use artificial intelligence. That made things really relatable and understandable. You also asked the question, what is the big deal, as the lead-in to recognizing our increased reliance on AI in all aspects of life. We're nine months out, and I know that not much has changed, but certainly things are evolving. So I'm curious, Behn, if you can talk to our listeners about your opinion on the ways in which our reliance on AI continues to increase, and whether that is commensurate with our understanding of its impact on security and privacy.

Behnam Dayanim:
That's a great way, I think, to open the discussion, Kacy. Yes. In one sense, not a lot has changed in the last nine months, but in another sense, there has been substantial change simply by virtue of the increasing momentum behind the development of different applications that utilize aspects of AI. And as you indicated, there are different ways of understanding what AI means: a broader perspective that looks at AI as essentially a substitute for a human, and then a narrower way of looking at it, which is basically using software to address specific applications or achieve specific outcomes that are human-defined but that are accomplished in an automated fashion without human intervention. And in that narrow sense, industries across the board are continuing to progress and explore ways in which they can utilize those types of technologies. Just the other day, I had a conversation with a gaming client.

Behnam Dayanim:
A video game client that is looking to utilize AI through a range of marketing and other applications. Not something you ordinarily would think of, at least I wouldn't, when thinking about potential uses of AI, but it shows you how universal the interest is. At the same time, you have governments that also are continuing to exploit AI, or that are developing greater concern about other governments' use of AI. So you have both of those parallel streams. On the one hand, you've got the commercial stream, which is really moving ahead quite rapidly and looking at how better to utilize AI to achieve better ROI. And then you have the government side, where, from national security, law enforcement, and other perspectives, governments are evaluating it as well, and, as you might imagine, given the diversity of governmental structures and political systems around the world, coming to very different conclusions. So it is an exciting time. There is a lot going on. Our laws are not keeping up with that; we'll talk about that, I think, in a little bit, but AI itself is rapidly evolving.

Kacy Zurkus:
That's so interesting. I never would've thought about AI in marketing. That is a really cool marriage of sectors that I never would've considered. Can you talk about this equation between AI and human behavior? In your session last year, you used the self-driving car as an example to help us understand the process of creating the moral machine. You referenced MIT's crowdsourced ethics effort, and you asked the question, should we expect more from AI than we do from each other? I would love to start with that: what should our expectations of AI be?

Behnam Dayanim:
So I love the MIT Moral Machine and the self-driving car example. I mean, it's just the coolest example. It's such a great way of crystallizing the issues that AI can present in a very straightforward fashion. I did use it last year, and I'll probably use it again at the presentation in June, because it's just such a wonderful example. For those listeners who aren't familiar with it, and I encourage them all to visit the MIT Moral Machine site, it presents you with a number of scenarios where you have a self-driving car heading down the road. There are two lanes in the road, and each lane has some number of individuals, either people or animals, or both, in it. And you have the choice: do you want the car to continue in the lane in which it is driving and hit the individuals in that lane, or swerve to the other lane and hit the other individuals? And they mix it up, so you'll have older people, younger people, Nobel Prize laureates, convicted felons, children, dogs, cats.

Behnam Dayanim:
It's just really such a great set of scenarios, and you have to make very difficult choices. And the idea behind it, of course, is: well, you want to take the driver out of the car and put the car in control. How do you want the car to make those decisions? Somebody's got to program the car and make those value judgments for it. And it's not easy. Those are things that, when you're driving the car yourself, you just don't think about in an intellectual way, right? You're driving the car down the street in that moment, and you're making an instinctive choice that's informed by a large number of factors, including your own self-preservation, by the way. And so it's not something that I think many people have sat down and thought through: what is the right choice in each of these circumstances?

Behnam Dayanim:
And so that's why I asked that question: do we really want AI to be human? I mean, do we really want AI to make decisions the way a human would make them? I'm not sure that's the right answer. In fact, I don't think it is the right answer, because I know how I would make that decision. It would not be rational at all. And it may or may not be the right decision, but it wouldn't be because I had engaged in a deliberative process to arrive at it. So we really have to think about that. That's the moral question, right? How do we think about AI versus humanity in the moral sense? There's also just the pure question of efficacy. And again, using self-driving cars, I think, is very helpful in clarifying that. Human drivers are not great. We have high rates of accidents. People get tickets all the time. I don't know about you, but those speed cameras get me regularly, right?

Behnam Dayanim:
So when we think about how effective or how desirable a self-driving car would be, what are we measuring it against? Are we measuring it against perfection, or are we measuring it against current reality? And that same question applies across the board, not just in the vehicular context, but...

Behnam Dayanim:
Take job applications, which have attracted a lot of attention in recent years, because tools have been developed to screen interviewees through the initial round through the use of AI, right? And there's been a lot of concern. In fact, some states, I think I talked about this last year, have passed laws that regulate the use of AI when it comes to job interviews to try to protect against discrimination. A very important objective, a very important societal goal. We want to eliminate, as much as possible, practices that discriminate. However, people do discriminate. And so, when deciding whether a particular AI is an acceptable substitute for a human interviewer, what standard are we trying to hold it to?

Behnam Dayanim:
So those are really hard questions. There are no easy answers there, but I think we need to be honest in asking ourselves those questions and not contrast an AI solution against an unattainable perfection that doesn't exist today.

Kacy Zurkus:
Right, right. I've definitely heard about drivers like you who get those tickets, but I haven't experienced that myself.

Behnam Dayanim:
I'm sure. You live in Massachusetts. I think you said Massachusetts drivers are great. I've had personal experience.

Kacy Zurkus:
Right, right. Infamously so, yes.

Behnam Dayanim:
Yes.

Kacy Zurkus:
Behn, you walked attendees through a series of different guidelines, principles, and even proposed legislation, including the FTC's guidance on AI. Where are we now with legislation, and where are we going?

Behnam Dayanim:
So we're not really that much further along now than we were nine months ago. There have been a lot of bills introduced at both the state and federal level, dozens and dozens, but in terms of laws that have passed, I think there may have been a handful of states that have passed laws, but not very many. And at the federal level there really has not been much legislative activity at all. There was one bill that passed into law that requires the Office of Management and Budget to establish AI training materials for the federal workforce, to help different federal agencies better evaluate the risks that are attendant to the use of AI. But that's really about it, I believe, in terms of legislation. There are some other bills pending.

Behnam Dayanim:
There's a bill that would establish an AI hygiene working group, which would direct the federal government to consider the risks associated with AI when acquiring AI as part of the federal contracting process, things like that. In fact, Senators Gary Peters and Rob Portman have been leaders on those kinds of issues. There have been a number of measures, I should say, that have passed that promote AI, or that are designed to study the use of AI or to study foreign actors', particularly China's, use of AI, but in terms of the issues that we're talking about, there really hasn't been a lot from a legislative perspective.

Behnam Dayanim:
However, the National Institute of Standards and Technology has been directed to develop a framework for thinking about AI and the risks that it presents, and that actually has the potential to be quite consequential. I don't know exactly when they're expected to produce their draft framework. They put out a request for comments back in July of last year, I believe, but I don't know what their timeframe is for actually releasing something. But that, I think, will be the next significant regulatory development, even though it's not technically a regulation but an administrative development, I would say, at the federal level. And it'll be interesting to see what that looks like and how it unfolds.

Kacy Zurkus:
Yeah. It is really interesting to see the way that regulation and legislation come in response to a technology's development, rather than with forethought about what's coming down the pike, so that when we develop these technologies, we know the answer to these moral and ethical questions of how we want them to impact our lives. Right? You did mention the NIST framework, and it's interesting. We had another podcast this month, our theme is focused on AI and ML, and we had Dan Townsend from the MITRE Corporation, who was talking about their new ARC framework for artificial intelligence in terms of the implementation and development of smart cities. Are there any frameworks other than NIST and ARC that are important for developers to be using if they're thinking about including AI-enabled technologies in their products?

Behnam Dayanim:
Yes. So there are a number of self-regulatory types of frameworks, or academic frameworks, but I think the one that is the most significant is the European Union's framework, which actually is a regulation, and which was released in April of last year, shortly after RSA last year. That framework does set out some fairly detailed guidance on how AI can be deployed, and because the regulation will apply whenever the AI system is used in connection with a European Union resident, it will impact US companies that operate internationally, much as GDPR does in the privacy area. And I think that's a really important framework. What the framework does is establish, first, a very short list of prohibited practices or prohibited uses for AI systems. And those are, for the most part, I think, things that one would expect to be prohibited.

Behnam Dayanim:
So the first is the use of AI to deploy subliminal techniques to distort a person's behavior. We've all seen, I think, movies where subliminal messages are sent through television or other technologies in very ominous and forbidding ways, and so I think we can all relate to why that would be a prohibited use. A second prohibited use is using AI to target human vulnerabilities, again causing people to distort their behavior in a way that would cause them harm. You know, something that preys on the elderly or preys on those with mental or intellectual disabilities, things like that. Again, very intuitively understandable as something that would be prohibited.

Behnam Dayanim:
The third and the fourth, I think, are quite interesting. The third prohibits the use of AI as a measure for evaluating the general trustworthiness of a person based on their social behavior.

Kacy Zurkus:
Mm hmm.

Behnam Dayanim:
With the assignment of a social score that would then lead to better or worse treatment for that person or for groups of people. That to me clearly seems targeted at what's been widely reported to be happening in China, where the Chinese government uses the social... I forget what they call it, but something about a social personality score, based on observations of individual citizens. And then those citizens receive greater or fewer privileges based upon that score. So the EU framework says you can't use AI for that. And then the fourth prohibited use is the use of real-time, remote biometric identification systems in public spaces for law enforcement purposes, except insofar as the use is strictly necessary. So the exception may end up swallowing the rule; we'll have to see. But that obviously has been a topic of a lot of controversy here in the States, too, with some municipalities and others saying we don't want our law enforcement agencies using biometrics in this way. So that's interesting. And those are the prohibited areas.

Behnam Dayanim:
And then the regulation sets out a long list of high-risk AI systems. High-risk AI systems really are the focus of the regulation, and if you are working on a high-risk system, then you have to comply with a number of safeguards to ensure that your AI system complies with the fundamental principles that the EU views as important. Those are things like systemic and continual risk management and analysis of data to ensure the system is operating the way it's supposed to operate; data quality and data governance; record keeping; transparency as to how the AI system operates and what its objectives are, so that third parties, including the subjects of the AI system, the consumers or employees, can understand what it's supposed to be doing; human oversight, with a right to appeal adverse decisions; cybersecurity; things like that.

Behnam Dayanim:
Again, a lot of those are elements that you see in other frameworks, but they are now laid out for the first time in a binding regulation. And then of course there are low-risk AI systems, which really are not subject to very much control at all. But the bulk of what people are really interested in and concerned about would be these high-risk systems. And the teeth behind the regulation are that a violation of the framework can result in significant penalties, including fines of up to 6% of worldwide annual revenues. So again, it's very much like GDPR in the way that they are approaching this. And so I do think developers at companies of any size with any international scope that touches on Europe need to be aware of it and be building to comply with it.

Kacy Zurkus:
Yeah. And I know that one of the other concerns related to AI, which you touched upon in your talk and have even alluded to in this conversation, is the ability to combat bias in AI and ensure accountability, but that it's mostly ethics that is driving the accountability rather than legislation. So I'm wondering if this has evolved at all under our new administration here in the US, and in what ways? Or, if it hasn't, in what ways does it need to?

Behnam Dayanim:
It's true that it's ethics that is driving accountability in AI, if you focus on AI-specific regulation, because there isn't really very much, right? And so from that perspective, yes, it's ethics-driven. But when you talk about bias, there are general prohibitions on discrimination, and if you violate those prohibitions, whether you're doing it directly or through AI, you can be held responsible. So we have not seen any federal initiatives, of which I am aware anyway, on this front specifically, but that doesn't mean that you are free to be biased or free to discriminate through AI. It just means that general principles would apply. And as I think I mentioned in my talk last year, federal regulators have emphasized that point and said essentially what I just said: you can't discriminate simply because you're using AI.

Behnam Dayanim:
At the state level, there have been some state laws. A few states, I believe Illinois is one, and I'm sorry, I don't remember the others off the top of my head, have passed legislation dealing with discrimination in AI in the context of employees and job applicants. But we have not seen a lot of federal movement there yet, and we haven't seen any binding regulations that are AI-specific. We've just seen things like the FTC's guidance that I talked about last year, which makes the point that general principles still apply.

Behnam Dayanim:
You know, a good analogy is the crypto world, where you'll often hear people, less so now than a couple of years ago, but even now, every once in a while, make the contention that general laws don't apply to crypto. So even though X may be illegal if they were using dollars, it's not illegal with crypto, which is totally not true, right? The same laws apply. You can't escape the laws by moving into crypto. Similarly here, you can't escape the general laws by moving into AI. You know, if it's illegal for you to discriminate against someone in person, you can't do it through a machine.

Kacy Zurkus:
Yeah. It's such a fascinating conversation to me, and with every response you're giving, I keep going back to this question of what's the big deal, and do we want AI to be human, or even better than human, right? And this question of whether that is even possible to achieve, because while there are outright laws against discrimination, discrimination can also be so very subtle, and bias can be so very subtle, right? And so how do we detect that? And even more so, what are some of the security and privacy implications of using AI-enabled technology?

Behnam Dayanim:
Yeah. This is really an interesting aspect of AI, and I think in some ways it is among the most significant. It's because one thing that AI does is enable a larger pool of data to be useful, right? There's so much information about us that's out there, and part of what has made us relatively safe, relatively private (how private we really are may be a matter of debate), is that it's not all usable in some ways, right?

Behnam Dayanim:
If you think about the way things were 50 years ago, we all had our information in file cabinets, and the file cabinets were distributed across who knows how many different institutions: companies, universities, government agencies, former high schools, doctors' offices, all out there. If someone could grab hold of all of it, it was comprehensive and could paint quite an interesting picture of us, but it was on paper and in a bunch of different file cabinets, and so nobody worried about it, because nobody was going to be able to go around and pluck those folders out of all those file cabinets and then digest all that information and do something with it that was harmful to us.

Behnam Dayanim:
Well, that hasn't been true for a while, and with AI, it's becoming even less true. The software has the ability to digest and draw inferences from such large volumes of data that it does raise potential concerns about individuals' privacy in ways that really haven't been as directly presented previously. And so I do think that is the most significant privacy-related implication of AI. And then, of course, the related security piece of that is: if all the data is centralized in one place, then all the data is centralized in one place. That creates potential security issues. And that's something, again, I think we need to think about, but I also would, not to sound repetitive, make the point I made a moment ago: the same laws apply, right? GDPR applies to AI. CCPA, the California Consumer Privacy Act, and CPRA, which will take effect next year, apply to AI.

Behnam Dayanim:
You can't circumvent those obligations by delegating them to a machine. And I think that's really important to keep in mind as legislators and regulators think about whether new legislation or new regulation is necessary to deal with AI. They should remember that we already have legislation and regulation. And my suggestion is that, generally, you should look to regulate or legislate the underlying issue, discrimination, privacy, security, rather than the medium by which that issue is raised, which in this context is AI. Because if you try to address it through the medium rather than through the objective, you'll find that your legislative or regulatory approach is outdated the day after it's released. That's a problem we often have with our legislative and regulatory structures, and it's one, I think, that we can help to avoid if we focus on the objective.

Kacy Zurkus:
Right, yeah. I had an old colleague from when I was a teacher, and she used to say, we need fewer rules; the more rules you have, the more opportunities there are to break them. Right? So have that underlying rule that is applicable in a variety of situations, and then you don't have to worry about too many rules being broken. Behn, thank you so much for joining us. This has been such a great conversation. I love talking about ethics and morality and their intersection with technology, so this has been great for me.

Kacy Zurkus:
Listeners, to find products and solutions related to artificial intelligence, we invite you to visit RSAConference.com/marketplace. Here you'll find an entire ecosystem of cybersecurity vendors and service providers who can assist you with your specific needs. Please keep the conversation going on your social channels using the hashtag RSAC, and be sure to visit RSAConference.com for new content posted year-round. Behn, thanks for joining us.

Behnam Dayanim:
It was my pleasure and I hope to see many of the listeners in San Francisco in June.

Kacy Zurkus:
Be well, everyone.


Participants
Behnam Dayanim

Partner, Global Chair of Privacy & Cybersecurity Practice and Chair, Advertising & Gaming Practice, Paul Hastings LLP

Machine Learning & Artificial Intelligence

artificial intelligence & machine learning, data security, ethics, governance risk & compliance, law, privacy

