When one thinks of industries targeted by malicious actors, the music recording and performance industries don’t always spring to mind. But the music industry has been forced to grapple with evolving technology just like any other business. This ensemble of leaders from the music industry, law enforcement, and academia will discuss artificial intelligence and cyber-enabled threats to the music industry.
Video Transcript
>> ANNOUNCER: Please welcome panel moderator, Herb Stapleton.
>> HERB STAPLETON: Well, good afternoon RSA. Thank you so much for having us here. It is really a privilege to be here with you today for our panel on Face the Music: AI and Cyberthreats to the Music Industry.
So, I just want to start off by giving a brief opening comment. Technology has obviously opened up the world of music to millions of listeners all over the world. And along with that, of course, comes new exposure to risk for the industry, whether it's music piracy or ticket fraud. And what we look at today is synthetic content and artificial intelligence that threatens artists' intellectual property but also opens up a whole new threat world that didn't exist just a few short years ago.
I think if you had asked this panel at this time last year whether we would be sitting on this stage, none of us probably would have anticipated it. And I think that's the theme you're really going to hear a lot about today: how do we anticipate what's going to happen next, and is that even possible?
And so, I have with me today a panel of true experts in their respective fields. I just want to take a brief moment to introduce all of them to you.
So, first Katherine Forrest is one of the nation's foremost advisors on legal issues related to technology, which includes emerging areas like AI and Big Data.
Hany Farid, down on the far end, is a professor at the University of California at Berkeley. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.
And then last but not least, a Kentucky-born, eight-time Grammy, fifteen-time CMA, and ten-time ACM award winner, and one of the country's most respected and beloved musicians, most recently named CMA Male Vocalist of the Year for the sixth time, setting a record for that particular award, and, really, I think his greatest accomplishment being my favorite brother: Chris Stapleton.
>> CHRIS STAPLETON: I'm also his only brother.
>> HERB STAPLETON: So Chris, let's jump right into it. And I'll start off a little bit with you. When I was a young FBI agent and you were probably pretty new in the music business, the threats to the music industry were along the lines of the rising Internet age and peer-to-peer sharing of MP3s and things like that, and there were some concerns, obviously, that came along with that technological advancement.
So, can you talk a little bit about how the industry responded to that historically?
>> CHRIS STAPLETON: Well, they were slow. Extremely slow. And they tried to shut it down and tried to make it go away. I don't know, that's your perspective too, and I think that was a mistake. I think, probably in hindsight, all the things that they tried to shut down are now the delivery systems for music for the most part. So, as we're moving into this AI thing, there has got to be a different approach.
>> HERB STAPLETON: I think it's interesting; law enforcement also responded to that particular threat. But what we learned was that trying to regulate that from a criminal perspective was also not efficient, right? Despite some efforts to throw up a splash page or shut down a particular type of website that maybe was technically engaged in illegal conduct, it wasn't really effective. And honestly, the development of the technology really overwhelmed the efforts to regulate it in a lot of ways.
Katherine, I'm kind of interested in your perspective about changing legislative climates. Looking back historically and maybe even bringing us to the present, how has that climate evolved?
>> KATHERINE FORREST: Yeah, you know, it is really interesting because we are in a unique moment right now with this technology, with generative AI, where I say we are at the beginning of the beginning of the beginning. And in three years, if you ask me that same question, we are going to have a whole different set of discussions.
But over time, the law – the common law – has really been able to evolve to take on new challenges. I know it was slow, but eventually it got there. Originally there was a whole issue as to whether or not music that was being peer-to-peer file shared was in fact copyrightable – whether it was fixed in a tangible medium of expression. The law eventually got itself around to the point where it said yes, it was copyrightable and it was copyright infringement. And then models were developed that were appropriate licensing models.
Here, the issue we have is one of speed, because we don't have a lot of time. We don't have the years that we took with the Internet to figure this out through cases that move slowly through the system. And with the legislature, we don't have a lot of time either.
So, we have got to speed up what has been the traditional process for dealing with innovative new technologies, take it faster and further, and anticipate where we're going to be in just a few years.
So, what I would say is we do have the ability to evolve. The common law has demonstrated that to us. But we don't have the luxury of time with this one. And that I think is the big difference. So, whether it is going to be through a legislative process, through a judicial process, through a synergistic combination of the two, we have got to do it and we have to do it now.
>> HERB STAPLETON: So, it's interesting when you talk about how we've evolved. I think that brings us to an interesting question about the current risk and threat landscape. When we look back at the threats that faced us five, ten, fifteen years ago, we're talking about technology that's now on everybody's phone, every device, everywhere. And to your point, it moved not at the rate of speed that we're seeing AI move at. So from a complexity standpoint and a speed standpoint, we're really playing an entirely different ball game.
So, Hany, I am interested; could you give us sort of an overlay of synthetic content and how widespread it is on today's Internet?
>> HANY FARID: Yeah. So, we should start by sort of thinking about what is generative AI? We’re using that term a lot. And it is everything from ChatGPT, which I think – I haven't had a conversation in the last two months where that hasn't come up.
Midjourney, Stable Diffusion, DALL-E, which are text-to-image synthesis, deep fake videos, and of course, audio synthesis. And what generative AI is, is a system for generating text, audio, images, or video that is wholly controlled by a computer.
That's actually only sort of true, because underneath that is a lot of human-generated content. So, in many ways I think what we're seeing is not an AI revolution. It is a data revolution. You started off by asking Chris about the early days of the Internet. The reason we are here today is because of that. Because we have been uploading text and audio and images and videos for the last twenty years, and at some point we realized that we have so much content that the systems themselves can now regenerate what we thought was a uniquely human creative process, because they have seen so many, so many different examples of it.
And don't get me wrong; there are algorithmic and mathematical insights that have happened that have allowed this. There is of course massive computing power that has allowed it.
Now to your question, where is it? It is everywhere. It is everywhere. And it is spreading very fast. We were talking backstage that we are going to look at the last twenty years and wonder how it went so slowly compared to where we are now. And it felt like it was fast but I don't think it was.
So, you can go right now to DALL-E, you can go to Midjourney, you can download Stable Diffusion on your computer. You can go to – I am not going to give the name of it because I don't want people to use it – voice cloning software. There's code you can download to generate fake video. And we are seeing it everywhere.
And what’s so interesting to me about this technology is that it is fully democratized, that you don’t need a lot of technical skills to use this and I think that’s sort of that interesting threat that we're facing.
>> HERB STAPLETON: So, let's dive into how that ubiquitous availability of the technology, and the low barriers to entry to using it, really affect the industry that we're primarily here to talk about today, the music industry.
Chris, I have a little bit of insight into the amount of yourself that you pour into the creative process, whether it is writing songs, preparing performances, rehearsing those performances, or getting up on stage and performing. So, based on what Hany said, I'm interested in your perspective: you may put so much of yourself into the creative process, and yet a person can do something – maybe not the same, but similar – just using a tool on the Internet. How does that affect the way you see the creative process?
>> CHRIS STAPLETON: For me personally, I don't see it changing how I approach what I do until somehow it becomes antiquated. But I don’t know. That’s difficult to say. I think it’s hard for me to say because it hasn't been something that's crept into what I do for me personally. Maybe it is for Drake, obviously, and some of the things that have happened in the last week. But it hasn't happened to me in a way that I have to really look at and approach it. And I don't see that even if it did, that I would change how I approach things any differently than what I do now.
I use, you know, technology from the '60s to do a lot of the things that I do. So those worlds don't necessarily integrate.
But yeah, I don't know. We'll just have to see. I don't know how that affects that.
>> HERB STAPLETON: Is this similar in some ways, in your mind, to dealing with the bootleg merchandise problem – people making knockoffs of merch that you produce? How is it similar, and how is it different from that kind of problem?
>> CHRIS STAPLETON: It's similar – you know, granted, as we've discussed, it hasn't happened to me, but it will. It's going to happen, and there is not anything I can do about it. We can't eliminate the technology. It is out of the box. We can't put it back in the box. A lot of musicians would like to, I'm sure. But maybe there's a thing where we can integrate with it, but also we can make people aware that there are fake versions of ourselves, and the legislators can help by making sure that these creators have to say that these are fake versions of things, or in the style of, or something like that.
You know, I think there is a synergy that can happen or certainly a – we can all play nice together if we want to. Or we can set the whole thing on fire. I don't know. It is really up to a lot of people, a lot of moving parts and compromises.
>> HERB STAPLETON: So, Hany, Chris is an expert in the music industry but this doesn't just apply to the music industry, right? How does it look across the landscape of other creators?
>> HANY FARID: So, I teach at UC Berkeley, as you mentioned, across the bay here, and I can tell you that every single student who is graduating is absolutely petrified of the impact that AI is going to have on their careers, on their very young careers.
So, where are you seeing the disruptions? Absolutely in the photojournalism space, in the graphic arts space. You can now type anything you want, limited only by your imagination, and it will synthesize an image for you. So, the Getty Images of the world, for example – photographers, graphic artists.
I think the Hollywood studios should be getting very nervous, as well as the actors and actresses, because we are soon going to have fully synthetic performers that will not have to get paid multimillion-dollar salaries.
You are also seeing the disruption in the computer science world and computer programming. If you haven't tried it, you can go to ChatGPT and ask it to write code for you. And I can tell you – I teach intro to computer science – there isn't a single homework or exam problem that it cannot solve absolutely perfectly. That's amazing.
It is impacting the legal field. It is impacting the medical field. It is impacting just about everything. And I think you are right that this is the early days of the early days of the early days. And I think that something interesting is coming. I think Chris is right; there is no putting this back. And I think you were right that we had better start dealing with this soon.
>> HERB STAPLETON: And this is really for any of the three of you in your own experience, but is there any particular industry that you think is leaning into this challenge more? Is it more prevalent in the movie industry? Is it more prevalent maybe in the legal field, or in your field as an academic?
>> CHRIS STAPLETON: I had dinner with an actor friend of mine, and he said they were filming, and alongside the screen on which they were watching him do his performance, they had another screen where they were watching him, twenty-five years younger, doing the same performance. So, I think the movie industry is certainly embracing it, probably more than the music industry is. That's just my two-cent perspective.
>> HANY FARID: I think Chris is right. The movie industry is absolutely integrating it. I will tell you also, I think the technology sector is. There is nobody who is not using Copilot or ChatGPT to write code. You are out of your mind if you are not using it at this point. So, I think you are going to see that very quickly, and it's going to make people more efficient and faster and more creative in their process. I can't speak to the legal community.
>> KATHERINE FORREST: One little sort of cautionary note on that because I have a lot of clients who ask me all the time how can they use it, how can they use it safely?
>> HANY FARID: Well, I didn't say they were doing it safely.
>> KATHERINE FORREST: And there are two different questions, right? Saying you are out of your mind if you are not using it to write code, but, you know, as we saw in a public news article about what happened with Samsung not too long ago, there are real sort of issues. And you have to be extremely thoughtful about what the use case is with your particular business as to whether or not this kind of technology can be used safely in your business because of the confidentiality concerns, because of the accuracy issues. But where we are with those kinds of issues, again, is going to be different over what I'm going to consider to be a near-term horizon.
So, there are use cases now for sure. This is an incredibly interesting technology and moment that we’re in. But there are also some real concerns that ought to slow us down in terms of widespread use and adoption in confidential areas where accuracy is also an issue.
>> HANY FARID: Let me just expand on that. So, one of the concerns, and I think this is the case you are referring to, is that not only is ChatGPT writing code, but it will also debug code. You can upload your code and say, can you find the bug for me? But if you are uploading proprietary code, you have just publicized that code, and that's true whether you're a lawyer saying please summarize this brief or a doctor saying please analyze these images. You are uploading these things into the cloud. It is getting reintegrated into the system, and we are giving up huge amounts of privacy on top of what we have been doing for the last twenty years as well.
>> HERB STAPLETON: Excellent. Well, let's bring it back to the music industry for just a second. Chris, one of the things that we've discussed a little bit is how AI can lead to generated content that's put out there on the Internet. But a big part of what you do is live performance. And so, I want to go to you to talk a little bit about how, in some ways, that maybe protects artists like you from the AI risks. And then I want to go to Hany and talk a little bit about where that could go next.
>> CHRIS STAPLETON: Well, I think in the moment that we are in – and you can correct me if the technology is beyond what I'm saying here – I don't think there is an AI performance that you can go watch that somebody really wants to buy a ticket for over buying a ticket for Drake, or for what I do, or whatever. So there is protection for us in that.
But you know, if we're creating AI content, I think it is really dangerous for younger performers, if people are just going to the AI thing automatically. I think about younger performers coming up who aren't established. That scares me for the live version of that, because they have to be able to get content out that's not AI versions of themselves to make people come watch them live. So that's a scary part for me. I don't feel scared in this moment right now, on this stage, about what I do performing live.
>> HERB STAPLETON: And Katherine, is there a legal perspective on that as well? Then Hany, I’ll come to you.
>> KATHERINE FORREST: The one thing that I would say is that some of the areas that are in a way most protected are where there is a particular human element that is special and unique – the human genius that is live and on stage – where we don't yet have the ability to have an avatar that can replicate you with generative AI just as you are right now.
But I also wanted to add that that same thing is true for say, live theater, where there is a kind of energy with the human who is actually there at that moment.
But just for one moment, I want to separate out our conversation so that we can deal with it in two ways: the output of generative AI, which is a lot of what we're talking about – what's coming out on the other end – versus the input. Because there are also risks to a lot of creators and content makers from their content being used on the input side, irrespective of what the output side is. So, we'll call it the training side.
So, as we talk about this and talk about the risks, there are risks on both input and output for, I think, all kinds of creators, whether they be musicians writing lyrics or musicians creating music.
>> HERB STAPLETON: We have seen action on both the input and the output side just within the last couple of weeks in the music industry, right? Chris referenced it earlier, but there was an AI-generated Drake song that was requested to be taken down on the output side. But also in the news is Universal Music Group asking certain streaming platforms not to utilize their content for training of AI. So, we've seen some early, very timely rumblings of that.
But Hany, you can't recreate live performances so we're safe, right?
>> HANY FARID: Yeah, I would like to say the answer is yes, but I don't think it is. So, let's assume that Mark Zuckerberg's Metaverse dies the death that it deserves.
>> KATHERINE FORREST: Well, I'm actually a Metaverse proponent.
>> HANY FARID: Let's assume I'm right.
I think a lot of people believe that augmented reality is coming – a display that will replace the phone. So then, instead of walking around the world like this, we will be wearing a pair of glasses, not different from yours, and we will be able to superimpose digital recreations onto what you are seeing in the physical world. That is not too far off from a live performance.
You can imagine going anywhere, going right here and having somebody performing on the stage, and it is completely being presented to you in this augmented reality.
Real-time holograms are probably not that far off. So, will we replace the magic of Chris on stage and real performers? I don't know. But from a purely technological point of view, that blending of the digital and the physical world is coming and coming and coming. We've seen it, right? We went from mainframes to desktops to laptops to mobile devices, and so this is next. Now, whether it's VR or AR or XR, what exactly it is, I don't know, but I think that's coming.
Apple is about to announce their AR system I think in the next month. We will see what that looks like. So, I don't know. Probably in our lifetime, I think we're okay, Chris. I don't know about your kids.
>> CHRIS STAPLETON: Yeah.
>> HERB STAPLETON: Just as a caveat for the crowd, everyone up here is a real person. There are no holograms on the stage.
>> HANY FARID: Talk for yourself.
>> KATHERINE FORREST: It’s all a simulation.
>> HERB STAPLETON: We tested that out in the green room, so. Great.
So, I want to transition a little bit over to a discussion of the legal landscape that we face. This certainly applies to the music industry, but it also cuts across some of the other industries that we've talked about.
I think, from the FBI perspective, one of the things that we really look at with these synthetic media or generative AI kinds of matters is that it is not always clear that there's even a violation of the law at the stage at which the content is being developed.
At the stage at which it is later used, there may be violations of the law, of course. So, if AI is used to create something that is incorporated into a major fraud scheme, for example, obviously the fraud is against the law, but the creation of the technology that facilitates the fraud is not necessarily against the law, depending on intent and things like that.
Same thing with some of the sexual abuse type material that we see. We see this type of technology used in child sexual abuse cases, which are some of the most horrific things that we are confronted with in the FBI.
And so, I'm really interested to hear from the panel, and Katherine, we will start with you, some of the distinctions that we can draw between what we see here in the music industry and what the entire threat landscape can be from a legal perspective.
>> KATHERINE FORREST: Yeah. You know, it’s a complicated, complicated landscape. And I want to sort of take it in two pieces.
One, we have got the form of generative AI that we've been talking about in terms of music, but then also the deep fake generative AI that is behind some of the worst of the sexual exploitation that's been occurring.
And the legal landscape – I want to back up a moment just very briefly. You had made a comment, Herb, that at the training part, or the development-of-the-technology part, there might not be an issue. And that's a very hot debate right now in the legal landscape: whether the utilization of content that is copyrighted or protected under a variety of schemes is in fact itself an infringement. And there is going to be a lot of work done in the legal space to resolve some of these very important questions.
But the way that the laws can apply right now is that deep fakes and generative AI will be dealt with first in terms of copyright infringement. Okay, that's the generative AI piece. But in terms of some of the sexual exploitation, there are laws starting to appear on the books of some states, and federally, to deal with some of that deep fake technology.
But there is also a lot happening right now in the name and likeness space: whether using someone's name and likeness – and likeness can include voice, not only the physical presentation of an individual – can result in a legal infringement. In many places, in most states, it can. But there is also, as you just mentioned, Herb, fraud. You can defraud people by pretending that things are what they are not, and that's another avenue, as is what we call a Lanham Act violation, which is a passing-off violation.
So, there are a variety of legal tools right now that exist pre-legislation, pre-new legislation, while we are working on some legislation to try and begin to address certain problematic uses of this technology.
>> HERB STAPLETON: Hany, what are you seeing in your research?
>> HANY FARID: Yeah, so, where we have seen the weaponization of this technology first and foremost – in fact, where the term deep fake came from – is nonconsensual sexual imagery. People take the image, primarily of women, insert it into sexually explicit material, and then carpet bomb the Internet with it. And that can be politicians and lawyers and actors, or whoever – somebody you just don't like. And it's awful.
And it is true that there are some state laws of varying degrees of efficacy, there is work at a federal level to do that, and there is also work in the EU to regulate it. That has been ongoing for years and years and years and it has been particularly awful for women.
We have seen consistently a rise in the use of deep fakes and generative AI in fraud. It started off with videos. There was an amazing case in the early days of the Ukraine invasion where the mayors of Berlin, Madrid, and Vienna each individually spent twenty minutes talking to the mayor of Kyiv. But in fact, it was a deep fake – real-time video and audio of what looked like the mayor of Kyiv. So, we are seeing geopolitical fraud. We are seeing financial fraud. Everything from large scale, tens of millions of dollars, to small scale.
The new fraud over the last few months has been phishing scams where you get a phone call: Mom, Dad, I'm in trouble. I need help. I've been arrested. Send $5,000 in bail. You turn it over to the scammer, and now they're stealing people's identities.
Absolutely, you're going to see it on the disinformation side. We have an election coming up. It is coming. If you didn't see it this morning, the RNC released an ad, an anti-Biden ad, and the whole thing had generative AI images in it, speculating on what the world would look like, because they don't like Joe Biden.
So, I think on disinformation, on nonconsensual sexual imagery, on fraud, we are seeing big problems. And I think what's going to be super interesting here is whether Section 230 of the Communications Decency Act protects companies from this content, because this is not, in the traditional sense, third-party content. It is generative. And I think their defense on the 230 side is going to collide with their defense on the copyright side, because they can't have it both ways.
So, I think unfortunately, when you go to the Hill and you talk to the regulators, they don't really understand any of this. And so, they are very, very slow to sort of come to grips with what needs to be done.
And if you didn't see it, there was a Supreme Court case earlier this year, Gonzalez v. Google, and I swear to God, I think the Justices were begging Congress to do something about this. Now, whether they'll respond or not is another question.
>> HERB STAPLETON: So, you mentioned the RNC ad and a couple of other things, like the kidnapping schemes that we see. I think we have also seen some of this technology utilized in what we used to think of as email scams, where somebody can actually follow up with a reproduced voice that sounds like the sender of the email.
A lot of this is fairly innocuous technology that in some hands could just be used to create pretty pictures, but in the hands of bad actors, it can turn into something that causes real global damage.
>> HANY FARID: Yeah, I think that's right. When you look at ChatGPT and the generative images, you can say, okay, some harm can come from this. But you can see a lot of interesting and creative use cases.
When it comes to voice cloning, I am hard-pressed to imagine the scenario where I need to clone somebody else's voice. It is hard to imagine why you'd create a website where, for five bucks a month, I can upload anybody's voice – a couple of minutes of it – click a button that says I have consent to use this voice, and then type and generate them saying anything I want. So, it is hard to understand where that use case is and why companies are producing this technology, other than because they can.
>> KATHERINE FORREST: Right. We were talking about this backstage, because there are some use cases that are law enforcement related, where voice patterning can be used for criminal identification. There is now some very sophisticated utilization of more than 1,500 parameters in the human voice and the face – in terms of how sounds are made – that allows you to actually construct a physical image of a person based upon their voice itself.
So, this is really fascinating and very useful stuff. There are also some smaller commercial uses for voice patterning recognition – putting aside audio deep fake cloning – where you are able to use it, for instance, as a biometric, to recognize people at banks and other places where you don't want to have to go through passwords; you want to go through just their voice.
So, there are some – there are some use cases for it but I agree with you that it’s a little further afield when you are allowing just anybody to take anybody else's voice.
>> HERB STAPLETON: The first technology you described puts sketch artists everywhere out of business, right? Like sketch artists need to be on notice. Of course, you are not using a witness's memory to recreate that anymore but you are using the voice of the actual person that you are interested in.
>> KATHERINE FORREST: Right. And you are only putting a sketch artist out of business if you have got enough of a voice and it can't be interfered with too much. So, if you've got a bunch of voices all at once, you might have a hard time separating them out.
>> HERB STAPLETON: Fascinating. Katherine, can you talk a little bit from a legal landscape perspective about the Fair Use Doctrine and how that sort of applies to these scenarios?
>> KATHERINE FORREST: Right. You know, in the United States and also in Europe, there are doctrines which say that there are certain uses of copyright-protected works that are fair and that don't require compensating the owner. The typical one that we most often think of is educational use, which allows people to copy a small piece of a textbook, for instance, for a not-for-profit educational use.
But the Fair Use Doctrine comes into play after there has been an infringement. So, assume for the moment that there's been a copying. Then you look at whether or not there is a defense to the copying. And it is a four-factor test. Each one of the factors is looked at. It's very fact specific. No one of the factors is dispositive.
For instance, the very first factor is: is it a commercial or a not-for-profit use? So, you can think about that. And in certain case law, a word has been inserted there – transformative – and so that ends up being a factor in play.
But I want to go to the fourth factor of the Fair Use Doctrine, which is: what's the impact of copying someone's work with generative AI, for instance? What's the impact on the market?
And generative AI is a complete replacement for the market. It's something that we've never really had in the Fair Use Doctrine, where you've got a full copy and not just a diminution in the market – the goal is a complete replacement. It's sort of one and done, if you think of it that way. If you have got the copyrighted work once, you may never need it again.
So, the Fair Use Doctrine is going to see a lot of cases, including some that have already been filed, that will start to work through those factors with music and other kinds of content.
>> HERB STAPLETON: Hany, I want to go back to something that we briefly touched on earlier in the conversation, but I would like for you to give a little more context: do you think there are different considerations for creators on the input side versus the output side? And how would you describe those?
>> HANY FARID: Yeah. I mean, I think what's so interesting about this conversation and sort of the time we're living in is that yes, it’s all about AI and this new technology, but in many ways it is really about the last twenty years. It's about all of us uploading every single piece of content because we wanted some exposure on the Internet or for whatever reason.
And I think there's pressure on both sides of this. I think we now have to start thinking more carefully about what we are creating and what we're putting on the Internet. I think the folks in the middle, the ones developing the technologies, have to think very carefully about what they are developing, whether they should be developing it, and how they put safeguards in place. Because Silicon Valley very much continues this model of move fast and break things, and I think that was always an idiotic – this is my second anti-Facebook thing – a really stupid motto, move fast and break things. We should move slowly, innovate, and not break things. There's an idea.
And then on the output side – and this part is really interesting to me – we have been talking about generative AI, but we have missed one part of it, which is that if I had the ability to create nonconsensual sexual imagery or disinformation and all I could do with it is send it to my five friends, all right, who cares? But I can do more than that. We have democratized access to publishing. I can put it on Twitter and YouTube and Facebook and TikTok.
And so, in many ways, the generative AI threat is here because social media collected all this data, which is feeding the AI. And it's here also because once I create this content, I can get it out on the Internet again. So, it's this virtuous – is that the right word? Maybe not virtuous – loop between the two. And I think that's a really interesting landscape. We shouldn't forget that traditional social media is still a big part of our problem today.
>> HERB STAPLETON: So Chris, when we think about things on the output side, I'm really curious to hear your insight on how you think that affects you as an artist now and how it might affect you in the future. I think we talked earlier a little bit about things that are done in the style of Chris Stapleton.
>> CHRIS STAPLETON: Sure.
>> HERB STAPLETON: So, tell us a little bit about does that concern you now? And how do you kind of deal with that from an artist perspective? And then, as a follow on, if you could, could you tell us a little bit about where does it reach a point where you start to become very uncomfortable, if you are not there already?
>> CHRIS STAPLETON: I'm very uncomfortable right now. The more we talk about it, the more anxiety I have.
>> HANY FARID: Sorry, Chris.
>> CHRIS STAPLETON: That's okay.
All kidding aside – even though I maybe wasn't kidding all the way – yeah, sorry. There are a lot of parts to that question.
>> HERB STAPLETON: Yeah, so the first thing is just: when you see things that are made in the style of Chris Stapleton, how does that affect you? Not at all? And how do you respond to it?
>> CHRIS STAPLETON: I haven't seen that many things made that way, but if I saw something like that, I would probably listen to it and see how good it was. And if I thought it was good, I would be like, well, hey, that's pretty good. And then I would be like, well, maybe this guy has put me out of business. I don't know. I don't know how to feel about that. And there are a lot of things.
But if someone did do an effective version of me somehow, I would probably call them up and say hey man, how long did it take you to do this because it takes me a long time.
Let's make a – it turns into Monty Hall, Let's Make a Deal or something.
>> HERB STAPLETON: Well, but they've already used a lot of your work, right? I mean, that's how they did it in the first place. They've used a technological tool to build on it. At least when we talk about the input side, they have taken all the input that you have put in, and then they've maximized the output in a way that's legal – maybe. Maybe legal. Maybe not. To be determined.
>> CHRIS STAPLETON: You know, I don't know that I can do a whole lot about that. But if there is any comfort – I don't know that comfort is the right word – just like with the streaming thing, when we go back to the early days of that, I feel like the companies that I have partnerships with, the bigger companies, the Universals and Sonys of the world, when they start losing a bunch of money because of stuff, they figure out how to monetize those things and reach equilibrium as best they can, and that kind of trickles down to guys like me.
So, I have to have a little bit of faith, if there is such a thing as having faith in the music business, that we’ll arrive at that eventually. Even though it may be the Wild West and chaotic for a moment.
>> HERB STAPLETON: And then, of course, I think one thing that you and others who create content and have a specific brand that you promote to the world will eventually have to be on the lookout for is things that are not just inconsistent with that brand but cause harm or damage – people saying things that are offensive using your own likeness or someone else's likeness. I mean, those are obviously things that you are going to have to monitor and would probably look to shut down.
>> CHRIS STAPLETON: We monitor those things already. There are all kinds of fake Facebook accounts that pop up and try to get money from people and things like that.
So, it's just another thing to monitor, and we will have to spend resources on that, and then hope that, you know, our PR people are good enough and that I've behaved well enough on my own that it is not actually me. So, I don't know how else to deal with that. I don't think there is any other way to deal with that.
>> HERB STAPLETON: Hany, we talked a little bit about having something human that the AI can't replicate. But I'm interested to know – obviously there is potential harm out there that can be done – should we be putting things in place that essentially slow the technology down, to Katherine's earlier point about the speed at which we're moving being exponentially faster than what we faced when the Internet was a new thing?
>> HANY FARID: Yeah. I mean, I would like for the industry to self-regulate, but I am also realistic that it probably won't. I would like for our legislators to get their act together and do something, but I'm also realistic – there is no sign that they're catching up with the technology.
So, I think the pressure has to come from somewhere. I think it is going to come from the copyright side. I think Chris is right that the big studios with deep pockets are going to start to have a forcing function. There are some things that we can do from the technology side, and I will name one of them that I am particularly keen on, although it's not perfect. I recognize that we have tried this before, but I think we've learned a lot in the last twenty years. I think every single piece of synthetic content, whether it's text, images, video, or audio, should have robust, imperceptible watermarks in it, so that downstream it can be detected by a phone, a computer, or whatever it is.
I think you can also fingerprint every single piece of content, which is slightly different.
So, the way the watermark works is you embed it into the piece of content, and then the content goes out in the wild, and you hope that the watermark holds, the way we do with currency.
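To make the watermarking idea concrete, here is a minimal sketch of a least-significant-bit watermark in Python. It is illustrative only: the bit pattern and function names are assumptions, and real schemes must survive compression, cropping, and re-encoding, which this toy version does not.

```python
# Minimal LSB watermark sketch: stamp a known bit pattern into the low bits
# of the first few pixels, then check for it downstream. Illustrative only.
from PIL import Image  # pip install Pillow

MARK = 0b1010110011010101  # assumed 16-bit "synthetic content" tag

def embed_watermark(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    pixels = list(img.getdata())
    for i in range(16):  # write one bit of MARK into each red channel's low bit
        r, g, b = pixels[i]
        bit = (MARK >> (15 - i)) & 1
        pixels[i] = ((r & ~1) | bit, g, b)
    img.putdata(pixels)
    img.save(out_path, format="PNG")  # lossless, so the low bits survive

def has_watermark(path: str) -> bool:
    pixels = list(Image.open(path).convert("RGB").getdata())[:16]
    found = 0
    for r, _, _ in pixels:
        found = (found << 1) | (r & 1)
    return found == MARK
```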
You can also fingerprint. So, when OpenAI generates an image, they can extract what's called a perceptual hash, or fingerprint, from it, and they can store that so that when the image starts circulating, we can query back to the database and ask: is this something that you've created?
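The fingerprinting idea can be sketched just as simply. Below is an assumed "average hash" perceptual fingerprint with a Hamming-distance lookup; production systems use far more robust hashes, and the 8x8 size, threshold, and file name are illustrative choices.

```python
# Average-hash fingerprint sketch: downscale, threshold at the mean, compare
# hashes by Hamming distance. A stand-in for real perceptual hashing systems.
from PIL import Image  # pip install Pillow

def perceptual_hash(path: str, size: int = 8) -> int:
    """One bit per pixel of a grayscale size x size thumbnail vs. its mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

# The generator stores a fingerprint of everything it synthesizes...
synthetic_db = {perceptual_hash("generated.png")}  # hypothetical file

# ...and later anyone can ask whether a circulating image is a near match.
def was_generated(path: str, threshold: int = 10) -> bool:
    h = perceptual_hash(path)
    return any(hamming(h, known) <= threshold for known in synthetic_db)
```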
The flipside of that is we can also start to authenticate real recordings. So, on this device, when I make a recording – an image or audio or video – this is the thing that knows, the piece of hardware that knows, that this is a recording of police violence, a human rights violation, a politician saying something offensive. And you can fingerprint and cryptographically sign that and hold onto it as proof. So we can prove what's real is real, and we can prove what's fully synthetic is synthetic.
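At its core, that authentication step is hashing plus a signature from a key the capture device holds. A minimal sketch, assuming the Python `cryptography` package and leaving out the key provisioning and metadata that real provenance standards such as C2PA define:

```python
# Capture-time authentication sketch: hash the recorded bytes and sign the
# hash with a device-held key. Any later edit invalidates the signature.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()  # in reality, provisioned in hardware
public_key = device_key.public_key()

def sign_recording(media: bytes) -> bytes:
    return device_key.sign(hashlib.sha256(media).digest())

def verify_recording(media: bytes, signature: bytes) -> bool:
    try:
        public_key.verify(signature, hashlib.sha256(media).digest())
        return True
    except InvalidSignature:
        return False

clip = b"...raw sensor bytes of an audio/video recording..."
sig = sign_recording(clip)
assert verify_recording(clip, sig)                 # untouched: proof holds
assert not verify_recording(clip + b"edit", sig)   # any tampering breaks it
```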
There is this middle ground, right – all of this stuff in the middle – but that at least anchors us. And I think we have the technology to do it; we just need the right incentives to actually get the companies on board.
>> KATHERINE FORREST: Herb, can I just add to that? I agree with you on those tools on the output end, but they don't solve the very important input issue. Part of this whole debate with generative AI is that it's generating new stuff that isn't necessarily a replication of a particular artist with a particular song. It's something new altogether.
So, the use of protected, copyrighted material on the training side is a whole different sort of issue: how do we actually protect that material so that it doesn't just become an input which is then never seen or heard from again? Its utility is sort of like an orange that has all of its juice squeezed out of it and comes out as orange juice. It is no longer the orange. And you have got a real problem.
Particularly with brand-new artists – young people who want to make their own art, who haven't yet made enough money to have the resources to be out there in the world monitoring all of the different accounts. All of the young musicians who want to get their start in life. So, I think the input side is also a conundrum.
>> HANY FARID: I agree. And there is one thing – again, imperfect – that you can do, which is to add adversarial noise to images, audio, and video that disrupts the training. For example, people have started to do this with images of your face. You might want to put an image of yourself online, but you don't want Clearview AI to scrape it and put it into their database. You add an imperceptible noise pattern that disrupts the machine learning.
The problem is it only disrupts it today, and it is almost impossible to make that foolproof against next year's machine learning algorithms. So, I think there are things we can do in the short run; it's not clear that they work in the long run to protect the input.
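For the curious, the basic primitive behind such cloaking is gradient-based adversarial noise. Here is a minimal FGSM-style sketch using a toy PyTorch model; the model, epsilon, and shapes are all illustrative assumptions, not any particular product's method:

```python
# FGSM-style adversarial noise sketch: nudge each pixel in the direction that
# increases a model's loss, by an amount too small for a person to notice.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)                 # toy stand-in recognizer
image = torch.rand(1, 3 * 32 * 32, requires_grad=True)   # a flattened "photo"
label = torch.tensor([3])                                # its true identity

loss = F.cross_entropy(model(image), label)
loss.backward()                                          # gradient w.r.t. pixels

epsilon = 0.03  # perturbation budget: visually negligible
cloaked = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

# The cloaked image looks unchanged to a person but fits the model worse;
# as noted above, this disrupts today's models, not necessarily next year's.
print(F.cross_entropy(model(image.detach()), label).item(),
      F.cross_entropy(model(cloaked), label).item())
```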
>> HERB STAPLETON: Excellent. Well, as Katherine said at the top, this is the beginning of the beginning of the beginning of this issue. And I think this is the beginning of the beginning of the beginning of this type of conversation.
We have really covered a lot of ground. We have about six or seven minutes left. I just want to give each of you as a panelist an opportunity to just provide some closing remarks to the crowd here. So, Hany, we will start with you and work back to Chris.
>> HANY FARID: Chris said something interesting to me a minute ago. He said, you know, if somebody says something offensive and tries to use my likeness, we have PR people for dealing with that. And then you added, assuming I'm behaving, right? Which, I'm not saying you're not.
But there’s this interesting world we’re entering and it’s what’s called the liar’s dividend, which is that when we enter this world, when any image, audio, or video of anybody saying or doing anything can be fake, then nothing has to be real anymore. We can deny reality.
To give you a sense of how fast the landscape has shifted: in 2016, the Access Hollywood tape comes out of then-candidate Trump saying some pretty bad things about women, and he apologized. Today, if that tape came out, there's no apology. "It's not me; it's fake."
So, how do we reason about the world when it's not just that things are fake, it's that they can be fake? How do we reason about a very complicated and fast-moving world? And I think that's what keeps me up at night – sorry about that, Chris – more than the actual fake stuff.
>> KATHERINE FORREST: And I think those are all such important points. What I would add is that we are at an incredibly exciting moment in this new technological world with what AI is capable of. We're also at a moment when there are lots of unknowns, lots of uncertainties, and lots of dangers that are both ethical and security related. What we want to do is find a way to embrace the benefits of this new technology, but also make sure that we continue to recognize that our U.S. Constitution tried to strike a balance between giving creators certain kinds of exclusive rights and, in exchange, giving them some amount of compensation for it.
So, what we want to do is find a way, with this new technology, not to lose some of the essential values that we have in this country – the values that are part of what makes our country able to appreciate creative human genius just like that of Chris Stapleton.
>> HERB STAPLETON: Chris?
>> CHRIS STAPLETON: I have to end? Is that what –
>> HERB STAPLETON: I have got some things to say after you.
>> CHRIS STAPLETON: Everybody here is so well spoken but –
>> HERB STAPLETON: I have some things to say after you.
>> CHRIS STAPLETON: Well, I think this is important. As we're talking, I keep remembering that the people who are creating these things are creators too. And I appreciate that about the creative process of anybody. But the thing that I have gotten from this the most is that it's important that we all have these kinds of discussions. If we're having these kinds of discussions as, you know, people developing technology and legislators and law enforcement, then we can arrive at something that really is beneficial to everybody and helps everybody do their jobs, and we can move on without, you know, giant speedbumps and potholes popping up. So I hope that maybe this gets some of that turning. You know, we'll see.
>> HERB STAPLETON: So, from my perspective, first of all, I just want to say thanks to this extraordinary panel. This has been a privilege for me to be a part of – to share it with my brother, but also to share it with Katherine and with Hany. It has been an extraordinary experience, and I have learned a ton of things I didn't know during this process.
From the FBI's perspective, we are not going to see a decrease in the number of national security and criminal threats that incorporate these types of technologies. We are only going to see that continue to grow.
I don't know exactly how we'll confront that. We have a lot of people who are working on it very hard. But one thing I know for sure is that we cannot confront it without partnership – without people like those up here on the stage, people who truly know this area better than anyone else.
So, I am really grateful for your participation in this panel but also for your partnership.
As we think about the future, it is going to take not only partnerships here domestically but also internationally, as these things become more and more of an issue. And when they do rise to the level where law enforcement or the government needs to intervene, we also know that they will not be issues confined by international borders – or at least not geographic borders. And so, we're going to have to look at this problem holistically, within this country but also in the context of international partnerships.
And so, on the front of international partnerships and the FBI, as I wrap up this particular session, I really want to urge you to catch some of the other panels that we have going on this week. The FBI's panels, you'll see, are heavily focused on partnership. And coming up this afternoon, we have a panel focused on the international partnerships that we value to support Ukraine. I would really encourage all of you, if you have a free moment, to pop over and see that panel discussion at 2:25 PM today in Moscone West 3001. I hope you will join our panelists for that discussion. It might not be as interesting as this one, but I think it will be really close.
So, I will just close out by thanking each and every one of you for being here today. By saying thank you to Hany, to Katherine, and to Chris for lending their expertise to this beginning of the beginning of the beginning of the conversation, and I hope you guys have a great rest of your time here at RSA. Thank you very much.
Participants
Herb Stapleton
Moderator
FBI

Katherine Forrest
Panelist
Chair, Digital Practices Group, Paul, Weiss, Rifkind, Wharton & Garrison LLP

Hany Farid
Panelist
Professor, University of California, Berkeley

Chris Stapleton
Panelist
Recording Artist