While many of Tuesday’s talks centered on the transformative potential of artificial intelligence (AI), from accelerating the pace at which our government works to the intersection of AI and art, other conversations focused on the disruptive and destructive capabilities of generative AI (GenAI) and the global ramifications of the AI arms race. In Tackling Deepfakes, Wars, and Other Security Threats in the GenAI World, panelists Adam Isles, Natalie Pierce, and Lucy Thomson, along with moderator Patrick Huston, discussed the darker societal impacts already being felt as a result of GenAI and how they expect these concerns to evolve across the United States’ legal system, global digital identity authentication, election interference, and autonomous warfare.
Lucy Thomson presented the seismic problem GenAI poses for the legal profession and its effects on judges, jurors, and the very concept of “evidence.” With the proliferation of deepfake technology that can produce hyper-realistic, fictitious depictions of voices and video, and the growing ease with which such fakes can be created with AI, Thomson described the problem known as the “liar’s dividend.” The term refers to the ability to dismiss any incriminating video evidence as AI-generated, coupled with the prospect that guilty parties can manufacture their own exculpatory evidence using the same deepfake technology. The eventual result is a dilemma in which no one can trust what they are seeing, whether it points to innocence or guilt. In addition, research has shown that some evidence can be so prejudicial that it negatively influences jurors even after they learn it is not real. This leaves judges with a serious problem: evidence laws are federal, and the current admissibility threshold is based on accuracy, not reliability. While there is a push to develop new rules governing the use of deepfakes in the court system, there is currently no solution or clear path forward.
Adam Isles’s major concern regarding GenAI centers on digital identity subversion. With the massive shift toward a digital mobile economy affecting payments, hiring, and contracts, bad actors recognize the tremendous potential for abuse and are exploiting digital identity services for malicious purposes. Authentication and identity are currently in a vulnerable state, facing attacks from malicious actors who use AI to generate false identities, attacks on the AI systems that the authenticating bodies themselves rely on, and the exploitation of loopholes created by the unintended consequences of short-sighted policies. To combat these attacks, organizations are strengthening their processes around the three vital aspects of identity authentication: enrollment and identity, validation, and verification. Enrollment and identity focus on establishing a unique identity; validation is concerned with the artifacts supporting that identity; and verification confirms that all of this evidence actually belongs to the individual in question. The current system is frequently abused by bad actors from countries such as North Korea as a means of obtaining employment at US tech companies under false pretenses.
Natalie Pierce expressed serious fears about GenAI’s role in election interference. She cited the current example of deepfakes significantly affecting elections in India, explaining that “disinformation and misinformation campaigns are rampant.” Political dissidents are using deepfake videos of Bollywood actors and government officials to deceive the voting public and undermine the democratic process, and similar levels of interference are expected in the upcoming US presidential election. To mitigate the potential damage from deepfakes, Pierce urged a conscious effort to educate the public on the capabilities of GenAI so that voters can make informed choices about how they consume media regarding the candidates.
Patrick Huston warned the crowd that GenAI is enabling “killer robots” that can act autonomously and carry out orders without fear or conscience. Although this could revolutionize warfare, the potential for abuse of this technology in the wrong hands is limitless. As a result, the development of autonomous weaponry needs to be closely monitored, and substantial safeguards must be put in place.
The picture the panel painted of GenAI’s future impact seems bleak at first blush. For each scenario, however, GenAI also offers a measure of hope, such as the ability to identify falsified evidence or detect fabricated identities. GenAI is also being used effectively, with some limitations, to combat malware, as Vicente Diaz explained yesterday during his presentation How AI is Changing the Malware Landscape. AI is truly a double-edged sword, responsible for some of the most promising developments and the most frightening possibilities in cybersecurity.