It is unsurprising that Artificial Intelligence (AI) is one of the hottest topics at RSA Conference 2024. The potential impact on the field of cybersecurity is nothing short of revolutionary with one speaker likening its transformative capabilities to the invention of electricity. While there were many topics covered on Tuesday at RSA Conference 2024, some of the most highly anticipated talks focused on everything AI. Speakers covered a broad range of topics from the need for proactive governance to intersection of AI and art while focusing on the creative power of the technology.
One of the first keynotes of the day took place on the Moscone South Stage as Rumman Chowdhury CEO and Co-founder of Humane Intelligence and Alejandro Mayorkas Secretary of The United States Department of Homeland Security (DHS) sat down for their talk “Homeland Security in the Age of Artificial Intelligence,” where they discussed some of the important work that they are doing as members of DHS’s Artificial Intelligence Safety and Security Board. This diverse group, that draws on the collective expertise of tech company representatives, civil rights leaders, infrastructure owners, and a myriad of other stakeholders, is focused on harnessing the tremendous power of AI in an effort to provide protection for critical infrastructure in the United States while developing agreed upon basic principles that limit the risks of abuse by potential bad actors and through the unintended consequences of policies lacking in planning and foresight.
The hope is that this board will provide critical guidance to the US’s emerging policies on the use of AI while also acting as a beacon for the rest of the world by providing a harmonizing influence on the policies of other nations around the globe. Alejandro went on to discuss the rewards of working for the government on such an important and transformative issue saying, “There is a tremendous value in finding meaning in one’s work and it is, I think, a luxury. I would posit that most people work to make ends meet and to be able to support themselves and their loved ones. To be able to work, to make a living and also deliver meaning for others and value and have an impact is an extraordinary privilege.” The value and meaning of the work being done at the DHS was obvious as Alejandro related stories about AI being used to help with the immigration of children and pairing them with appropriate adoptive families or any one of the many pilot programs currently employed in broad areas such as identifying victims of child abuse or using AI to streamline the Federal Emergency Management Agency (FEMA) response time in the event of disaster. The work being done as a part of the AI board directly ties into yesterday’s presentation on Navigating the AI Frontier: The Role of the CISO in GenAI Governance in which James Christiansen and James Routh discussed the necessity for establishing safeguards around the use of AI and how CISOs can help to shape that future.
During her keynote speech on Tuesday titled, “Art in the Era of Artificial Intelligence,” Eileen Isagon Skyers, Media Art Curator, spoke about finding value of a different kind in artificial intelligence. She dispelled many of the notions held by those who feel AI generated art is a hollow, cartoonish imitation of real art. She went on to explain, “there are pessimists who feel that AI poses a great threat to human creativity and optimists that see it as an extension of human creativity.” She cited many examples of breathtaking artwork created by artists who worked in collaboration with AI to produce works that are both familiar and at the same time feel entirely dissimilar. The techniques used by these artists ranged from feeding AI photos taken by the original artist to painting over parts of AI generated pictures. “In many ways, AI art is a form of curation,” Eileen explained. These works of art are not created entirely from the ether via text prompts, they are the products of artists who are pushing the boundaries of art and AI. While the debate will surely continue regarding whether or not AI generated art is “real,” the sense of wonder and thought provoking nature of these works is certainly real.
In Responsible AI: Adversarial Attacks on LLMs, Matt Frederickson focused on a completely different aspect of AI creation by presenting his research on attacks that can be used to “break” language learning models (LLM) like ChatGPT in order to bypass their alignment with content policy restrictions. His research is based on the method of attacking these models with adversarial questioning designed to garner an affirmative response from the LLM, thus bypassing or “breaking” the alignment. Although the ramifications may at first seem limited, when this ability is coupled with instructions to autonomously attack a system, the possibilities for malicious exploitation become clear. He urges LLMs to rigorously test and monitor their systems and their susceptibility to adversarial questioning.