Ben's Book of the Month: FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions


Posted by Ben Rothke

With significant advances in machine learning and artificial intelligence, detecting fake and bogus content has become quite difficult. Deepfakes, disinformation, and similar attacks have also improved dramatically, making it increasingly challenging to discern what is real from what is fake.

Early phishing emails were simple to detect. The attackers made such blatant grammatical and spelling errors that it was hard not to see that they were fake.

In 2009, a new cryptanalytic attack on the Advanced Encryption Standard (AES) was found to be more effective than a brute force attack. Noted security expert Bruce Schneier observed that “attacks always get better; they never get worse.”

This week, the Office of the Director of National Intelligence (ODNI), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a statement that Russian influence actors manufactured a recent video that falsely depicted individuals claiming to be from Haiti and voting illegally in multiple counties in Georgia.

The agencies based their findings on available information and on the prior activities of Russian influence actors, including other videos and disinformation operations. Russian influence actors also manufactured a video falsely accusing an individual associated with the Democratic presidential ticket of taking a bribe from a US entertainer.

This Russian activity is part of Moscow’s broader effort to raise unfounded questions about the integrity of the US election and stoke divisions among Americans. In the lead-up to election day and in the weeks and months after, the agencies expect Russia to create and release additional media content that seeks to undermine trust in the integrity of the election and divide Americans.

It is clear that deepfakes are not just being used to push fake cryptocurrency and other scams. They are being used in active attempts to undermine American democracy.

In 2019, I reviewed Transformational Security Awareness: What Neuroscientists, Storytellers, and Marketers Can Teach Us About Driving Secure Behavior by Perry Carpenter. He’s back with another excellent book in FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions (Wiley). Here, he has written an informative guide on dealing with the new era of disinformation we find ourselves in.

The truth is that the creation of fake news and content is far from new. Misinformation, disinformation, and other forms of media manipulation have existed since the beginning of communication. What has changed is that machine learning and artificial intelligence have made creating such content so easy and so convincing that they are altering the way we live.

While it used to take significant time and effort to create fake content and information, today readily available tools can do so quickly, easily, and cheaply. Even experts cannot always be sure what is real and what is fake, or whether content was created by a human or by AI.

There is still much hype around generative AI. But beyond the hype, it is a powerful and influential tool. Carpenter has written an interesting and entertaining guide that provides the reader with a thorough understanding of generative AI and AI-generated media.

Generative AI and AI-generated media are particularly powerful because, despite our intelligence and reason, we remain creatures easily manipulated by our automatic, emotionally driven cognition. These cognitive biases are significant: the cognitive bias codex lists nearly 200 different types.

Building on that, the book details the many types of AI-powered deception that prey on those biases. These range from phishing and financial fraud to romance scams, online harassment, and many more.

After seven chapters detailing the problems, risks, and threats, chapter eight turns to the defenses that can be used. One of the more compelling approaches is the SIFT method.

SIFT (Stop; Investigate the source; Find better coverage; Trace claims, quotes, and media to their original context) is a series of actions you can take to determine the validity and reliability of claims and sources on the web. The method is quick and simple and can be applied to many kinds of online content: news articles, scholarly articles, social media posts, videos, images, and more. SIFT is an additional set of skills that builds on older checklist approaches to evaluating online content, such as the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose).
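As a rough illustration of the idea (my own sketch, not code from the book or any published tool), the SIFT steps can be expressed as a simple checklist that walks a piece of content through each question and records the answers. The names and the example URL below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of SIFT as a checklist -- illustrative only.
SIFT_STEPS = [
    ("Stop", "Pause before reacting or sharing. Do you know and trust the source?"),
    ("Investigate the source", "Who published this, and what is their expertise or agenda?"),
    ("Find better coverage", "Do trusted outlets or fact-checkers report the same claim?"),
    ("Trace to the original", "Can the claim, quote, or media be traced back to its original context?"),
]

@dataclass
class SiftReview:
    url: str
    answers: dict = field(default_factory=dict)  # step name -> (passed, note)

    def record(self, step: str, passed: bool, note: str = "") -> None:
        self.answers[step] = (passed, note)

    def verdict(self) -> str:
        if len(self.answers) < len(SIFT_STEPS):
            return "incomplete review"
        passed_all = all(ok for ok, _ in self.answers.values())
        return "plausible" if passed_all else "treat as unreliable"

if __name__ == "__main__":
    review = SiftReview(url="https://example.com/viral-video")  # hypothetical URL
    review.record("Stop", True, "Paused; source unfamiliar.")
    review.record("Investigate the source", False, "Account created last week, no history.")
    review.record("Find better coverage", False, "No reputable outlet carries the claim.")
    review.record("Trace to the original", False, "No original footage or context found.")
    print(review.verdict())  # -> treat as unreliable
```

The point of the exercise is the discipline, not the code: forcing yourself through all four questions before sharing is what SIFT is designed to instill.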

While countless attackers use deepfake technologies to exploit people, the industry has responded with tools to defend against them. It’s a quickly expanding sector, with new tools and solutions coming out regularly. 
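To give a sense of how thin any single detection signal can be, here is a toy sketch of my own (not how Wolfsbane AI or any other product works) that merely checks an image for metadata some generative tools are known to leave behind. It assumes the Pillow library and a hypothetical file name.

```python
from PIL import Image  # Pillow is assumed to be installed

# Metadata keys that some generative-AI tools write into images.
# Illustrative and far from exhaustive.
GENERATOR_HINT_KEYS = {"parameters", "prompt", "Comment"}

def naive_metadata_scan(path: str) -> list[str]:
    """Return metadata findings that hint an image may be AI-generated.

    A heuristic sketch only: metadata is easily stripped, so the absence of
    these markers proves nothing, and their presence is only a hint.
    """
    img = Image.open(path)
    findings = []
    # Format-specific metadata (e.g., PNG text chunks) surfaces in img.info.
    for key in img.info:
        if key in GENERATOR_HINT_KEYS:
            findings.append(f"info:{key}")
    # The EXIF Software tag (0x0131) sometimes names the generating tool.
    software = img.getexif().get(0x0131)
    if software:
        findings.append(f"exif:Software={software}")
    return findings

if __name__ == "__main__":
    print(naive_metadata_scan("downloaded_image.png"))  # hypothetical file path
```

Because metadata is trivially removed or forged, commercial detection products analyze the content itself rather than relying on labels like these, which is precisely why this is a growing product category.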

One of the more interesting tools I’ve seen is Wolfsbane AI, a deterrence and detection (DnD) system meant to protect digital content. It can detect AI-generated content and monitor client identities on social media for deepfakes. 

Deepfakes are here to stay, and the first step in defending against them is awareness of the problem. In FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions, Perry Carpenter has written an insightful guide that makes the reader eminently aware of the many risks of deepfakes and misinformation and encourages them to take action against those threats.


Contributors
Ben Rothke

Senior Information Security Manager, Tapad

Machine Learning & Artificial Intelligence

disinformation campaigns/fake news Artificial Intelligence / Machine Learning phishing Encryption security awareness Security Awareness / Training


