Cybersecurity and the Ethical Impact of Disinformation


Posted by Greg McDonough

Deliberately spreading lies, exaggerating truths, omitting facts, and separating statements or actions from context as a means of crafting a narrative intended to manipulate a targeted audience is hardly a new concept. However, with the deep entrenchment of social media in the lives of many and the proliferation of alternate news sources, disinformation campaigns have become troublingly powerful and effective. These manipulation attempts often provide a convenient vehicle for serious cybersecurity attacks such as phishing and malware. Although these issues are serious in and of themselves, the bigger problem is that disinformation has become so pervasive that it has begun to erode the underpinnings of trust in our society. Some governments and tech companies have been taking steps to curb the spread of disinformation, but the way forward is complicated.

While cybersecurity companies have always been on high alert in the fight against malware, disinformation provides yet another field of engagement for bad actors looking to gain access to victims’ sensitive information. Through artificial intelligence (AI), attackers can create specific, targeted phishing campaigns that entice users to follow malicious links, where they may unwittingly enter personally identifiable information (PII) such as birthdates and Social Security numbers. Once obtained, this information lets bad actors acquire credit products, drain bank accounts, and take advantage of their targets in myriad other ways. False news posts and e-mails are also frequently used as bait, luring victims to malicious websites that install malware and grant attackers access to devices and even the networks they connect through.
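The lure-and-link pattern described above can be partially screened with simple URL heuristics before a user ever clicks. The sketch below is illustrative only: the brand list, thresholds, and red-flag categories are assumptions for demonstration, not a production phishing filter.

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist of brands to protect; real filters use far larger,
# curated datasets. These entries are illustrative assumptions.
TRUSTED_BRANDS = {"paypal": "paypal.com", "microsoft": "microsoft.com"}

def suspicious_url(url: str) -> list[str]:
    """Return heuristic red flags for a URL (empty list = none found)."""
    flags = []
    host = urlparse(url).hostname or ""
    # A raw IP address in place of a domain is a classic phishing tell.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        flags.append("ip-address host")
    # Lookalike domains: the brand name appears in the hostname,
    # but the registered domain is not the brand's real domain.
    for brand, real_domain in TRUSTED_BRANDS.items():
        if brand in host and not host.endswith(real_domain):
            flags.append(f"lookalike of {real_domain}")
    # Deeply nested subdomains are often used to bury the real domain.
    if host.count(".") >= 4:
        flags.append("deeply nested subdomains")
    return flags
```

For example, `suspicious_url("http://paypal.account-verify.net/")` would flag a lookalike of paypal.com, while the genuine `https://www.paypal.com/signin` passes clean. Heuristics like these catch only the crudest lures; AI-generated campaigns increasingly use clean-looking infrastructure, which is why they remain so effective.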

Disinformation poses serious concerns from a cybersecurity standpoint, but its ethical implications are far more insidious. In the not-too-distant past, people relied upon commonly accepted news sources such as broadcast networks. Although bias was certainly a concern, these outlets made a good-faith effort to report facts and verify sources. Today, people are bombarded with information from a wide variety of outlets, many of which knowingly spread false reports or make little to no effort to ensure that claims are substantiated. Constant exposure to so much disinformation leaves people feeling that they can no longer trust anything. The end result is that fewer than 40% of people “trust the news most of the time.” This paints a very bleak picture of a society in which no one knows whom to turn to for reliable information.

Beyond the Erosion of Trust

In addition to eroding trust, disinformation is frequently weaponized to sway public opinion in favor of a certain agenda, influence elections, and even incite violence. Generative AI has made the development of deepfakes, which are convincingly created or altered images, video, or audio, far simpler and more widespread. These deepfakes can be used to lend credence to false claims, deceive voters, and influence public opinion. The mere possibility of deepfakes also allows for “misinformation about misinformation,” also known as the “liar’s dividend.” This tactic creates scenarios where damning evidence can be written off as wholesale fiction, further muddying the public’s understanding of what is real and what is deception. Although there is important work being done to ensure that elections remain fair and democratic, some of these disinformation tactics are so prejudicial that even when they are disproven, their influence lingers. In other instances, governments are intentionally deceiving their populations as a basis for waging war.

Disinformation has also proven to be an effective means of influencing social beliefs and economic choices. Rampant disinformation and misinformation have had a serious impact on public perception of issues such as vaccines, their effectiveness, and their side effects. This may make the spread of future diseases and viruses more difficult to contain, and it is a contributing factor in the re-emergence of many infectious diseases once thought to be under control. Disinformation can also have a profound economic impact, which can be as simple and focused as prejudicial information designed to shake faith in a single product or company, or as complex and far-reaching as a deliberate effort to undermine a nation’s economy and weaken its currency.

There is no simple solution for countering disinformation. Its many forms make even identifying it an onerous task. While instances such as deepfakes are clearly designed to deceive and manipulate, omitting certain survey results or presenting misleadingly scaled graphs is not as clear-cut an issue. As governments struggle to regulate the problem, technology companies need to step up and play a more proactive role in combating disinformation.

A Path Forward

One step companies can take is to communicate more transparently about the algorithms used to funnel certain information toward users. These algorithms often create echo chambers in which disinformation is routed to the individuals most susceptible to it. Social media companies also need to take a more active role in fact-checking posts, taking action against accounts known to spread disinformation, and recognizing disinformation campaigns in their early stages so they can be neutralized.

Ultimately, becoming a digitally literate, educated consumer of information is the best defense against disinformation. Look to a variety of trusted sources that present information from different viewpoints. Think critically about information shared on social media, and verify claims with fact-checking websites. One of the best ways to stay educated is to keep up to date with the strategies and techniques used in disinformation campaigns and other cybersecurity risks by accessing the wealth of information from industry-leading experts in RSA Conference’s extensive library.

Contributors
Greg McDonough

Cybersecurity Writer, Freelance

Human Element, Hackers & Threats

disinformation campaigns/fake news, ethics, phishing, malware, Artificial Intelligence / Machine Learning

Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

