Joy is a wonderfully contagious emotion, and it’s a feeling that erupted in me last week when I had the opportunity to meet so many members of the cybersecurity community in person at RSA Conference 2023. My cup continued to fill this week seeing my LinkedIn network post about the positive experiences they had. I’m not on Facebook or Twitter, so my social media is limited to the protective perimeter of positivity fed to me by the LI algorithms. Yes, this nurtures my joy, but does it also limit my understanding of the world around me? Is it a form of misinformation?
“Disinformation fills broadcast channels and social media channels with false information to overload a system, so you can’t discern fact from fiction. Truth from a non-truth,” said Ted Schlein, Chairman and General Partner at Ballistic Ventures/Kleiner Perkins, while moderating last week’s panel, Misinformation Is the New Malware.
In large part, the wide reach of misinformation campaigns is made possible through the use of AI technologies, as University of California, Berkeley Professor Hany Farid pointed out to attendees of the session “Face the Music: Cybersecurity and the Music Industry.” Farid noted that one’s ability to create false information, images, or content has little impact if it can only be shared with a handful of friends. His larger point was that “traditional social media is still a big part of our problem today,” and human beings continue to feed these platforms with data.
Headlines this week echoed the challenges of misinformation and disinformation. A news story in The Atlantic warned that AI Is About to Make Social Media (Much) More Toxic. CyberScoop reported that US top officials fear AI could exacerbate the spread of disinformation.
At EmTech Digital this week, former Google AI researcher Geoffrey Hinton, widely referred to as the “godfather of AI,” explained why he resigned from his role at Google. Hinton said that he has changed his mind “about the relationship between the brain and the kind of digital intelligence we’re developing.”
Hinton is not alone in his concerns. We are approaching the “height of hysteria” in the tech panic cycle described by the Center for Data Innovation. I have to imagine that throughout history human beings have found themselves in this position before. Innovation designed with the best of intentions will inevitably have unintended consequences, but according to my LinkedIn network, the sky isn’t falling, yet.
So let’s take a look at what else happened across the cybersecurity industry this week.
May 5: Cybersecurity Dive reported, “A federal district court judge sentenced Joseph Sullivan, former chief security officer at Uber, to three years probation after he was convicted of concealing a 2016 ransomware attack against the company while the ride sharing firm was under investigation by the Federal Trade Commission.”
May 5: Google announced a new certificate program to provide threat detection skill development to prepare participants with no cybersecurity experience for entry-level positions.
May 4: The New York Times reported, “The White House on Thursday pushed Silicon Valley chief executives to limit the risks of artificial intelligence, in the administration’s most visible effort to confront rising questions and calls to regulate the rapidly advancing technology.”
May 4: Newly proposed bipartisan legislation seeks to provide more cybersecurity resources to commercial satellite owners and operators.
May 3: “Google and Apple have announced jointly submitting a proposed industry specification to aid the fight against unwanted tracking via Bluetooth location-tracking devices,” Infosecurity Magazine reported.
May 2: Cyberattacks continue to plague municipalities along the East Coast, from Massachusetts to South Carolina.
May 1: The Hacker News reported, “An analysis of over 70 billion DNS records has led to the discovery of a new sophisticated malware toolkit dubbed Decoy Dog targeting enterprise networks.”