In the aftermath of the New Hampshire primary robocall that cloned the voice of President Joe Biden and advised voters not to bother showing up to the polls, the Federal Communications Commission (FCC) has outlawed AI-generated robocalls to combat the threat of deepfakes that could disrupt the integrity of the 2024 elections.
These concerns are not new, but generative-AI tools in the hands of malicious actors have the potential to produce “rampant disinformation, deepfakes, and the harassment of election officials,” according to a CNN Politics exclusive. The story reports on unnamed senior national security officials who met in the White House Situation Room in December to test simulated scenarios “of any federal response to election-related chaos.”
Writing in The Conversation, Tom Felle, Associate Professor of Journalism at the University of Galway, opined, “There is a real danger that unless we act now to protect the public these issues will only be exacerbated by the threats posed by AI, Russian disinformation campaigns, and the invasive use of technology to target voters in the coming months.”
Taking a look into the minds of election officials heading into 2024, CyberScoop’s Derek B. Johnson wrote, “Emerging technologies that might supercharge disinformation, a lack of resources, interference by foreign governments and widespread hostility from voters still suspicious of the electoral process are just some of the challenges they are expected to face.” To that end, the Cybersecurity and Infrastructure Security Agency (CISA) this week launched its new #Protect2024 webpage, rich with resources for state and local election officials.
Given that nearly half the world will go to the polls this year, election security is a paramount concern the world over, and tech giants are stepping in to help. Annette Kroeber-Riel, Google’s VP of Government Affairs and Public Policy for Europe, said, “We are supporting the European Parliamentary Elections by surfacing high-quality information to voters, safeguarding our platforms from abuse and equipping campaigns with the best-in-class security tools and training.”
To learn more about how local, state, tribal, and territorial government agencies can defend against cyberthreats, visit our library. Now let’s look at what else made industry headlines this week.
Feb. 9: Security Week reported, “A congressional investigation finds that US venture capital firms invested billions in Chinese technology companies in semiconductor, AI and cybersecurity, sectors that are a threat to national security.”
Feb. 9: Kenya reportedly saw a massive spike in cyberthreats, which officials think could be a function of recent enhancements to the country’s threat monitoring capabilities.
Feb. 8: Infosecurity Magazine reported, “The personal information of 33 million French citizens could be exposed after two French health insurance operators suffered a data breach in early February.”
Feb. 7: CISA warned critical infrastructure organizations that Chinese state-sponsored actors have compromised some major US critical infrastructure.
Feb. 7: Wired’s Andy Greenberg reported, “On Wednesday, cryptocurrency-tracing firm Chainalysis published new numbers from its annual crime report showing that ransomware payments exceeded $1.1 billion in 2023, based on its tracking of those payments across blockchains.”
Feb. 6: Google’s Threat Analysis Group published a report detailing the ways in which spyware technology is being used to exploit vulnerabilities in consumer devices.
Feb. 5: Troy Hunt broke down how a leaky API resulted in “a deluge of personal data” at Spoutible.
Feb. 5: Bleeping Computer reported, “Secretary of State Antony J. Blinken announced today a new visa restriction policy that will enable the Department of State to ban those linked to commercial spyware from entering the United States.”