Countering AI-Augmented Social Engineering Part 1


Posted by Isla Sibanda

Imagine receiving an email from your CEO, with an attached voice message confirming its urgency. The request seems reasonable, the tone unmistakable.

Yet, it’s a complete fabrication: a deepfake designed to exploit your trust and steal critical information. This is the chilling reality of AI-augmented social engineering, a battleground where advanced algorithms blur the line between truth and deception. If it almost happened to a Ferrari executive, why would you be immune?

We must face reality: with attackers leveraging AI to amplify precision and scale, defending against these threats requires not just awareness but a dynamic, multi-layered strategy that anticipates and counters their every move. Let’s take a deeper dive into this conundrum in part one of this blog.

The Evolution of Social Engineering with AI

Traditional social engineering relies heavily on psychological manipulation—phishing emails, pretexting, and baiting, among others. However, the integration of AI has supercharged these tactics. 

According to the Harvard Business Review, AI systems can generate highly convincing phishing messages tailored to individual targets by analyzing vast troves of personal data harvested from public and private sources. These AI-driven attacks are not only more scalable but also more difficult to detect due to their sophistication.
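To see why detection gets harder, consider a toy sketch of my own (not any vendor’s actual filter): the kind of keyword-based check legacy mail gateways rely on catches a generic mass-mail phish but scores a personalized, AI-tailored message as clean. The marker list and scoring are illustrative assumptions only.

```python
# Hypothetical sketch: a naive keyword-based phishing filter.
# Marker list and thresholds are illustrative assumptions only.

GENERIC_PHISH_MARKERS = [
    "dear customer", "verify your account", "click here immediately",
    "your account will be suspended", "lottery",
]

def naive_phish_score(message: str) -> int:
    """Count generic phishing phrases, the signals legacy filters key on."""
    text = message.lower()
    return sum(marker in text for marker in GENERIC_PHISH_MARKERS)

# A classic mass-mail phish trips the filter...
bulk_phish = "Dear customer, click here immediately to verify your account."
print(naive_phish_score(bulk_phish))  # 3 -> flagged

# ...but an AI-tailored message, built from scraped personal details,
# contains none of the generic markers and sails through.
tailored_phish = (
    "Hi Dana, great seeing you at the Q3 offsite. Before Friday's "
    "board prep, can you resend the vendor contract you mentioned?"
)
print(naive_phish_score(tailored_phish))  # 0 -> not flagged
```

The tailored message trips none of the generic markers, and that gap is exactly what AI-generated phishing exploits.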

Consider deepfake technology. Once limited to rudimentary voice or video alterations, AI-powered deepfakes can now create hyper-realistic simulations of people’s voices and appearances.

Attackers can impersonate executives to authorize fraudulent transactions or coerce employees into divulging sensitive information. The potential for damage is staggering, as the trust that individuals place in their visual and auditory senses is weaponized against them.

Identifying the Threat Landscape

Perhaps most alarmingly, AI models can parse and analyze social media profiles, emails, and even voice recordings to craft messages that feel personal and legitimate.

These new-age phishing campaigns no longer rely on generic messages but rather exploit specific details about an individual’s job, habits, or recent activities to increase the likelihood of success.

Using that same data, supplemented with a few images, voice notes, and videos, deepfake technology enables attackers to reach new levels of deception.

For now, individuals have less to worry about, given the difficulty, time, and expense involved in producing quality deepfakes. As Chris Taylor, Principal Consultant at Taksati Consulting, said in his RSAC webcast, “these attacks involve emails that target high-level individuals within organizations but can be used against anyone, eroding trust in digital communication.”

Similarly, synthetic identities—fabricated personas created using AI—can bypass traditional identity verification systems, enabling fraud and infiltration.

AI-Augmented Social Bots

If you think it is alarming that 66% of organizations suffered a successful attack in 2024, wait until autonomous social bots appear. Masquerading as humans, they can infiltrate organizations’ internal communication channels or social networks, gradually gaining trust before exploiting it.

These bots can engage in prolonged interactions, gathering intelligence or spreading disinformation. The worst part? They’ll be able to do it without external control, instructions, or oversight. 

Attackers now use AI to identify high-value targets and craft ransomware strategies that incorporate social engineering. But once social bots become more realistic and autonomous, any hacker will be able to sit back and let the bots run the entire operation.

From analyzing ransom notes and language markers to issuing timely reminders, these AI agents will be essential in protecting systems, too. What hackers use for malicious purposes, defenders must turn around to anticipate and overcome their efforts.
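As a toy illustration of that defensive flip side, here is a deliberately simplified sketch of a language-marker check: it compares an incoming message’s wording against a sender’s historical baseline and flags large deviations for human review. The baseline data, the single feature (share of never-before-seen vocabulary), and the 0.5 threshold are all illustrative assumptions, far simpler than a production stylometry model.

```python
# Hypothetical sketch: flag messages whose style deviates from a
# sender's historical baseline. Feature and threshold are illustrative.

def vocab(texts: list[str]) -> set[str]:
    """Collect the set of lowercase words used across past messages."""
    return {word for text in texts for word in text.lower().split()}

def style_deviation(message: str, history: list[str]) -> float:
    """Fraction of the message's words never seen from this sender."""
    baseline = vocab(history)
    words = message.lower().split()
    if not words:
        return 0.0
    unseen = [w for w in words if w not in baseline]
    return len(unseen) / len(words)

# Past messages from the "CEO" establish a rough baseline.
history = [
    "Can you send the weekly numbers before standup?",
    "Thanks, let's sync on the roadmap tomorrow.",
]

suspect = "Kindly remit an urgent wire transfer to the account herein."
if style_deviation(suspect, history) > 0.5:  # assumed threshold
    print("Flag for human review: style deviates from sender baseline")
```

A real system would draw on far richer signals (phrasing, timing, metadata), but the principle is the same: let AI learn each sender’s language markers and surface likely impersonations to a human.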

In this first part, we’ve explored the evolving landscape of AI-augmented social engineering, highlighting the increasing sophistication of these attacks and the potential for significant harm.

Stay tuned for part two, where we will delve into the crucial steps organizations and individuals can take to safeguard themselves against these emerging threats.

Contributors

Isla Sibanda
Freelance Writer, Machine Learning & Artificial Intelligence

Tags: social engineering, artificial intelligence / machine learning, phishing, fraud, hackers & threats, identity theft


