Social Engineering in the AI Era: Empower Your Digital Defense


Posted by Venkat Viswanathan

A groundbreaking innovation reshapes our world every few years and leaves a lasting mark on future generations. From the dawn of the web to the rise of mobile and cloud computing, each advancement has opened new possibilities that enrich our lives. Today, we stand at the cusp of another paradigm shift driven by artificial intelligence, ushering in a new era of technological transformation. These innovations are a double-edged sword: they create opportunities for businesses while also introducing a new layer of cybersecurity risks and challenges. Rapid advances in computing power, the abundance of data, and ongoing innovation in large language models (LLMs) now allow organizations and individuals worldwide to harness AI's transformative potential. At the same time, breakthroughs in AI are fueling an arms race between cybersecurity defenders and social engineering scammers.

AI Supercharges Social Engineering Attacks

Scalability and Automation

The Cybersecurity and Infrastructure Security Agency (CISA) states that 90% of successful cyberattacks begin with phishing, the most common social engineering technique. Phishing uses deceptive messages and fraudulent websites to pose as a trusted party and trick victims into providing sensitive information. AI-driven tools can generate phishing emails, messages, and calls at a pace far beyond human effort. Phishing campaigns that were once labor intensive, which limited their reach, now let bad actors target far more people with minimal effort and multiply their chances of success. AI-powered chatbots can operate 24/7 and interact with many people simultaneously, significantly increasing the efficiency of these targeted campaigns.

Hyper-Personalization

Advances in generative AI have made social engineering attacks more convincing and harder to detect. AI tools can mine data sources such as social media, public records, and leaked credentials to craft highly personalized messages, and they have become exceptionally good at generating human-like text that makes attackers seem authentic and trustworthy. According to a recent IBM Identity Fraud Report, there has been a 3000% increase in deepfakes, as attackers leverage AI tools to create realistic audio and video of real individuals and impersonate executives, colleagues, or public figures. These impersonations can trick victims into revealing information, transferring money, or granting access to systems.

Lower Barrier to Entry

Recently, the FBI published a press release urging individuals and businesses to remain vigilant against the evolving threat landscape of AI-powered cybercrime. Rapid innovation has made sophisticated generative AI tools widely available and easy to use, enabling advanced social engineering attacks and significantly expanding the pool of potential threat actors.

Protecting Your Identity in the Digital World

Cybercriminals use AI to augment their schemes, increasing the speed, scale, and automation of cyberattacks. Identity-driven security plays a crucial role in mitigating social engineering attacks by verifying and validating the identity of the users and entities involved in digital interactions.

Here are a few recommendations that can help organizations and individuals take control of their digital security:

Phishing-Resistant Authentication and Verification Mechanisms

While any multi-factor authentication adds a layer of defense beyond passwords alone, organizations should adopt phishing-resistant authentication factors such as passkeys and FIDO/WebAuthn authenticators. These authenticators use cryptographic verification to establish a secure binding between the end user and the legitimate website. Because the credential is tied to the legitimate site, a phishing-resistant authenticator will not work on a look-alike clone an attacker creates to trick users and steal their credentials. On January 16, 2025, the White House published an executive order on strengthening and promoting innovation in the nation's cybersecurity. The order explicitly calls for prioritizing investments to roll out phishing-resistant authentication across all Federal Civilian Executive Branch (FCEB) agency systems and users.
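To make that origin binding concrete, here is a minimal, browser-side sketch of a passkey sign-in using the WebAuthn API. It is illustrative only: the /webauthn/options and /webauthn/verify endpoints and the fields the server returns are assumptions, not a complete relying-party implementation.

```typescript
// Illustrative sketch of a passkey sign-in in the browser (WebAuthn API).
// The /webauthn/* endpoints and server-supplied fields are assumptions.
async function signInWithPasskey(): Promise<void> {
  // 1. Fetch a one-time, server-generated challenge (hypothetical endpoint).
  const options = await (await fetch("/webauthn/options", { method: "POST" })).json();

  // 2. Ask the authenticator to sign the challenge. The browser scopes this
  //    request to the page's origin, so a look-alike phishing site cannot
  //    obtain an assertion that verifies against the legitimate relying party.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: Uint8Array.from(atob(options.challenge), (c) => c.charCodeAt(0)),
      rpId: options.rpId,            // e.g. "example.com" (assumed server value)
      userVerification: "preferred", // prompt for a biometric or device PIN
      allowCredentials: [],          // empty list lets the user pick any passkey
    },
  })) as PublicKeyCredential | null;
  if (!assertion) throw new Error("No passkey assertion returned");

  // 3. Send the result back for server-side signature verification. A real flow
  //    also sends clientDataJSON, authenticatorData, and the signature.
  await fetch("/webauthn/verify", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ credentialId: assertion.id }),
  });
}
```

The phishing resistance comes from step 2: the assertion is bound to the origin the browser actually observed, so anything an attacker captures on a cloned domain is useless against the real site.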

In addition to phishing-resistant authenticators, an identity verification process helps confirm that the person enrolling for access is who they claim to be. Traditional methods often rely on passwords, security questions, or email verification, all of which can be compromised through social engineering. Identity verification instead uses signals such as fingerprint scanning, facial recognition, and liveness detection, which are unique to the individual and much harder to fake, improving both security and the user experience.
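As a rough illustration of how such signals might be combined during enrollment, here is a small decision sketch. The IdvResult shape and the thresholds are assumptions for illustration, not any specific vendor's API.

```typescript
// Illustrative only: combining identity verification signals at enrollment.
// The result shape and thresholds are assumptions, not a vendor API.
interface IdvResult {
  documentAuthentic: boolean; // government ID passed forgery checks
  faceMatchScore: number;     // selfie vs. ID photo similarity, 0..1
  livenessScore: number;      // presentation-attack (spoofing) check, 0..1
}

type EnrollmentDecision = "approve" | "manual_review" | "reject";

function decideEnrollment(result: IdvResult): EnrollmentDecision {
  if (!result.documentAuthentic) return "reject";

  // Thresholds are illustrative; real deployments tune them to risk tolerance.
  const strongSignals = result.faceMatchScore >= 0.9 && result.livenessScore >= 0.9;
  const weakSignals = result.faceMatchScore < 0.7 || result.livenessScore < 0.7;

  if (strongSignals) return "approve";
  if (weakSignals) return "reject";
  return "manual_review"; // borderline cases go to a human analyst
}

// Example: a strong face match with a passing liveness check is approved.
console.log(decideEnrollment({ documentAuthentic: true, faceMatchScore: 0.95, livenessScore: 0.92 }));
```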

Robust Fraud Prevention Controls

A comprehensive approach to combating social engineering requires a multi-layered strategy of security technology controls. Email and web filters block spam and phishing attempts before they reach the inbox and prevent access to malicious websites. Endpoint security tools such as anti-virus and anti-malware software stop malicious payloads from executing and detect suspicious activity on individual devices. The ability to leverage data is a key differentiator in today's complex and evolving social engineering threat landscape: it provides the insights needed to identify, analyze, and respond to these threats. By analyzing user behavior and flagging anomalies that may indicate an attack, organizations can detect threats rapidly.
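As a simple illustration of behavior-based detection, the sketch below flags sign-ins that deviate from a user's own baseline, such as a never-before-seen country or an unusual burst of failed attempts. The event and baseline fields and the three-standard-deviation threshold are assumptions for illustration, not a production detector.

```typescript
// Illustrative sketch (not a production detector): flag sign-ins that deviate
// from a user's own historical behavior. Field names are assumptions.
interface SignInEvent {
  userId: string;
  country: string;
  failedAttempts: number; // failed attempts preceding this successful sign-in
}

interface UserBaseline {
  knownCountries: Set<string>;
  meanFailures: number;
  stdDevFailures: number;
}

function isAnomalous(event: SignInEvent, baseline: UserBaseline): boolean {
  // Rule 1: sign-in from a country the user has never authenticated from.
  const newLocation = !baseline.knownCountries.has(event.country);

  // Rule 2: failed-attempt count far above the user's historical average
  // (simple z-score; the 3-standard-deviation threshold is an assumption).
  const z = baseline.stdDevFailures > 0
    ? (event.failedAttempts - baseline.meanFailures) / baseline.stdDevFailures
    : 0;
  const unusualFailures = z > 3;

  return newLocation || unusualFailures;
}

// Example: a sign-in from a new country after a burst of failures gets flagged
// for step-up authentication or analyst review.
console.log(isAnomalous(
  { userId: "u123", country: "BR", failedAttempts: 9 },
  { knownCountries: new Set(["US"]), meanFailures: 0.4, stdDevFailures: 0.9 },
)); // true
```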

Education and Awareness

A single lapse in judgment or moment of carelessness can render any technological control ineffective. Bad actors design social engineering attacks to exploit human psychology and behavior rather than technical vulnerabilities. Therefore, it is essential to empower individuals through education and training to identify the common signs of phishing emails, suspicious phone calls, and other social engineering attempts. Education fosters a culture of healthy skepticism, encouraging users to question requests for information and verify their authenticity before acting ("Trust but Verify").

As AI technology evolves and becomes more accessible, social engineering attacks will become increasingly sophisticated, scalable, and difficult to detect. However, organizations and individuals can defend against these emerging threats through robust technology controls and ongoing security awareness education.

Contributors
Venkat Viswanathan

Group Product Manager, Okta

Machine Learning & Artificial Intelligence

social engineering, Artificial Intelligence / Machine Learning, phishing, Fraud Protection Technologies, fraud, identity management & governance, authentication

Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

