AI's Biggest Cybersecurity Threat Isn't New Attack Types but Scaling Existing Ones


Posted by Andrey Suzdaltsev

In 2025, artificial intelligence hasn't reinvented cybercrime—it has industrialized it. Rather than spawning completely new attack vectors, AI empowers threat actors to execute familiar cyberthreats faster, more precisely, and at unprecedented scale. This shift, subtle yet significant, has fundamentally reshaped the cybersecurity landscape.

Traditional cyberthreats like phishing, ransomware, social engineering, and Distributed Denial-of-Service (DDoS) have existed for decades, but AI has drastically elevated their potency. Cybercriminal groups now use machine learning not only to automate routine tasks but to continuously refine attacks, making detection and prevention increasingly challenging. Sherrod DeGrippo, Director of Threat Intelligence at Microsoft, highlights this reality: "AI is just another tool attackers use to operationalize their campaigns, allowing them to move faster and with greater precision." [1],[2]

AI-Powered Phishing at Scale

Phishing, a tactic that traditionally relied on simple deception, has seen its effectiveness multiplied by AI. According to the US Treasury, financial institutions have observed a significant increase in the scale and sophistication of phishing and social engineering attacks enabled by AI, with threat actors leveraging generative AI to craft more convincing and targeted messages. Supporting this, cybersecurity firm SlashNext reported a 202% surge in phishing messages in the second half of 2024, underscoring how AI is fueling a dramatic escalation in both the volume and realism of these attacks. [3],[4] Hacktivists like SiegedSec have used AI-crafted spear-phishing emails to convincingly mimic senior executives, successfully targeting organizations based on ideological motives.

The FBI warns that AI-generated phishing emails are now so sophisticated that they closely mimic human language and tone, often appearing more legitimate than actual corporate communications. [5] This eliminates many of the traditional linguistic red flags—such as awkward phrasing or grammatical errors—that organizations once relied on to spot malicious emails. As a result, organizations must now adopt advanced behavioral analytics tools to detect subtle irregularities in communication patterns.
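To make the behavioral-analytics idea concrete, here is a minimal, purely illustrative sketch (not any vendor's product): it compares a message's simple stylometric features against a sender's historical baseline and flags large deviations. The feature set and z-score threshold are assumptions for demonstration; real systems use far richer signals.

```python
# Minimal stylometric baseline check: flags a message whose writing style
# deviates sharply from a sender's historical average. Illustrative only.
import statistics


def style_features(text: str) -> dict:
    """Two toy features: average word length and average sentence length."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sent_len": len(words) / max(len(sentences), 1),
    }


def is_anomalous(history: list[str], candidate: str, threshold: float = 2.0) -> bool:
    """Return True if any feature of the candidate sits more than
    `threshold` standard deviations from the sender's baseline."""
    feats = [style_features(t) for t in history]
    cand = style_features(candidate)
    for key in cand:
        vals = [f[key] for f in feats]
        mean = statistics.mean(vals)
        stdev = statistics.pstdev(vals) or 1.0  # avoid division by zero
        if abs(cand[key] - mean) / stdev > threshold:
            return True
    return False
```

A terse, casual sender whose "voice" suddenly turns into long, formal prose would trip this check even though the message contains no spelling mistakes, which is precisely the gap AI-written phishing exploits.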

AI and the Ransomware Economy

Ransomware operations have been transformed by AI and the ransomware-as-a-service model, which lets operators outsource tasks, including victim negotiations. AI-powered bots now automate those negotiations around the clock, personalizing ransom notes, exploiting psychological triggers, and predicting the likelihood of payment. This scalable extortion mirrors legitimate business practice, complete with automation and "victim support." Defenders, in turn, use AI to analyze threats, simulate negotiations, and guide decisions. With ransom demands rising from thousands to millions of dollars, AI-assisted negotiation has become vital on both sides, creating a high-stakes digital cat-and-mouse game between attackers and defenders. [6]

Social Engineering and Deepfake Realism

AI has profoundly transformed social engineering attacks, introducing highly realistic deepfakes capable of undermining trust at a societal level. According to Darktrace, social engineering attacks increased by 135% following the widespread availability of generative AI tools. Pro-Russian groups have effectively used deepfake videos to spread panic during critical events, demonstrating AI’s capability for mass psychological manipulation. [7],[8] To counteract these threats, organizations must adopt continuous employee training specifically designed to recognize synthetic media and verify critical communications rigorously.

DDoS Attacks: Adaptive and Precise

DDoS attacks have similarly evolved from simple overload tactics into highly adaptive, AI-driven operations. In recent campaigns such as #OpIsrael, attackers employed AI-controlled botnets to dynamically reroute traffic, defeating traditional mitigation strategies and disabling targeted services with surgical accuracy. Combating such threats requires defensive AI capable of real-time, adaptive mitigation that keeps pace with evolving attack patterns.
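One simple ingredient of adaptive mitigation is a detection threshold that learns from recent benign traffic rather than using a fixed cutoff. The sketch below is a toy illustration under that assumption (an exponentially weighted moving average baseline with a spike multiplier); production mitigators combine many such signals.

```python
# EWMA-based adaptive threshold for request rates: the alert limit tracks
# recent traffic, so a sudden surge is flagged even though normal volume
# drifts over the day. Illustrative sketch, not a production mitigator.
class AdaptiveRateGuard:
    def __init__(self, alpha: float = 0.2, factor: float = 3.0):
        self.alpha = alpha      # EWMA smoothing weight for the baseline
        self.factor = factor    # multiples of baseline that count as a spike
        self.baseline = None    # learned "normal" requests per second

    def observe(self, requests_per_sec: float) -> bool:
        """Return True if this sample looks like an attack spike."""
        if self.baseline is None:
            self.baseline = requests_per_sec  # first sample seeds baseline
            return False
        spike = requests_per_sec > self.factor * self.baseline
        if not spike:  # only learn from traffic judged benign
            self.baseline = (1 - self.alpha) * self.baseline + self.alpha * requests_per_sec
        return spike
```

Because the baseline updates only on benign samples, a sustained flood cannot "teach" the guard that attack-level volume is normal, which is one defense against the slow-ramp tactics adaptive botnets use.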

Democratization of Advanced Cybercrime

Perhaps AI's most troubling effect is democratizing cybercrime, reducing the barriers that once limited sophisticated attacks to highly skilled adversaries. Now, "AI-as-a-service" platforms are readily available on dark web marketplaces, empowering even novice hackers to deploy advanced attacks with minimal technical expertise. Amateur groups like GhostLulz exploit subscription-based AI malware tools to conduct attacks previously unimaginable for less-skilled actors.

The explosive growth of AI-enabled cybercrime marketplaces—featuring services like "DeepPhish Pro"—highlights this troubling democratization. Small activist groups can now launch impactful phishing or ransomware campaigns to further their ideological objectives, significantly expanding the threat landscape organizations must navigate. In response, proactive intelligence gathering and continuous threat monitoring become critical defensive priorities.

Convergence of Cybercriminal and Hacktivist Operations

AI also fuels convergence between financially motivated cybercriminals and ideologically driven hacktivists. Hybrid operations like ContiAI exemplify this trend, combining commercial ransomware franchising with ideological objectives, blurring traditional threat classifications. Such alliances complicate defensive strategies, necessitating comprehensive, behavior-focused threat detection technologies like runtime application self-protection (RASP).

Collective Resilience Through Human-AI Collaboration

Addressing these amplified threats requires a cybersecurity approach that blends advanced AI analytics with human insight. Organizations must shift from reactive defenses to proactive threat hunting, deploying technologies like federated machine learning to detect subtle anomalies before threats materialize. Zero Trust architectures, continuously adaptive identity verification, and sophisticated SOAR platforms should form the backbone of modern security strategies.
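The appeal of federated learning in this setting is that organizations can pool what their detectors have learned without pooling raw telemetry. The toy sketch below shows just the aggregation step (federated averaging of per-organization summary statistics); the function names and the single-statistic model are illustrative assumptions, not a real framework's API.

```python
# Toy federated averaging: each organization computes a local summary of its
# own data and shares only (mean, sample count); the raw telemetry never
# leaves the organization. Schematic illustration of the idea only.
def local_update(samples: list[float]) -> tuple[float, int]:
    """Run locally by each participant; only the summary is shared."""
    return sum(samples) / len(samples), len(samples)


def federated_average(updates: list[tuple[float, int]]) -> float:
    """Server-side aggregation, weighted by each participant's sample count."""
    total = sum(n for _, n in updates)
    return sum(mean * n for mean, n in updates) / total
```

In a real deployment the shared quantity would be model weights or gradients rather than a single mean, but the privacy property is the same: the aggregator sees only updates, never the underlying events.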

However, technology alone isn't enough. The effectiveness of cybersecurity operations increasingly hinges on human-AI collaboration, combining algorithmic detection capabilities with human judgment and contextual awareness. Organizations should cultivate integrated security teams trained to leverage AI tools effectively, especially during high-pressure cyberattack scenarios.

Finally, the rapid evolution of AI-enabled threats demands collective action. Strengthening collaboration across industry sectors through robust threat intelligence sharing communities and coordinated defensive efforts is essential. Only through sustained industry-wide cooperation can organizations hope to effectively counteract the unprecedented speed, precision, and scale of AI-driven cyberthreats.

Contributors
Andrey Suzdaltsev

CEO, Co-Founder, Brightside Technologies SA

Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSAC™ Conference, or any other co-sponsors. RSAC™ Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

