CISO Perspectives: Tackling the Rise of AI-Powered Cyber Attacks


Posted by Laura Robinson

What are Fortune 1000 CISOs doing about the increasing use of artificial intelligence (AI) in cyber-attacks?

CISOs are having a lot of conversations about this topic. It has even become a staple at board meetings, according to the RSAC Executive Security Action Forum (ESAF) community of Fortune 1000 CISOs.1 This week, we’ll explore how CISOs at the forefront of AI plan to protect against AI-enabled threats by building AI-powered defenses. 

This blog series is based on firsthand experiences shared at invitation-only ESAF sessions by Fortune 1000 CISOs. We’ve summarized those discussions for the benefit of the wider security community. Details have been anonymized to preserve confidentiality. 

Common AI Cybersecurity Threats

There’s no doubt that attackers are already using AI. In a recent survey of Fortune 1000 CISOs,2 RSAC found that 72% said they have already seen threat actors use GenAI against their enterprise (Figure 1).

[Figure 1: Survey results — Fortune 1000 CISOs reporting that threat actors have used GenAI against their enterprise]

CISOs in the ESAF community know that 5 to 10 years from now, they will face a vastly different threat landscape because of AI-enabled threats. It is difficult to predict precisely what will change, or when, so they must plan amid uncertainty.

Top Concern is AI-Powered Automated Hacking

Of the emerging attack techniques, a top concern is automated hacking. Powered by AI, attacks could unfold much faster than human security teams could respond.

Currently, sophisticated attacks are generally conducted in stages that can take days, weeks, or months. This gives security teams a window of time to detect an attack and contain it. At some point in the future, with AI-powered automated hacking, an attack might be completed in milliseconds.

To defend against this kind of attack, the end goal is to deploy defenses that respond instantaneously, matching the speed of the malicious actor’s AI-driven attack. Human response is too slow; security teams will need an autonomous system working independently and reacting instantly to threats. Getting there will require building increasingly powerful AI-enabled defenses over time.

A machine-versus-machine matchup requires security teams to achieve autonomous AI systems. One CISO said they consider AI systems to have four levels of capability: analysis, assistance, augmentation, and autonomy. In the near term, they see most security teams deploying AI systems with analysis and assistance capabilities. Higher-level capabilities are longer-term goals.
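The four-level model can be pictured as an ordered scale, with riskier actions gated behind higher capability levels. The sketch below is a hypothetical illustration of that idea (the policy function and its threshold are assumptions, not the CISO’s actual framework):

```python
from enum import IntEnum

class AICapability(IntEnum):
    """The four capability levels one CISO described, ordered
    from least to most independent."""
    ANALYSIS = 1      # AI summarizes and classifies; humans act
    ASSISTANCE = 2    # AI recommends actions; humans approve each one
    AUGMENTATION = 3  # AI acts within guardrails; humans supervise
    AUTONOMY = 4      # AI detects and responds on its own

def may_auto_contain(level: AICapability) -> bool:
    """Hypothetical policy gate: permit automatic containment only
    once the deployed system reaches augmentation or above."""
    return level >= AICapability.AUGMENTATION
```

Ordering the levels as integers makes the near-term picture explicit: systems at the analysis and assistance levels fail this gate, so a human stays in the response loop.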

Building an AI-enabled Security Organization

Defending against AI-enabled threats requires building AI-enabled defenses. ESAF CISOs are taking steps to apply AI across security processes including:

Research

Look at your security processes to determine where you should apply AI first and where you will get the greatest benefits.
AI Training

Provide basic AI training for everyone and advanced training for specific technical teams. One CISO described “tons of training” as the number one way to prepare the security team for its mandate of becoming an AI-enabled organization. “Getting people ready is going to be the most difficult and the most impactful thing to get right. This cultural transformation needs to happen to get people excited, confident, and enabled to operate in this environment.”

  • Training for the whole team includes building a basic level of proficiency around AI. It also addresses the fear around job security, since it lays the groundwork for developing skills that will enable workers to transition into new roles.
  • Training for team members involved in securing AI includes threat modeling and secure design principles for securing AI data platforms, ML models, and AI-enabled applications.
  • Training for team members involved in implementing AI includes data management, data pipelines for ML, and building, training, testing, and deploying ML models.
Security Data Preparation

Standardize and optimize data formats and implement modern governance and data pipeline management so that data such as security telemetry can be used in AI tools. One CISO contends that data preparation is even more important than training. Be aware: CISOs who have begun data preparation at scale say the task is a bigger impediment than they had expected.
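Standardizing telemetry typically means mapping each source’s native field names onto one common schema before the data reaches AI tooling. A minimal sketch of that idea follows; the source names, field mappings, and schema here are invented for illustration (real telemetry would more likely target an open schema such as OCSF):

```python
from datetime import datetime, timezone

# Hypothetical field mappings for two log sources; real vendor
# schemas will differ and are usually far larger.
FIELD_MAPS = {
    "firewall": {"src": "source_ip", "dst": "dest_ip", "ts": "timestamp"},
    "endpoint": {"client_addr": "source_ip", "server_addr": "dest_ip",
                 "event_time": "timestamp"},
}

def normalize_event(source: str, raw: dict) -> dict:
    """Map a raw log record into a common schema so downstream
    AI tools see one consistent format regardless of source."""
    mapping = FIELD_MAPS[source]
    event = {common: raw[native] for native, common in mapping.items()
             if native in raw}
    event["source_type"] = source
    # Normalize epoch timestamps to UTC ISO 8601 for consistent ordering.
    if "timestamp" in event:
        event["timestamp"] = datetime.fromtimestamp(
            float(event["timestamp"]), tz=timezone.utc).isoformat()
    return event
```

For example, `normalize_event("firewall", {"src": "10.0.0.1", "dst": "10.0.0.2", "ts": 1718000000})` yields a record keyed by `source_ip`, `dest_ip`, and an ISO 8601 `timestamp`, identical in shape to a normalized endpoint event.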
AI Assessment

Develop a robust assessment methodology to evaluate AI tools that you build or consume. Ensure that AI tools are trustworthy so you can apply AI to your most sensitive security processes and decision support systems.
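One simple way to make such a methodology repeatable is a criteria gate: a tool is approved only when every criterion is met. The criteria below are hypothetical examples, not a published checklist (a real methodology would draw on frameworks such as NIST’s AI Risk Management Framework):

```python
# Hypothetical assessment criteria for illustration only.
CRITERIA = [
    "training data provenance documented",
    "model outputs explainable to analysts",
    "failure modes and fallback behavior defined",
    "access to sensitive data logged and audited",
]

def assess_tool(answers: dict) -> tuple:
    """Return (approved, unmet criteria). Approve only when every
    criterion is satisfied -- a deliberately conservative gate for
    tools that touch sensitive security processes."""
    unmet = [c for c in CRITERIA if not answers.get(c, False)]
    return (not unmet, unmet)
```

Returning the unmet criteria alongside the verdict gives tool owners a concrete remediation list rather than a bare rejection.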
 

CISO Perspectives Series

Take advantage of all the hard-won experience of Fortune 1000 CISOs as they embark on their GenAI journeys. Check out our earlier posts on the Risks of Rapid GenAI Adoption, GenAI Governance, Securing GenAI Systems, and Transforming Security with GenAI.

Read more from the RSAC ESAF community of Fortune 1000 CISOs in the CISO Perspectives series.

___________________________________________

1 ESAF is an international community. It consists of CISOs from Fortune 1000 companies and equivalent-sized organizations.

2 Survey of 100 Fortune 1000 CISOs conducted by RSA Conference for an internal research study in Q2 2024.


Contributors
Laura Robinson

ESAF Program Director, RSA Conference

Machine Learning & Artificial Intelligence


Blogs posted to the RSAConference.com website are intended for educational purposes only and do not replace independent professional judgment. Statements of fact and opinions expressed are those of the blog author individually and, unless expressly stated to the contrary, are not the opinion or position of RSA Conference™, or any other co-sponsors. RSA Conference does not endorse or approve, and assumes no responsibility for, the content, accuracy or completeness of the information presented in this blog.

