How are CISOs at Fortune 1000 companies approaching generative AI (GenAI)?
Recently, GenAI has been one of the most intensely discussed topics in the RSAC Executive Security Action Forum (ESAF) community of Fortune 1000 CISOs.1 The confidential nature of ESAF sessions enables participants to speak candidly about their firsthand experiences.
In this blog series, we will share highlights of those conversations. For ESAF CISOs, GenAI creates three security imperatives:
1. Protect the use of GenAI
2. Protect with GenAI
3. Protect against GenAI
Over the next four weeks, you’ll read about their early strategies to help their companies deploy GenAI safely across the business, to build better defenses with GenAI, and to confront GenAI-enabled threats.
We’ve summarized their discussions and anonymized details to preserve confidentiality. We’ll also be sharing results from RSAC’s recent Fortune 1000 survey on GenAI.
What Is Generative AI (GenAI)?
Generative AI (GenAI) is a form of artificial intelligence that can create new content such as natural-sounding text, audio, images, and code. GenAI algorithms are trained on large amounts of data. When given a natural-language prompt, a GenAI system predicts, based on its training data, the output a human would produce.
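To make the prompt-and-response pattern concrete, here is a minimal sketch using the OpenAI Python SDK. The model name and prompt are illustrative placeholders, and any hosted LLM API would follow the same basic shape; this is not a recommendation of any particular tool.

```python
# Minimal sketch of the GenAI prompt/response pattern described above.
# Assumes the OpenAI Python SDK (pip install openai) and an API key in
# the OPENAI_API_KEY environment variable; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; substitute any available model
    messages=[
        {"role": "user", "content": "Explain generative AI in one sentence."}
    ],
)

# The model predicts the text a human might produce for this prompt,
# based on patterns learned from its training data.
print(response.choices[0].message.content)
```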
Business Use Cases for GenAI
At most large enterprises, GenAI is rapidly being adopted for business applications. In fact, RSAC’s recent Fortune 1000 survey on GenAI showed that 100% of companies plan to implement GenAI in the next 12 months for a range of business use cases.2
The top use cases that companies plan to focus on are employee productivity, software developer productivity, and customer service (see Figure 1). However, priorities could change in the coming months as companies evaluate a growing number of use cases.
Top Security Risks of GenAI Adoption
As GenAI is being rapidly adopted, CISOs play a key role in enabling their companies to attain the benefits of GenAI while effectively managing the risks. Different use cases for GenAI introduce different types of risk. For example, without adequate controls, an IT help desk system might grant access to unauthorized users, or a customer service chatbot might disclose confidential company information.
Given the range of risks that companies are facing, what risks are the most concerning to Fortune 1000 CISOs? The recent RSAC survey shows their top concerns are data leakage and unvalidated or hallucinated data being used in decision-making (Figure 2). “Hallucination” refers to the phenomenon in which GenAI systems produce information that sounds plausible but is actually false.
Potential for Unmanaged Risks in GenAI Adoption
In their recent discussions on GenAI, ESAF CISOs emphasized the potential for unmanaged risks. In addition to formally managed GenAI projects, GenAI is being used across an organization’s ecosystem. Tools such as ChatGPT are easy for individual employees to use under the radar, and GenAI features are being added to many business and consumer software applications. Organizations also face risks as suppliers and partners adopt GenAI tools for their business operations. For example, enterprise data that is shared with third parties could be at risk of leakage if a third party puts the data into GenAI systems.
Foundational Principles and Policies for Safe GenAI Use
Since GenAI is so easy for individual employees to start using, mass education regarding principles and policies is critical. ESAF CISOs discussed ways to provide an environment where employees are encouraged to use GenAI while setting a clear tone about what they are, and are not, allowed to do.
Early steps that their companies are taking include:
- Developing a document on the ethical use of AI that states the principles all employees are responsible for following, i.e., that AI usage should be beneficial, equitable, transparent, responsible, and accountable.
- Creating clear communications and training around acceptable use of public GenAI tools, for example:
  - Requiring that no proprietary, confidential, or sensitive information be submitted to public GenAI tools without corporate approval.
  - Outlining which large language models (LLMs) to use for which purposes and, in some cases, blocking access to LLMs that have not been assessed.
- Leveraging big names in the company to get the messages across, for example:
  - Having the CIO and CTO send a joint email to all employees outlining the policies.
- Implementing ways to display a reminder message about appropriate use when employees visit public GenAI websites and keeping logs of all usage (a minimal sketch of such a check appears after this list).
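The sketch below shows one hypothetical way a gateway or browser plugin might screen prompts before they reach a public GenAI tool: it blocks unassessed destinations, flags a few illustrative sensitive-data patterns, and logs every submission. The domains, patterns, and log destination are all assumptions for illustration, not a production DLP design.

```python
# Hypothetical pre-submission check for public GenAI tools: block
# unassessed destinations, scan the prompt for illustrative
# sensitive-data patterns, and log every attempt.
import logging
import re

logging.basicConfig(filename="genai_usage.log", level=logging.INFO)

# Illustrative patterns only; a real deployment would use a DLP engine.
SENSITIVE_PATTERNS = {
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal marker": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

# Placeholder allowlist of assessed GenAI tools.
APPROVED_GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com"}

def screen_prompt(user: str, domain: str, prompt: str) -> bool:
    """Return True if the prompt may be submitted; log every attempt."""
    if domain not in APPROVED_GENAI_DOMAINS:
        logging.warning("%s blocked: %s is not an assessed GenAI tool", user, domain)
        return False
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            logging.warning("%s blocked: prompt matched %s pattern", user, label)
            return False
    logging.info("%s submitted a prompt to %s", user, domain)
    return True

# Example: this submission is blocked by the "internal marker" pattern.
print(screen_prompt("jdoe", "chat.openai.com", "Summarize this CONFIDENTIAL memo"))
```

In practice, controls like this would sit in a secure web gateway or CASB rather than in standalone code, but the logic mirrors the policies above: assess the destination, screen the content, and keep an audit trail.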
Evaluating Third-Party Use of GenAI
ESAF CISOs also discussed augmenting third-party risk assessments to evaluate AI capabilities in products and services. If a vendor has added AI capabilities, the vendor should:
- Provide sufficient documentation to assess the AI’s capabilities and limitations.
- Explain the provenance of the data used to train the AI.
- Demonstrate the effectiveness of the risk management measures they put in place.
Frameworks for assessing AI in third-party products are emerging. One example is the US Office of Management and Budget’s policy on AI, which includes a section on “Managing Risks in Federal Procurement of AI.”
Third-party risk assessments should also verify that any enterprise data shared with a third party will not be put at risk through that third party’s use of GenAI tools.
New and Emerging AI Regulations
Another fundamental aspect of managing the risks of GenAI is ensuring the company’s use of GenAI complies with regulations. The global regulatory landscape is evolving quickly. Here are just a few examples of legislative initiatives that large global companies are tracking:
| Jurisdiction | Legislative Initiatives | Timeline |
| --- | --- | --- |
| Brazil | Proposed AI regulation: Bill of Law 2338 | Introduced May 2023 |
| Canada | Proposed Artificial Intelligence and Data Act | Introduced June 2022; amended November 2023 |
| China | Various regulations, e.g., Generative AI Measures and Ethical Review Measures | Effective August 2023 and December 2023 |
| EU | Artificial Intelligence Act | Entered into force August 2024 |
| US Federal | Executive Order on AI | Signed October 2023 |
| US States | 17 states have enacted bills | Proposed mostly in 2024 |
Key Takeaways
- CISOs’ top concerns with the business’s use of GenAI are data leakage and unvalidated or hallucinated data being used in decision-making.
- Fast-paced GenAI adoption in the extended enterprise raises the potential for unmanaged risks. Besides formally managed GenAI projects, GenAI tools are easily used by individual employees and third parties.
- Foundational elements of GenAI risk management include training employees on acceptable use of GenAI, assessing third-party use of AI, and paying close attention to emerging AI regulations.
Up Next: GenAI Governance
With GenAI rapidly being adopted across the business, ESAF CISOs see the potential for GenAI uptake to easily outpace a company’s ability to establish adequate risk management. Next week, we’ll look at how one company has established an overarching governance strategy to help rein in the chaos.
Read more from the RSAC ESAF community of Fortune 1000 CISOs in the CISO Perspectives series.
________________________________________________________________________________________
1ESAF is an international community. It consists of CISOs from Fortune 1000 companies and equivalent-sized organizations.
2Survey of 100 Fortune 1000 CISOs conducted by RSA Conference for an internal research study in Q2 2024.