CISO Perspectives: Practical Tips for Securing Generative AI Systems


Posted by Laura Robinson

How do Fortune 1000 companies build security into generative AI (GenAI) systems?

This week, we’ll learn from the experience of a company that got out of the gate early and has already rolled out successful GenAI initiatives. In recent discussions of the RSAC Executive Security Action Forum (ESAF)¹ community of Fortune 1000 CISOs, a leading CISO shared how their team built security into a new internal customer service bot that serves their global workforce of more than 45,000 employees.

This blog series is based on firsthand experiences shared at invitation-only ESAF sessions by Fortune 1000 CISOs. We’ve summarized those discussions for the benefit of the wider security community. Details have been anonymized to preserve confidentiality.

Customer Service Bot

The CISO started by providing an overview of the GenAI system. Their internal customer service bot was developed on a commercial GenAI chatbot platform and trained on company-specific IT workflows and processes. It assists employees with tasks such as requesting approval for access to resources, and it helps resolve technical issues with software applications and computer hardware. The system runs in a managed cloud and integrates with more than 20 enterprise resources and 100 external knowledge resources.

No personal, regulated, or customer data is allowed in the system. However, because the system can be used for sensitive tasks such as resetting forgotten passwords, strong security is critical.

Generative AI Assessment

The CISO emphasized that, in addition to verifying the GenAI system is fit for use, it’s necessary to assess its trustworthiness. The project team evaluated the GenAI platform against the company’s AI ethical framework, a set of criteria for assessing AI. Their questions included:

  • Are the answers that the system provides understandable, transparent, and auditable?
  • To what degree are the answers accurate? Is the level of hallucination acceptable for the task? (See the measurement sketch after this list.)
  • Is it resilient? Does it perform consistently in unpredictable scenarios?
  • Does it behave responsibly, not insulting anyone or showing bias?
  • Is the training dataset secure against data poisoning attacks?
  • Considering how critical the decisions are and how much the system can be trusted, would human-in-the-loop or human-on-the-loop controls be appropriate for certain workflows?
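The accuracy question lends itself to direct measurement. As one illustration, here is a minimal sketch that assumes the team maintains a curated "golden set" of IT-workflow questions; ask_bot() and the sample questions are hypothetical stand-ins, not the chatbot platform's actual API.

```python
# Hypothetical evaluation harness: score the chatbot against a curated
# "golden set" of IT-workflow questions. ask_bot() is a stand-in for a
# real chatbot API call; the questions and required facts are illustrative.

GOLDEN_SET = [
    {"question": "How do I request access to the HR portal?",
     "must_mention": ["access request", "manager approval"]},
    {"question": "My laptop won't boot. What do I do?",
     "must_mention": ["help desk ticket"]},
]

def ask_bot(question: str) -> str:
    # Stub that always returns one canned answer, so the second golden
    # case above illustrates a miss. A real harness calls the platform.
    return "Submit an access request; it routes to manager approval."

def golden_set_accuracy(golden: list[dict]) -> float:
    """Fraction of questions whose answer contains every required fact."""
    hits = 0
    for case in golden:
        answer = ask_bot(case["question"]).lower()
        if all(fact in answer for fact in case["must_mention"]):
            hits += 1
    return hits / len(golden)

print(f"Golden-set accuracy: {golden_set_accuracy(GOLDEN_SET):.0%}")
```

Re-running the same golden set after every model or platform update gives a consistent baseline, which also helps with the regulatory reporting discussed below.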

Using an ethical AI framework provides:

  • A structure for responding when board members and customers ask CISOs how they know the company’s use of GenAI is trustworthy.
  • A methodology for assessing GenAI against increasingly stringent government regulations.

Securing Generative AI Data Flows and Access

Aside from the ethical issues, the CISO said securing GenAI systems is similar to securing SaaS systems. However, security teams should pay particular attention to the following areas:

Access Control: To reduce the risk of prompt injection attacks, use data classification and access controls to ensure that the system cannot give out information the user is not authorized to see (see the sketch after this list).
API Security: GenAI systems involve many connection paths, any of which can be attacked. Secure and test all API connections.
Authentication: Attackers are directly targeting GenAI systems and bypassing MFA. Ensure integrations are configured correctly, and monitor for out-of-bounds behavior, e.g., API calls coming from unexpected places.
Data Protection: Design the system to minimize data persistence.
Logging: Log events from every integration point and correlate them in your SIEM. Look especially for out-of-band requests, out-of-bounds requests, and request reuse. If necessary, work with your vendors to capture the logs you need, and make sure you can see all data that is sent to the vendor.
Testing: The security team found a critical flaw in the chatbot platform that made it vulnerable to clickjacking attacks. They alerted the vendor and implemented compensating controls until the vendor fixed the problem. The CISO emphasized, “This is why you do the work. The devil is in the details.”
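To make the access control guidance concrete, here is a minimal sketch of entitlement enforcement at the retrieval layer, so documents a user cannot read never enter the model's context in the first place. It assumes a retrieval-augmented design; every name (user_clearances, build_context, the toy knowledge base) is hypothetical rather than the platform's API.

```python
# Hypothetical sketch of document-level access control at the retrieval
# layer. Names and the toy knowledge base are illustrative; a real system
# would query enterprise IAM and search services instead.

from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    classification: str  # e.g. "public", "internal", "restricted"
    text: str

KNOWLEDGE_BASE = [
    Document("kb-1", "internal", "How to request access to software"),
    Document("kb-2", "restricted", "Privileged account password reset runbook"),
]

def user_clearances(user_id: str) -> set[str]:
    # Stand-in for an IAM / entitlements lookup.
    return {"public", "internal"}

def retrieve_candidates(query: str) -> list[Document]:
    # Stand-in for vector or keyword search over the knowledge base.
    return [d for d in KNOWLEDGE_BASE if query.lower() in d.text.lower()]

def build_context(user_id: str, query: str) -> list[Document]:
    """Filter results BEFORE prompt assembly. Even if a prompt-injection
    attack persuades the model to reveal everything in its context, the
    context only ever holds documents the caller is entitled to read."""
    allowed = user_clearances(user_id)
    return [d for d in retrieve_candidates(query) if d.classification in allowed]

# The restricted runbook matches the query but is filtered out for this user.
print(build_context("alice", "password reset"))
```

The design choice worth noting: filtering happens before prompt assembly, so even a successful injection can only expose content the caller was already entitled to see.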


Ongoing Model Tuning

The models require constant tuning to ensure they are effective and trustworthy. Otherwise, they tend to become less accurate over time, a phenomenon known as model drift.

To enable tuning, the CISO recommends regularly collecting data on how the chatbot is being used and how accurate its answers are. This includes collecting and analyzing users’ prompts and their feedback ratings on the answers, then sharing that feedback with the chatbot platform vendor so the vendor can assist with model and algorithm enhancements.
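As one illustration (not the CISO's actual tooling), the sketch below records the signals described above (each prompt, answer, and feedback rating) so accuracy can be tracked over time and shared with the vendor. The JSON-lines schema and file-based storage are assumptions made for brevity.

```python
# Hypothetical sketch of usage and feedback collection for model tuning.
# The JSON-lines schema and file-based storage are assumptions; production
# systems would write to a telemetry pipeline instead.

import json
import time

LOG_PATH = "chatbot_feedback.jsonl"

def record_interaction(user_id: str, prompt: str, answer: str, rating: int) -> None:
    """Append one interaction; rating is 1 (helpful) or 0 (not helpful)."""
    event = {
        "ts": time.time(),
        "user": user_id,
        "prompt": prompt,
        "answer": answer,
        "rating": rating,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def rolling_accuracy(events: list[dict]) -> float:
    """Share of positive ratings; a sustained drop is an early drift signal."""
    return sum(e["rating"] for e in events) / max(len(events), 1)

record_interaction("u123", "How do I reset my VPN password?",
                   "Use the self-service portal.", 1)
```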

Project Outcomes and Future Directions

Full ROI metrics for the GenAI-powered chatbot will require more data; however, a positive sign is a 40% reduction in help desk calls in the first four months after rollout. Accuracy for the most common tasks is above 90%. The company plans to expand the system from 20 integrations to a total of 70.

According to the CISO, “We have to continue to stay on the forward-leaning side of this. Otherwise, the demand will outstrip our ability to do this in a safe manner. We need to manage both the technical baseline and the expectations of responsible use.”

Up Next: Security Use Cases

So far in our series, we’ve focused on protecting the business’s use of GenAI. In next week’s post, we’ll look at how security teams intend to use it. And for our last installment, we’ll discuss the adversaries’ use of GenAI. As ESAF CISOs see it, this is a race that will deeply change enterprise information security.

If you missed them, check out our earlier posts on the Risks of Rapid GenAI Adoption and GenAI Governance.

Read more from the RSAC ESAF community of Fortune 1000 CISOs in the CISO Perspectives series.

__________________________________________

¹ ESAF is an international community. It consists of CISOs from Fortune 1000 companies and equivalent-sized organizations.


Contributors
Laura Robinson

ESAF Program Director, RSA Conference



