The Case for AI Controls: Why AI Needs a New Governance Mindset


Posted by Marina Bregkou

The rapid integration of artificial intelligence across business operations is nothing short of transformative. Security teams are being asked to protect systems they did not design, using models they did not build, serving use cases that are evolving in real time.

They need a coherent, shared way to think about AI system security — one that starts with security principles and ends with practical, operational controls.

Unlike conventional applications, AI systems are often composed of multiple interdependent services: model APIs, orchestration layers, plugins, retrieval systems, and front-end apps. Further, these are sometimes owned by different teams and often powered by third-party components.

In this complexity, responsibility becomes unclear. Who owns red teaming the model? Who monitors output risk? Who is accountable for plugin security or fine-tuning data provenance?

Often, the answer is vague or missing. And when responsibility is unclear, so is risk ownership. Trust becomes a central issue — not just in what the models can do, but in how we govern them.

To meet this challenge, we need a shared, structured way to think about AI security — one that starts with foundational security principles and leads to operational controls that can be applied across the full AI stack.

From Principles to Practice: What Security Teams Can Do

We have spent years developing security frameworks for the cloud, bringing much-needed structure, accountability, and repeatability to how it is managed. Now it is time to apply that same discipline to the AI stack. Here is how that could look:

1. Map the AI Stack Like an Application Stack

Just like cloud-native applications, LLM services have architectural layers: training infrastructure, APIs, plugins, orchestrators, and user-facing apps. Security teams should map these layers and identify which threats are most relevant at each level. For instance:

  • Model theft might be most relevant at the inference API.
  • Data poisoning starts upstream in the training data pipeline.
  • Denial-of-Service (DoS) threats spike at the app and orchestrator layers.

Securing the model in isolation is not enough; we must secure the entire GenAI stack.
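As a rough illustration, this layer-to-threat view can be captured in a simple machine-readable map that architecture reviews and red-team planning can both draw from. The layer names and threat labels below are hypothetical placeholders, not a prescribed taxonomy:

```python
# Hypothetical layer-to-threat map for an LLM service; layer and threat names are
# illustrative placeholders, not a prescribed taxonomy.
AI_STACK_THREAT_MAP = {
    "training_pipeline": ["data_poisoning", "supply_chain_compromise"],
    "inference_api": ["model_theft", "sensitive_data_disclosure"],
    "orchestrator": ["prompt_injection", "denial_of_service"],
    "plugins": ["insecure_plugin", "excessive_agency"],
    "user_facing_app": ["prompt_injection", "denial_of_service"],
}


def threats_for(layer: str) -> list[str]:
    """Return the threats flagged for a given stack layer (empty if unmapped)."""
    return AI_STACK_THREAT_MAP.get(layer, [])


if __name__ == "__main__":
    for layer, threats in AI_STACK_THREAT_MAP.items():
        print(f"{layer}: {', '.join(threats)}")
```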

2. Define Shared Responsibilities

AI systems demand a rethinking of the shared responsibility model.

Models may be developed by one team, fine-tuned by another, deployed on shared infrastructure, and accessed via applications built by yet another group. The training data might come from open sources or third parties. Plugins and orchestrators can introduce functionality and risk that is not fully understood or controlled by the original model provider. For example:

  • The model provider should validate training data integrity and manage fine-tuning security.
  • The application provider should handle prompt filtering, input validation, and downstream abuse prevention.
  • The infrastructure team should own deployment hardening and resource isolation.
  • The orchestrated service provider should apply runtime guardrails.

All these distinctions need to be explicit.
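One lightweight way to make them explicit is a shared-responsibility matrix that lives next to the architecture documentation and is revisited whenever the system changes. A minimal sketch, assuming hypothetical role and control names:

```python
# Hypothetical shared-responsibility matrix: control area -> accountable party.
# Role and control names are illustrative; adapt them to how your organization actually builds.
RESPONSIBILITY_MATRIX = {
    "training_data_integrity": "model_provider",
    "fine_tuning_security": "model_provider",
    "prompt_filtering": "application_provider",
    "input_validation": "application_provider",
    "downstream_abuse_prevention": "application_provider",
    "deployment_hardening": "infrastructure_team",
    "resource_isolation": "infrastructure_team",
    "runtime_guardrails": "orchestrated_service_provider",
}


def find_unowned(required_controls: list[str]) -> list[str]:
    """Return the controls that no party has explicitly accepted ownership of."""
    return [control for control in required_controls if control not in RESPONSIBILITY_MATRIX]
```

A check like find_unowned can run during design review to surface control areas nobody has claimed before they become gaps in production.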

3. Align Controls to Threats, Not Just Checkboxes

Instead of focusing on compliance for its own sake, map controls to threat categories like:

  • Prompt injection
  • Model failure/malfunction
  • Sensitive data disclosure
  • Insecure plugins
  • Supply chain compromise

This keeps the control framework rooted in real-world adversarial scenarios. It also makes it easier to prioritize mitigations based on threat relevance.
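A minimal sketch of what such a threat-to-control mapping might look like in practice follows; the control names are illustrative examples rather than a compliance catalogue:

```python
# Hypothetical mapping from threat categories to candidate mitigations.
# Control names are examples for illustration, not a compliance catalogue.
THREAT_CONTROLS = {
    "prompt_injection": ["input sanitization", "output filtering", "least-privilege tool access"],
    "model_failure": ["fallback responses", "human review for high-impact actions"],
    "sensitive_data_disclosure": ["output redaction", "retrieval access controls"],
    "insecure_plugins": ["plugin allow-listing", "sandboxed execution"],
    "supply_chain_compromise": ["artifact signing", "dependency provenance checks"],
}


def coverage_gaps(implemented: set[str]) -> dict[str, list[str]]:
    """For each threat, list the candidate controls not yet implemented."""
    return {
        threat: [c for c in controls if c not in implemented]
        for threat, controls in THREAT_CONTROLS.items()
    }
```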

Instead of Chasing Threats, Build for Resilience

Perhaps we cannot enumerate every new AI threat before it causes damage. The attack surface is too novel and the techniques too creative. But we can build AI systems that are resilient — systems that assume failure modes will happen and are designed to contain, detect, and recover from them.

Some best practices that could help:

  • Implement control ownership across lifecycle phases, not just per team.
  • Red team the AI stack end-to-end — including plugins, orchestration layers, and user-facing prompts.
  • Establish AI-specific observability metrics such as prompt logs, output filtering effectiveness, and model drift indicators (a minimal sketch follows this list).
  • Build in human supervision, especially for the moments when the model fails silently or gets tricked.
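As one possible starting point for the observability item above, the sketch below computes two illustrative signals from a hypothetical inference event log: how often the output filter intervenes, and a crude response-length drift indicator. Both the event schema and the metrics are assumptions for illustration; real drift detection needs proper statistical tests:

```python
# Minimal observability sketch over a hypothetical inference event log.
# The schema and metrics are illustrative assumptions, not a standard.
from dataclasses import dataclass


@dataclass
class InferenceEvent:
    prompt: str
    response: str
    filter_blocked: bool      # True if the output filter blocked or rewrote the response
    response_length: int


def _mean(values: list[int]) -> float:
    return sum(values) / max(len(values), 1)


def filter_block_rate(events: list[InferenceEvent]) -> float:
    """Share of responses where the output filter intervened."""
    return sum(e.filter_blocked for e in events) / max(len(events), 1)


def length_drift(baseline: list[InferenceEvent], recent: list[InferenceEvent]) -> float:
    """Crude drift signal: relative change in mean response length vs. a baseline window."""
    base = _mean([e.response_length for e in baseline])
    curr = _mean([e.response_length for e in recent])
    return (curr - base) / max(base, 1e-9)
```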

Moving From Control Confusion to Control Confidence

We need to create structure. Effective AI governance requires clear security control expectations, mapped to real-world threats, with ownership divided according to how organizations actually build and deploy these systems today.

And with the right structure, we can build trust into AI — not after the fact, but by design.

Contributors
Marina Bregkou

Principal Research Analyst, Cloud Security Alliance


