Adversarial attacks such as bias manipulation, jailbreaks, prompt injection, and PII leakage exploit vulnerabilities in large language models (LLMs). This session introduces two frameworks: one for automatic jailbreaking and another for detecting and preventing attacks. Learn actionable strategies to secure AI models and protect sensitive data from evolving threats.