Securing AI: There Is No Try, Only Do!


Posted on in Presentations

LLMs, like other ML algorithms, suffer from foundational security vulnerabilities. Their weakness against adversarial perturbations (prompt injection), lack of separation between data and control plane, and memorization of training data will present many challenges to enterprises that want to adopt them. This session will delve deep into what makes LLMs insecure and what will take to secure them.

Access This and Other RSAC Conference Presentations with Your Free RSAC Membership

Your RSAC Membership also includes AI-powered summaries, mind maps, and slides for Conference presentations, Group Discussions with experts, and more.

Watch Now >>
Participants
Saurabh Shintre

Speaker

CEO, Realm Labs


Share With Your Community