Securing AI: There Is No Try, Only Do!


Posted on in Presentations

LLMs, like other ML algorithms, suffer from foundational security vulnerabilities. Their weakness against adversarial perturbations (prompt injection), lack of separation between data and control plane, and memorization of training data will present many challenges to enterprises that want to adopt them. This session will delve deep into what makes LLMs insecure and what will take to secure them.

Participants
Saurabh Shintre

Speaker

CEO, LangSafe


Share With Your Community