Posted on
in Presentations
LLMs, like other ML algorithms, suffer from foundational security vulnerabilities. Their weakness against adversarial perturbations (prompt injection), lack of separation between data and control plane, and memorization of training data will present many challenges to enterprises that want to adopt them. This session will delve deep into what makes LLMs insecure and what will take to secure them.
Share With Your Community