Posted in Presentations
Attendees will learn how to use open source tools to red team AI models. Specifically, they will learn how to evade these models, along with strategies for poisoning them and stealing them. The tools can be adapted to multiple environments, models, and data types. Attendees should leave with hands-on experience, a fresh perspective on AI security, and an understanding of how they can effect positive security change in their organizations.
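To make the evasion idea concrete, here is a minimal, self-contained sketch of a gradient-sign (FGSM-style) evasion attack against a toy linear classifier. This is illustrative only: the weights, input, and epsilon are invented for the example, and the abstract does not specify which tools or attacks the session covers; open source toolkits apply the same principle to real models.

```python
import numpy as np

# Toy "victim model": a linear classifier with fixed, hypothetical weights.
w = np.array([1.5, -2.0])
b = 0.1

def predict(x):
    """Return class 1 if the linear score is positive, else class 0."""
    return 1 if x @ w + b > 0 else 0

# A benign input the model classifies as class 1.
x = np.array([1.0, 0.2])

# FGSM-style evasion: perturb the input against the class-1 direction,
# using the sign of the score's gradient w.r.t. the input (here, just w).
eps = 0.8
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # the small perturbation flips the label
```

The key design point is that the attacker needs only gradient (or even just score) access, not the training data, which is why evasion is often the first technique covered in AI red teaming.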