Unpickling PyTorch: Keeping Malicious AI Out of the Enterprise


Posted in Presentations

PyTorch is a go-to framework for organizations building their own LLMs. But how can anyone be sure they aren't accidentally running malicious code? Existing tools that examine the unpickling process produce false positives, raising more questions than they answer. This session will demonstrate a new method for developers to identify and extract malicious code before it runs, keeping AI models safe.
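The risk the abstract alludes to comes from the fact that PyTorch's default checkpoint format is a pickle archive, and unpickling can execute arbitrary code via `__reduce__`. As a minimal illustration (not the speakers' tool), the sketch below uses Python's standard `pickletools.genops` to walk the opcode stream statically, without ever unpickling, and flags `GLOBAL`/`STACK_GLOBAL` references into modules a model checkpoint should never import. The denylist is an assumption for the example; a production scanner would be far more thorough.

```python
import pickle
import pickletools

# Illustrative denylist: modules a legitimate model checkpoint
# should never need to import. (Assumed for this sketch.)
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "builtins"}

def scan_pickle(data: bytes) -> list[str]:
    """Statically walk the pickle opcode stream -- nothing is executed --
    and report global references into suspicious modules."""
    findings = []
    strings = []  # recent string constants, consumed by STACK_GLOBAL
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            strings.append(arg)
        elif opcode.name == "GLOBAL":
            # Protocol 0-3: arg is "module name" separated by a space.
            module = arg.split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(arg.replace(" ", "."))
        elif opcode.name == "STACK_GLOBAL" and len(strings) >= 2:
            # Protocol 4+: module and name were pushed as the two
            # preceding string constants.
            module, name = strings[-2], strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings

# A malicious payload: unpickling this would run a shell command.
class Payload:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

evil = pickle.dumps(Payload())
print(scan_pickle(evil))  # e.g. ['posix.system'] on POSIX systems
```

Because the scan only reads opcodes, it is safe to run on untrusted files; the trade-off, as the abstract notes, is that opcode-level heuristics like this one are exactly where false positives creep in, since many legitimate checkpoints also reference unusual globals.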

Participants
Trevor Madge (Speaker), Back-End Data Engineer, Sonatype
Andrew Stein (Speaker), Principal Software Engineer, Sonatype
