Responsible AI: Adversarial Attacks on LLMs


Posted in Presentations

Adversarial attacks on LLMs are advancing at a rapid pace, and understanding the challenges they pose is critical to future implementations. Join this session, focused on the pivotal research paper "Universal and Transferable Adversarial Attacks on Aligned Language Models," for a recap of the latest developments in this ongoing research project and its significant impact on the cybersecurity industry and responsible AI.

Participants
Saurabh Shintre

Moderator

CEO, LangSafe

Matt Fredrikson

Speaker

Associate Professor, Carnegie Mellon University
