Coordinated Disclosure for ML: What's Different and What's the Same


Posted in Presentations

Red teaming is a validation step, not an accountability mechanism. A coordinated disclosure process for machine learning systems is needed to ensure safe, secure, and effective ML models. This session will cover the history of disclosures for machine learning systems, how they work, and what makes them different from the coordinated vulnerability disclosure process used for traditional software.


Participants
Sven Cattell

Speaker

Founder, AI Village
