Mitigating Misuse as Access to Machine Learning Models Proliferates


Posted in Presentations

Machine learning models are more ubiquitous and accessible than ever before, thanks to open source checkpoints, cloud-hosted MLaaS APIs, and freemium security products. The security implications of these powerful yet easily accessible tools have been largely overlooked. We will outline a set of guardrails to help ensure AI systems are built responsibly and remain safe, secure, and socially beneficial.

Participants
Ariel Herbert-Voss

Research Scientist, OpenAI

Philip Tully

Manager, Data Science, FireEye

Cloud Security & Virtualization, Human Element, Machine Learning & Artificial Intelligence

cloud security, social engineering, standards, fake news/influence operations, artificial intelligence & machine learning

