Ethical Bias in AI-Based Security Systems: The Big Data Disconnect

Venue: Moscone West

We must accept that we don’t know how AI systems make decisions, and that bias embedded in Big Data is often the root of error in security systems. Machine learning relies on datasets fed into algorithms to drive the system’s learning, so bias in those datasets propagates into the system’s decisions. This talk will trace the paths that sources of bias take through AI-powered systems and examine a proposed framework to measure and eliminate bias in data and algorithms.

Learning Objectives:
1: Become aware that we don’t really know how AI-based decision systems actually make decisions.
2: Learn how data, human and cultural biases cause errors in the decisions of ML/DL/AI systems.
3: Learn about a new framework to measure and quantify bias in the development of AI-based security systems.
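The framework itself is presented in the session; as a minimal illustration of what “measuring and quantifying bias” in a dataset can mean, the sketch below computes per-group selection rates and a disparate-impact ratio over toy security-alert records. The function names and data are assumptions for illustration only, not the speaker’s framework.

```python
from collections import Counter

def selection_rates(records):
    """Rate of positive ('flagged') outcomes per group."""
    totals, positives = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        positives[group] += int(flagged)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records, reference_group):
    """Ratio of each group's selection rate to the reference group's.
    Ratios far from 1.0 suggest the data (or the model trained on it)
    treats that group differently from the reference group."""
    rates = selection_rates(records)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Hypothetical alert data: (group, was_flagged_as_threat)
alerts = [("A", True), ("A", False), ("A", True), ("A", False),
          ("B", True), ("B", True), ("B", True), ("B", False)]

print(disparate_impact(alerts, reference_group="A"))
# Group B is flagged 1.5x as often as group A in this toy data
```

Checks like this are cheap to run on training data before a model ever sees it, which is one reason measurement is a prerequisite for eliminating bias.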

Prerequisites: Familiarity with network defense techniques, the basics of big-data science, and some knowledge of how AI systems work.
