Ethical Bias in AI-Based Security Systems: The Big Data Disconnect


Posted in Presentations

We must accept that we don’t know how AI systems make decisions, and that bias embedded in Big Data is often the root of error in security systems. Machine learning relies on the datasets fed into its algorithms to drive the system’s learning. This talk will trace the paths that sources of bias take through AI-powered systems and examine a proposed framework to measure and eliminate bias in data and algorithms.
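One simple way to see how dataset bias can be surfaced before it reaches a model is to compare positive-label rates across groups in the training data. The sketch below is illustrative only and is not the framework presented in this talk; the function name, record fields, and example data are all hypothetical.

```python
# Hypothetical sketch: flagging one simple form of dataset bias.
# All names ("origin", "malicious", etc.) are illustrative assumptions.

def demographic_parity_gap(records, group_key, label_key):
    """Difference between the highest and lowest positive-label rates
    across groups. A gap near 0 suggests labels are balanced across
    groups; a large gap flags a potential source of learned bias."""
    rates = {}
    for rec in records:
        group = rec[group_key]
        positives, total = rates.get(group, (0, 0))
        rates[group] = (positives + rec[label_key], total + 1)
    ratios = [positives / total for positives, total in rates.values()]
    return max(ratios) - min(ratios)

# Example: security alerts labeled "malicious" (1) by traffic origin.
data = [
    {"origin": "internal", "malicious": 1},
    {"origin": "internal", "malicious": 0},
    {"origin": "external", "malicious": 1},
    {"origin": "external", "malicious": 1},
]
# internal rate = 0.5, external rate = 1.0, so the gap is 0.5 --
# external traffic is labeled malicious twice as often in this sample.
print(demographic_parity_gap(data, "origin", "malicious"))
```

A real audit would use many such metrics and examine how labels were collected, but even this one-number check can reveal skew that a trained classifier would otherwise silently absorb.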

Learning Objectives:
1: Become aware that we don’t really know how AI-based decision systems actually make decisions.
2: Learn how data, human, and cultural biases cause errors in the decisions of ML/DL/AI systems.
3: Learn about a new framework to measure and quantify bias in the development of AI-based security systems.

Pre-Requisites:
Network defensive techniques, the basics of big-data science, and some knowledge of how AI systems work.

Participants
Clarence Chio

Participant

Co-Founder, CTO, Unit21

Winn Schwartau

Participant

Chief Visionary Officer, SAC Labs

Analytics Intelligence & Response

security operations, operational technology (OT Security), ethics, cloud security, artificial intelligence & machine learning

