Having served on the RSAC program committee overseeing the machine learning/artificial intelligence (ML/AI) track multiple times has given me a unique perspective on submission trends. Overall, the RSAC 2019 abstracts were considerably easier to review. First, a significant number of “fluff” pieces alluding to Skynet (the machines rising up in the Terminator movies) were obvious drops. In addition, a large number of submissions implied that ML/AI would do everything under the sun, and those could be eliminated as well. This year there was a much higher percentage of interesting, viable submissions, which made session selection much harder. In fact, in place of the “fluff” pieces submitted last year, there was a noticeable increase in academic/research submissions covering theoretical and practical uses of ML/AI. Keeping with the diverse audience at RSAC, we focused on the more practical submissions.

Selecting the right speakers and topics is a delicate balance. We want to reach information security and related professionals of various backgrounds, roles and industries. For a relatively new topic like machine learning/artificial intelligence, the sessions need to offer something attendees can take away. What does that mean for RSAC this year? The sessions selected cover key topics, including:

  • Security of ML/AI models. Working in healthcare, I see huge potential in leveraging ML/AI to provide better patient care. For example, studies have shown that models can augment radiologists in detecting cancerous nodules in lung images. Adopting a new technology can introduce security risks, so security professionals need to understand the ramifications.
  • Sorting through the fluff. Judging by marketing claims, it seems ML/AI is now part of products ranging from identity management to toothpaste. While ML/AI offers real advantages, buyers need practical advice on how to cut through the hype.
  • Reproducibility and repeatability. When depending on ML/AI models, reproducibility of an outcome is expected; however, it is not guaranteed. Understanding reproducibility and repeatability throughout a model’s lifecycle is one step toward a successful deployment.
  • The role of ML/AI for security teams. No ML/AI track would be complete without the requisite discussion of how security teams can leverage ML/AI. While not a silver bullet, there are use cases where ML/AI can enhance a team’s efficiency and effectiveness.
  • Training data. ML/AI models are built on relevant training data. In healthcare, this data often contains Protected Health Information that must be kept private. However, privacy attacks exist that could reveal the underlying data used to build a model. Understanding how these attacks work helps ensure training data is appropriately protected.

Now that the RSAC 2020 agenda is live, you can take a look at the ML/AI sessions available—see you in February.

Contributors: