Peers Discuss Vulnerability/Risk Scoring and What Ratings Really Mean

Posted on by RSAC Contributor

Security professionals break into small groups to discuss specific topics of interest during the RSA Conference Peer-2-Peer sessions. Tyler Reguly, manager of security research and development at Tripwire, facilitated a P2P discussion about scoring vulnerabilities and risk. Read on for Tyler's thoughts about the discussion.

This year, at RSAC 2015, I was fortunate enough to host a Peer-2-Peer session on vulnerability and risk scoring. I wasn't sure what to expect going into the session; it was my first P2P experience, and I had an idea in my head of where the conversation would go. That said, I didn't want to dictate the experience for everyone else; rather, I wanted to facilitate an interesting and informative conversation.

When I put the session together, I had envisioned a room full of geeks like myself. I had expected to talk about hard metrics like CVSS and other scoring systems, along with the impact that media hype has on public perception versus true risk. Instead, the conversation steered more toward vulnerability management programs and triaging their results. Given my day job, this was a conversation I enjoyed, but I tried very hard not to direct it. I wanted any discussion we did have to occur as organically as possible. This worked really well because a large number of attendees were well versed in vulnerability management, and it felt like most of them had experienced the day-to-day issues that come with running an enterprise vulnerability management program.

Here are two of the more intriguing questions, which were discussed for a good portion of the allocated time but proved difficult to truly answer.

  • How do you know how long you can wait before patching?

The question here is complex when you consider the rules and requirements laid out by internal best practices and enterprise change control, along with external requirements from various certifying bodies, like PCI. Just how long can you allocate to testing patches before deploying them? Does the criticality of the patch change the cycle? It's an interesting question and one that has occupied much of my time since RSA. I hope to one day have an interesting answer.

  • Why doesn’t scoring account for patch complexity?

Some patches are simple to install: apply the patch and reboot. Sometimes you don't even need to reboot. What happens if the resolution is more complicated? What happens if you need to patch the OS, a product running on the OS, and then apply a specific configuration? This takes time and could delay the application of other updates. How do you remedy this? How do you ensure that your vulnerability management program is presenting scores that show you quick wins? Again, an interesting question that I hope to one day answer.

In the end, I'm not sure that we came up with much of a solution for either of these questions. One approach we did discuss, which several people were already using to address smaller issues, was a heat map for prioritization. Rather than worrying about the actual score, you group the scores on a grid, with the x-axis and y-axis representing two aspects of the score. You then prioritize updates by working across the heat map. Assuming you select the correct values for your x- and y-axes, this provides a much more accurate view of the vulnerabilities on your network, and the proper patch order, than a simple ranking of numbers will.
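The heat-map idea above can be sketched in a few lines of code. This is a minimal illustration, not anything presented in the session: the axis choices (bucketed CVSS base score on one axis, asset criticality on the other), the bucket boundaries, and the CVE/asset names are all hypothetical examples.

```python
from collections import defaultdict

def severity_bucket(cvss):
    """Bucket a CVSS base score (0.0-10.0) into 0=Low..3=Critical."""
    if cvss >= 9.0:
        return 3  # Critical
    if cvss >= 7.0:
        return 2  # High
    if cvss >= 4.0:
        return 1  # Medium
    return 0      # Low

def build_heat_map(findings):
    """Group findings into grid cells keyed by (criticality, severity)."""
    grid = defaultdict(list)
    for name, cvss, asset_criticality in findings:
        grid[(asset_criticality, severity_bucket(cvss))].append(name)
    return grid

def patch_order(grid):
    """Work across the heat map from the hottest corner outward:
    visit cells in descending order of combined 'heat'."""
    ordered = []
    for cell in sorted(grid, key=lambda c: c[0] + c[1], reverse=True):
        ordered.extend(grid[cell])
    return ordered

findings = [
    ("CVE-A on web server", 9.8, 3),  # critical vuln, critical asset
    ("CVE-B on test box",   9.8, 0),  # critical vuln, low-value asset
    ("CVE-C on database",   5.3, 3),  # medium vuln, critical asset
]
print(patch_order(build_heat_map(findings)))
# → ['CVE-A on web server', 'CVE-C on database', 'CVE-B on test box']
```

Note how the grid changes the outcome: a medium-severity vulnerability on a critical database outranks a critical-severity vulnerability on a low-value test box, which a raw CVSS sort would get backwards.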

I really enjoyed the conversation and hope to be able to return to RSA to facilitate a follow-up session next year. If you weren't able to stop by and join a P2P session this year, I highly recommend that you check them out in the future. This was probably the most fulfilling conversation I had during the week of RSAC 2015.
