Metric Madness: Measuring Success

Posted by RSAC Contributor

By Tyler Reguly

Metrics for Managing and Understanding Patch Fatigue was ultimately a conversation about how businesses can measure the success of their vulnerability and patch management strategies.

This year, at RSAC 2017, I hosted a Peer-2-Peer session on Metrics for Managing and Understanding Patch Fatigue. I saw this as an extension of my RSAC 2015 P2P on vulnerability and risk scoring. In 2015, I had a clear vision for the conversation and watched as it moved in a completely different direction. This led to a very interesting conversation but also helped me set my expectations for this year’s session. 

Once again, I made assumptions about the direction of the conversation and, to facilitate discussion, created a single-page cheat sheet that I could use to drive the conversation. I ended up making little use of the document, as we ran out of time while we were still mid-conversation. After running two of these sessions, I feel that 45 minutes just isn't long enough for these conversations, and I wish that RSAC would consider extending their duration in the future to be similar to the two-hour labs. Ultimately, they could be set up as one-hour conversations, with the room available for up to an extra hour to facilitate extended discussion.

While I had wanted to focus on Patch Fatigue, a topic of great interest to me, the conversation stayed focused on the metrics side of the discussion. Nearly half of the attendees were actively involved in the conversation, while many others were nodding in agreement or jotting down notes. It was great to see a full room with so much active participation.

Instead of a conversation about how Patch Fatigue impacts teams and how management can help alleviate it, we looked at how you measure your success. How do you know that your patch management or vulnerability management program is successful? Do you benchmark internally, within your vertical, or against similarly sized companies? Are there industry standards that require you to benchmark against specific data sets or to meet specific conditions?

While a number of attendees shared what they are doing (whether mandated by regulations or chosen after weighing the options) and others picked up new approaches to try, I appreciated that the conversation skewed away from my area of expertise. I'm not directly responsible for running a vulnerability or patch management program; instead, I work on the tooling side of the vulnerability management space. So, for me, seeing how people are using the tools and what works for them made this an interesting conversation.

At the end of the day, the realization was quite simple in my mind. Success measurement and benchmarking still have a long way to go before the programs are mature enough for enterprises to rely on them for validation. Going forward, for both the attendees and, ultimately, all users of VM and PM solutions, the real goal is to define the criteria that work for you: share what works and what doesn't, and communicate that back to your vendors. Once we're comfortable with vulnerability and patch management, we can start to reduce Patch Fatigue.
