Addressing vulnerabilities is always a matter of tradeoffs.
Patch every vulnerability and you won't be very efficient; chase efficiency and you may not remediate as many vulnerabilities. And remediating faster doesn't, by itself, guarantee risk reduction.
What matters is that you’re taking a risk-based approach in the first place, targeting the vulnerabilities that are most likely to be exploited and understanding how much risk your organization can stomach.
To that end, there are four factors you can start measuring in your organization to gauge the performance of your remediation. From there, you can strike a balance that helps you achieve greater performance.
These factors are based on extensive observational data that we’ve collected for greater insight into what makes vulnerability management work and why some organizations are more successful than others.
Start measuring these in your organization:
Coverage
How complete are your remediation efforts? What percentage of exploited or “high-risk” vulnerabilities were actually remediated? If your organization is risk averse, this will be an important one to keep track of.
When we examine coverage, we find organizations at every point along the scale from 0% to 100%. Those at the extremes have very few assets under management. The peak density sits at 82% coverage. Most organizations are successfully addressing the large majority of their risky vulnerabilities.
Keep in mind that trying to remediate every high-risk vulnerability takes real money and time. In the chart shown, the circle shaded red represents the actual high-risk vulnerabilities based on exploit intelligence.
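As a rough illustration, coverage can be computed from a remediation log as the share of high-risk vulnerabilities that were actually closed. This is a minimal sketch: the record fields (`is_high_risk`, `is_remediated`) are hypothetical and stand in for whatever your scanner or exploit-intelligence feed provides.

```python
# Sketch: computing remediation coverage from a list of vulnerability
# records. Field names are illustrative, not a specific scanner's schema.

def coverage(vulns):
    """Share of high-risk vulnerabilities that were actually remediated."""
    high_risk = [v for v in vulns if v["is_high_risk"]]
    if not high_risk:
        return 1.0  # nothing high-risk to cover
    fixed = sum(1 for v in high_risk if v["is_remediated"])
    return fixed / len(high_risk)

vulns = [
    {"id": "CVE-2021-0001", "is_high_risk": True,  "is_remediated": True},
    {"id": "CVE-2021-0002", "is_high_risk": True,  "is_remediated": False},
    {"id": "CVE-2021-0003", "is_high_risk": False, "is_remediated": True},
    {"id": "CVE-2021-0004", "is_high_risk": True,  "is_remediated": True},
]
print(f"coverage: {coverage(vulns):.0%}")  # 2 of 3 high-risk fixed -> 67%
```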
Efficiency

How precise are your remediation efforts? What percentage of vulnerabilities remediated were actually high-risk?
Efficiency shows whether you’re remediating the vulnerabilities that matter without a lot of extra effort. High efficiency means you’re spending the resources you have wisely. But a narrow focus on efficiency can shrink your coverage and the total number of vulnerabilities you remediate. Your risk could still be high if you focus on efficiency alone.
While a number of organizations have strong coverage, not many cross the 50% line for efficiency, indicating a greater emphasis on coverage than efficiency. This makes sense. The cost of not fixing a vulnerability that is exploited is generally higher than fixing it proactively.
But it’s also low because patching is inherently inefficient according to how we measure it. Many patches fix multiple CVEs, so if the patch you deploy fixes five CVEs and only one of those is exploited, you technically chose “wrong” four out of five times. The efficiency metric reflects that penalty even though you didn’t explicitly choose to prioritize those other four. Keep this in mind as you measure your own efficiency.
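To make the patch penalty above concrete, here is a sketch of CVE-level efficiency. The CVE identifiers are placeholders; the point is that a single patch closing five CVEs, only one of them exploited, scores 20% on this measure even though deploying it was the right call.

```python
# Sketch: efficiency measured at the CVE level, illustrating the patch
# penalty. A single patch may close several CVEs; every non-high-risk
# CVE it closes still counts against measured efficiency.

def efficiency(remediated_cves, high_risk_cves):
    """Share of remediated CVEs that were actually high-risk."""
    if not remediated_cves:
        return 0.0
    hits = len(set(remediated_cves) & set(high_risk_cves))
    return hits / len(remediated_cves)

# One patch closes five CVEs, only one of which is known-exploited.
patch_fixes = ["CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E"]
high_risk = {"CVE-A"}
print(f"efficiency: {efficiency(patch_fixes, high_risk):.0%}")  # -> 20%
```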
Velocity

How long does the remediation process take? Of the high-risk vulnerabilities you’re focused on, how quickly are you remediating them? Are you able to patch quickly enough to make the window of risk so small that it’s difficult for attackers to exploit those vulnerabilities?
Remediation timelines vary significantly across industries, company size and software. It’s not necessarily bad if you don’t close all vulnerabilities quickly. It would be wasteful to close them all as fast as possible, spending resources that would be better used elsewhere. More important is how quickly you fix the vulnerabilities that matter. Because of this, firms with higher remediation velocity tend to have lower efficiency, indicating a tradeoff similar to that of coverage and efficiency.
It turns out, your vulnerability management program can learn a lot from survival analysis, the statistical technique used in medicine and reliability engineering to model time until an event occurs. Here the event is remediation: a survival curve shows what fraction of vulnerabilities remain open at any point after discovery.
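A simplified empirical survival curve might look like the sketch below. The timeline data is invented for illustration, and still-open vulnerabilities are handled naively (treated as surviving past every cutoff) rather than with proper censoring as a full Kaplan-Meier estimate would do.

```python
# Sketch: an empirical survival curve for time-to-remediation.
# Each entry is days from discovery to fix; None means still open.
# Data values are made up for illustration.

def survival_fraction(days_to_fix, cutoff):
    """Fraction of vulnerabilities still open `cutoff` days after discovery."""
    still_open = sum(1 for d in days_to_fix if d is None or d > cutoff)
    return still_open / len(days_to_fix)

days_to_fix = [3, 7, 14, 14, 30, 45, 90, None, None, None]
for cutoff in (7, 30, 90):
    frac = survival_fraction(days_to_fix, cutoff)
    print(f"open after {cutoff:>2} days: {frac:.0%}")
```

A curve like this makes velocity comparable across teams: the faster it drops for high-risk vulnerabilities, the smaller the window attackers have.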
Capacity

How many vulnerabilities can you remediate and how much are high-risk vulnerabilities building up over time? Are you getting ahead of the game, treading water or creating vulnerability debt because you can’t keep up?
It’s tempting to assume those exhibiting higher remediation capacity must have less infrastructure to manage, but the data doesn’t support that conclusion. Average capacity remains remarkably consistent, regardless of characteristics like organization size, number of assets and total vulnerabilities. But while it’s consistent, it’s not set in stone.
Organizations can take steps to improve their capacity. In fact, we found that most organizations have the capacity to remediate roughly 1 in 10 open vulnerabilities in a given timeframe regardless of their size. However, top performers were considerably better, remediating about 1 in 4 open vulnerabilities.
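One way to watch for vulnerability debt is to track, period by period, how many vulnerabilities you close relative to how many are open. This is a rough sketch under one assumed definition (closed divided by open at the start of the period); the monthly counts are invented for illustration.

```python
# Sketch: monthly remediation capacity and vulnerability "debt".
# Capacity here is defined as closed / open-at-start-of-month;
# the backlog carries forward anything left unfixed.

def capacity(opened, closed, backlog=0):
    """Yield (month, capacity_ratio, backlog) for each period."""
    for month, (o, c) in enumerate(zip(opened, closed), start=1):
        open_at_start = backlog + o
        ratio = c / open_at_start if open_at_start else 0.0
        backlog = open_at_start - c
        yield month, ratio, backlog

opened = [100, 120, 110]   # new vulnerabilities found each month
closed = [10, 25, 60]      # vulnerabilities remediated each month
for month, ratio, backlog in capacity(opened, closed):
    print(f"month {month}: capacity {ratio:.0%}, backlog {backlog}")
```

A flat 10% capacity with a growing backlog is treading water; pushing toward the 1-in-4 rate of top performers is what shrinks the debt.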
How to Get Started
The first step is to benchmark where you are with respect to each of these metrics. Measure what you’ve remediated in the past 90 or 180 days if you have the data available. Look at the high-risk vulnerabilities, and start to understand your efficiency, coverage, velocity and capacity. From there, use those metrics to guide improvements.
Every organization is different in terms of risk tolerance. Your size, industry and culture all matter. Budgets matter too, and how you spend yours should reflect that risk tolerance. Those with a limited budget might want to start with efficiency, where they can get the most bang for their risk remediation buck, while those with an extremely low risk tolerance might focus on coverage as a starting metric.
Ultimately, you’ll have to figure out the right balance of the four metrics. Once you understand where you are, you’ll be able to make improvements, decide how best to make use of your budget and pursue more effective vulnerability remediation.