For many years, information risk management (IRM) has been an evolving discipline. Never quite as advanced as the financial or operational risk-modeling capabilities within the enterprise, IRM has often been relegated to a more esoteric, simplistic role in organizations. At RSA Conference 2014 in San Francisco, however, the evolving—and improving—maturity of IRM in the enterprise was made crystal clear.
Historically, the fundamental equation for calculating risk has been pretty simple: risk = [likelihood of threat] x [impact of threat]. Generally speaking, this model has been the de facto standard across all types of enterprises. Sometimes, a more mature risk model will (as it should) take into consideration other factors, such as the value of the asset.
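That traditional equation can be sketched in a few lines of Python. This is a minimal illustration; the function name, the 1–5 rating scales, and the asset-value weighting are my own assumptions, not part of any formal standard:

```python
def risk_score(likelihood, impact, asset_value=1.0):
    """Classic risk model: risk = likelihood x impact,
    optionally weighted by asset value (the 'more mature'
    extension mentioned above)."""
    return likelihood * impact * asset_value

# Example: a threat rated 4/5 for likelihood and 3/5 for impact,
# against a high-value asset (weight 2.0)
print(risk_score(4, 3, asset_value=2.0))  # -> 24.0
```

The weakness, as the rest of this article explores, is that everything hinges on how those two or three input numbers are produced in the first place.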
Of course, some people in information security, including some major industry luminaries, don't even believe that IRM is possible. As Jeff Lowder pointed out during his presentation at the RSA Conference, "Assessment Pitfalls for Risk Managers," luminaries such as Donn Parker and Marcus Ranum have stated in the past that information risk is too complex a calculation for large enterprises to make because it's based on far too many qualitative (as opposed to quantitative) measurements. However, many risk management practitioners—myself included—would disagree. When IRM fails to actually reduce risk, the problem is often the effectiveness of the equation itself, not necessarily the risk programs that implement it.
Before discussing the changes that need to be made to risk models, it's important to differentiate the quantitative versus qualitative data that goes into them. Some risk professionals suggest that only quantitative data should be used in calculating information risk; others suggest that getting a sufficient granularity of quantitative data is too difficult and that qualitative approaches are more effective. The truth, as is often the case, lies somewhere in the middle. Both measurable quantitative data and qualitative aspects of data, systems, and business operations are required to achieve the most effective estimates of risk.
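One common way to get that middle ground is to map qualitative ratings onto a numeric scale and blend them with measured data. The sketch below is purely illustrative; the mapping values and the 50/50 weighting are assumptions I've chosen for the example, not a recommended calibration:

```python
# Hypothetical mapping from qualitative analyst ratings to a 1-5 scale
QUALITATIVE_SCALE = {"low": 1, "medium": 3, "high": 5}

def blended_likelihood(measured_score, analyst_rating, weight=0.5):
    """Combine a quantitative measurement (e.g., incident frequency
    already normalized to a 1-5 scale) with a qualitative analyst
    rating. 'weight' is the share given to the quantitative input."""
    qualitative = QUALITATIVE_SCALE[analyst_rating.lower()]
    return weight * measured_score + (1 - weight) * qualitative

# Measured data suggests 2/5, but the analyst judges likelihood "high"
print(blended_likelihood(2, "high"))  # -> 3.5
```

The point is not the specific weights but the principle: neither input alone tells the whole story, so the model should accommodate both.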
And here is where an update to the risk equation can help. Summer Fowler, deputy technical director at Carnegie Mellon's CERT program, delivered a presentation at RSA Conference 2014 called "It's Not All Academic: A Case Study on Implementing a Cyber Risk Management Program." CERT, one of the best-known organizations in the risk and threat analysis field, has developed a new model for overall business risk, called the Resilience Management Model, which incorporates information risk into a more holistic, enterprise-wide approach that includes security, BCP/DR, and IT operations. CERT implemented this model at a major utility company, where it demonstrated a significant reduction in risk over time. The risk score is modified to include both quantitative and qualitative factors, such as imminence and individual business resilience scenarios, and is mitigated by the maturity of existing controls. In effect, the concept of "residual risk," which represents the net risk after controls are implemented, is baked into the model. This provides a more granular solution that, according to Fowler, more accurately reflects the real-world risks to not only assets, but overall business operations.
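The residual-risk idea can be sketched as inherent risk discounted in proportion to control maturity. To be clear, this is my own simplification for illustration, not the actual arithmetic of CERT's Resilience Management Model:

```python
def residual_risk(likelihood, impact, control_maturity):
    """Net risk after controls: inherent risk (likelihood x impact)
    reduced in proportion to control maturity, where 0.0 means no
    effective controls and 1.0 means fully mature controls."""
    inherent = likelihood * impact
    return inherent * (1 - control_maturity)

# Inherent risk of 4 x 5 = 20, mitigated by controls at 60% maturity
print(residual_risk(4, 5, 0.6))  # -> 8.0
```

Even in this toy form, the benefit is visible: two threats with identical inherent scores can yield very different residual scores once the maturity of the controls around them is accounted for.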
So what does all this mean? In the short term, it means that people are challenging the traditional information risk model to make it more effective. We have a lot of security tools and controls available to us, and yet we continue to see major threats become reality. Organizations are beginning to evaluate, and in some cases customize, the risk model to better match their business—and this is a very good thing. Long-term, it means that information risk practitioners will have more effective and proven models for risk management that do a better job of actually reducing risk.