Over the last few months, I’ve written frequently about cybersecurity frameworks, such as the new Framework for Improving Critical Infrastructure Cybersecurity. As a way to generate discussion, engage a non-technical audience, and serve as a starting point for tackling an organization’s cybersecurity risks, it is a useful document. But as its authors readily admit, it is not intended to be a normative document. No one should use it, by itself, as a checklist to determine whether an organization has adequate security. Instead, organizations should use it to identify detailed controls that are appropriate for them and subject to rapid change as threats evolve and as the nature of the organization changes. Over the years, it has become clear that regulators and standards organizations cannot move fast enough to deal with the ever-changing threat landscape. Moreover, not all controls are equal, and not all organizations are equal. Some organizations need detailed policies and procedures for everything they do, while others have a culture that facilitates rapid response to threats without a lot of documentation. While the latter organizations still need a defined governance structure and mechanisms for incentivizing positive behavior and punishing negative behavior, their need for detailed policies and procedures will be much less.
It is with that understanding that the Common Security Framework (CSF) developed by the Health Information Trust Alliance (HITRUST) raises so much concern. As a framework composed of many other frameworks, it provides valuable insights that healthcare organizations can use to select the appropriate controls. However, its authors have more ambitious goals. It is now being used as a normative document with mechanisms for third-party assessments and certification. The certification is also being leveraged as a way to demonstrate compliance with Texas regulations relating to the protection of health information. The inevitable outcome of this process will be healthcare organizations seeking to use the certification as a safe harbor in the event of a cybersecurity breach rather than focusing on ways of preventing those breaches. The reality is that any cybersecurity framework, when used for compliance purposes, inevitably forces organizations into a checkbox mentality that discourages innovation, causes wasteful spending, and increases cybersecurity risk. In contrast to building codes, safety rules, and hygiene guidance, security deals with sentient actors looking to do harm. Those actors alter their behavior in response to mandatory controls; storms and diseases do not. So introducing a certification standard that actually has a HITRUST Alternate Controls Committee to review and approve compensating controls makes little sense. Instead, assessors should press the organization to provide an appropriate risk-based justification for the deviation. Even regulators in the financial services and electricity sectors have that flexibility.
However, the biggest challenge with the certification program is that it represents both a very comprehensive framework and a huge scope for most healthcare organizations, which handle protected health information (PHI) almost everywhere. By contrast, the electricity industry’s North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) Standards apply to a fairly small percentage of devices at a typical utility. The Payment Card Industry Data Security Standard (PCI DSS) applies to an even smaller number of devices that process credit card information. Regulations in the financial services industry have broader application, but they are higher level and therefore offer a great deal of flexibility. And of course, PCI has faced a constant barrage of criticism as companies supposedly in compliance are frequently breached and assessors are regularly accused of taking shortcuts in their assessments to keep the price down. NERC CIP assessments are frequently viewed as onerous and nitpicky, with limited benefits for cybersecurity for all but the weakest programs. The International Organization for Standardization’s ISO 27001 standard and associated certification fare a bit better, as they seek to confirm that the appropriate risk management processes are in place and producing the right artifacts, without arrogantly assuming the standard is the authority on all the correct controls. Nor does ISO 27001 presume that a certified program is anything more than a starting point when it comes to good cybersecurity. And of course, the federal government’s certification and accreditation program, the only one more prescriptive and comprehensive than HITRUST’s, consistently fails to demonstrate measurable improvements in cybersecurity and is incredibly costly. Consequently, it is being overhauled to encourage automation of controls verification and reporting.
It’s a step in the right direction, but one fraught with landmines at every turn. More automation means more deficiencies reported and more data to sort through. That will hopefully improve security, but at the expense of significantly increased costs unless there are sufficient mechanisms to correlate data and identify systemic issues amidst all the noise.
So, now we see HITRUST embarking on a potentially mandatory certification program (if other states follow Texas’ lead) that is very prescriptive and affects nearly every part of a healthcare organization’s operations. But that is only part of the problem. When we look closer at the elements of the program, we find a number of other concerns.
Risk Assessment Methodology
The HITRUST Risk Guide is a very elaborate document with many well-thought-out elements. I particularly like the password entropy discussion, even though passwords make a very easy example to use when discussing risk given the wealth of quantitative information available. Nonetheless, like similar discussions, it is largely devoid of context and misses one of the increasingly common attack vectors we face with passwords: the key logger. Password entropy matters little if an attacker can simply capture a user’s keystrokes. But I realize the password entropy discussion was merely a way to talk about evaluating risk more generally, even though almost all other controls lack the kinds of hard data we have for password controls.
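To make the entropy point concrete, here is a minimal sketch of how entropy is conventionally computed for randomly generated passwords. The function name and the symbol counts are my own illustrative choices, not figures from the HITRUST guide:

```python
import math

def password_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy in bits of a password of `length` characters drawn
    uniformly at random from an alphabet of `alphabet_size` symbols."""
    return length * math.log2(alphabet_size)

# An 8-character password over lowercase letters (26 symbols):
print(round(password_entropy_bits(8, 26), 1))   # 37.6 bits
# A 12-character password over ~95 printable ASCII symbols:
print(round(password_entropy_bits(12, 95), 1))  # 78.8 bits
```

Of course, as the keylogger example shows, entropy only measures resistance to guessing attacks; it says nothing about a credential that is captured outright.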
However, of greater concern is the guide’s description of the way risk is calculated. It starts by assigning an “out of context” impact rating to each control, which is a fairly novel and somewhat questionable approach, as most impact ratings are derived from the asset, not the control (for example, see National Institute of Standards and Technology (NIST) Special Publication 800-60). The problems continue with calculating the likelihood that such an impact would occur based on the organization’s maturity level for that particular control. The maturity levels proceed from the existence of policies, to the existence of procedures, to the implementation of those policies and procedures, to the existence of consistent measurements, and finally to being actively managed. Again, this is not the typical way that likelihood is measured. For one, likelihood is based in part on the expertise and motivation of an attacker. Additionally, the password entropy discussion is the perfect example of how to begin evaluating likelihood: the nature of the threat and the length and complexity of the password would make up the bulk of the likelihood calculation. Certainly, demonstrating consistency in application and the ability to measure it matter, but because a technical control like a password system is easy to implement and propagate across all systems in the organization, the existence of policies and procedures doesn’t demonstrate much with respect to likelihood. For most controls, likelihood is influenced much more by the consistency and effectiveness of a control. For example, a firewall that limits ports to 80 and 443 may get a passing score in terms of restricting network traffic, but the reality is that such a restriction on its own provides virtually no value because a significant number of applications use those two ports for nearly all access.
That’s not to say that access controls elsewhere would not stop the attacker, but a control requiring a firewall or some sort of network-based control could be scored at a high level of maturity even though the reduction in likelihood from introducing this control is almost non-existent. Now, HITRUST will likely argue that the assessors will be responsible for evaluating the effectiveness and appropriateness of the firewall rules in this example. But that does not dispel the concerns with the methodology, as it renders both the methodology and the CSF almost useless by putting the real power in the hands of the assessor to reach the right conclusion about the effectiveness of a control. Given cost pressures and an assessor’s desire for more business, this is hardly a desirable situation. While I ultimately come down on the side that experts armed with information produced by a community sharing best practices produce far superior results compared with externally imposed compliance frameworks, the CSF’s reliance on these assessors for this crucial step seems to be a significant indictment of this very elaborate risk analysis process.
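The distinction between the two ways of scoring risk can be sketched in a few lines of hypothetical code. The maturity labels follow the guide’s progression, but the numeric weights, probabilities, and function names are illustrative assumptions of mine, not HITRUST’s actual formula:

```python
# Illustrative likelihood weights inferred from control maturity
# (invented values, not HITRUST's).
MATURITY_LIKELIHOOD = {
    "policy": 1.0,        # policies exist
    "procedure": 0.8,     # procedures also exist
    "implemented": 0.6,   # policies/procedures implemented
    "measured": 0.4,      # consistently measured
    "managed": 0.2,       # actively managed
}

def maturity_based_risk(impact: float, maturity: str) -> float:
    """Risk as the guide frames it: impact scaled by a likelihood
    factor derived from the organization's maturity for the control."""
    return impact * MATURITY_LIKELIHOOD[maturity]

def threat_based_risk(impact: float, attack_success_prob: float) -> float:
    """Risk scaled by an estimate of how likely the specific threat
    is to defeat the control as actually deployed."""
    return impact * attack_success_prob

# The ports-80/443 firewall: fully "managed" maturity, yet attacks
# tunneled over web ports succeed anyway (0.9 is a stand-in estimate).
print(maturity_based_risk(5.0, "managed"))  # scores as low risk
print(threat_based_risk(5.0, 0.9))          # actual exposure is high
```

The point of the sketch is that the two functions can disagree wildly for the same control: a highly mature but poorly targeted control drives the first number down while leaving the second untouched.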
The Framework Fallacy and the Sampling Conundrum
As noted above, frameworks can be very helpful in developing cybersecurity programs and facilitating the selection of controls, but detailed, prescriptive frameworks are horrible mechanisms for operating a cybersecurity program on a daily basis. This is because frameworks encourage equal coverage of all controls, and while the CSF uses weighting mechanisms, the scoring methodology still leans heavily toward completeness. This goes against the reality of security, where a small number of controls contribute the most to overall security. In fact, the Australian Defence Signals Directorate calculated that just four controls (rapid patching of workstations, rapid patching of servers, removal of administrator rights, and use of application whitelisting) prevent 85 percent of attacks. The SANS Institute has picked up on this fact in the development of its Top 20 Critical Security Controls. The model that the CSF and other frameworks promote runs contrary to the concept of disproportionate weighting of controls. The inevitable result is that assessors never spend enough time evaluating the effectiveness of the controls that really matter.
Further exacerbating this challenge of covering all controls is the fact that it is impossible for any assessor to evaluate all controls for every device, business process, and job function. Consequently, it is necessary to devise a sampling methodology that is representative of the entire enterprise. While statistically valid sampling processes are possible, they are rarely implemented correctly. Instead, assessors tend to evaluate the information they are given. Because the CSF is largely a paper-based exercise, third-party assessors are reliant on the healthcare organization to produce evidence that is representative of the entire organization. Moreover, the degree of sampling and the amount of effort expended evaluating controls are the biggest drivers affecting what third-party assessors charge. And because organizations requesting these services rarely provide the level of specificity needed to properly scope these projects, it is up to the third-party assessors to determine the level of sampling and the amount of diligence to be expended. For example, my company bid on assessment work for a university several years ago. The school actually published online all the bids submitted. Those bids ranged from about $30,000 to over $500,000 for the same project. While we did not win that one, it is likely that a bid at the lower end did.
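For a sense of what a statistically valid sample actually requires, here is a sketch using Cochran’s sample-size formula with a finite-population correction. The parameter defaults are conventional statistical choices, not anything the CSF prescribes, and the device count is hypothetical:

```python
import math

def sample_size(population: int, z: float = 1.96,
                margin: float = 0.05, p: float = 0.5) -> int:
    """Cochran's sample-size formula with finite-population correction.
    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumption about the proportion of non-compliant
    items; margin is the acceptable error, e.g. +/-5%."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# Sampling 10,000 in-scope devices at 95% confidence, +/-5% margin:
print(sample_size(10_000))  # 370 devices
```

Even this back-of-the-envelope math shows why rigorous sampling is expensive: hundreds of devices per control area, which is exactly the effort a low bid cannot cover.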
A Paper-based Exercise
Like many similar assessments based on a framework, the CSF third-party assessor is largely reliant on documents provided by the customer to substantiate its controls. While customers generally don’t tell outright lies, the absence of hands-on technical testing (e.g., penetration testing, social engineering tests, vulnerability scanning, indication-of-compromise assessments), device configuration reviews, and direct observation of personnel performing their duties deprives the assessor of the best insights into how and whether controls are being implemented consistently.
An Alternative
While there are a variety of ways to assess an enterprise’s cybersecurity, the goal should be to lower cybersecurity risk. That starts with defining controls that are appropriate for that organization and that are applied strategically based on the sensitivity and criticality of the organization’s assets. This means avoiding the one-size-fits-all approach. The third-party assessor’s role is first to attest that the organization is implementing a sound risk management approach that is documented and substantiated. But beyond that, assessors should not presume that a particular control must be in place in all situations. Additionally, organizations should demonstrate that they have dynamic risk management processes in place to respond to new and changing threats and that they are constantly aware of the consequences of lax security. Rather than rely on frameworks to audit their security, they should instead develop their own criteria that draw on specific controls narrowly targeted to achieve their goals based on industry experiences. Those could include things like the use of two-factor authentication for all administrative access, encryption of all removable media, implementation of anomaly detection technology for networks and applications, use of 24/7 security monitoring, and expanded use of network segmentation. While all these examples could probably be tied to a particular control in the CSF, the sheer number and breadth of controls in the CSF and other frameworks make the process unwieldy, causing the organization to lose focus on what is most important. The reality of modern threats and very limited cybersecurity budgets means that healthcare organizations have to pick and choose their battles. The CSF does not allow them to do that.