Among critical infrastructure asset owners, a common device for ensuring that their cybersecurity risk posture is appropriate is an audit. We'll leave aside whether the motivation is compliance or simply a desire to be as secure as possible against attacks. In essence, both motivations often lead to the disaster that is the audit, whether it is driven by "best practices" or a particular compliance framework. The problem with typical audits is that they favor objectivity and ease of data collection over usefulness. After all, why hunt for systemic security risks in a business process when you can launch a vulnerability scanner that produces thousands of findings? Of course, the scanning process might be more helpful if it offered more context. Simply knowing that some computer somewhere deep in the enterprise is missing an operating system patch does little more than create a fire drill for the patching team. That said, I would distinguish between routine scanning and scanning performed during an audit. Routine scanning, if done at least weekly, can produce useful trending data that helps pinpoint business process weaknesses and poorly performing information technology staff. Through that mechanism, one can also learn where the greatest risks lie for a particular kind of attack, which helps the security operations folks better address ongoing attacks.
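To make the trending idea concrete, here is a minimal sketch of how weekly scan counts might be turned into a process signal. The team names, counts, and threshold are invented for illustration; any real scanner's export format would differ.

```python
# Hypothetical sketch: turning routine weekly scan output into trend data.
# The team names, finding counts, and threshold below are invented.

from statistics import mean

# Open-finding counts per team for the last four weekly scans (oldest first).
weekly_findings = {
    "payments-it": [120, 118, 119, 121],  # flat: patching process keeps pace
    "plant-ops":   [40, 55, 72, 90],      # climbing: likely process weakness
}

def trend(counts):
    """Average week-over-week change in open findings."""
    deltas = [later - earlier for earlier, later in zip(counts, counts[1:])]
    return mean(deltas)

def teams_to_investigate(data, threshold=5):
    """Flag teams whose backlog grows faster than the tolerance threshold."""
    return sorted(team for team, counts in data.items()
                  if trend(counts) > threshold)

print(teams_to_investigate(weekly_findings))  # ['plant-ops']
```

The point of the sketch is that the interesting output is not any single finding but the slope: a team whose backlog climbs week after week has a process problem, regardless of which patches are missing this week.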

By contrast, a useful audit will examine the organization's ability to obtain data regularly on potential weaknesses rather than reporting the obvious: that such weaknesses exist.  Such an approach begins by looking at process rather than results.  For example, the right question to ask is usually "How are configuration changes performed?" rather than "What is the current configuration?"  However, it being an audit, some sampling should occur to confirm that people are doing what they say they're doing.  If the auditor is told that firewall administrators always include a comment explaining why a rule change was made, then some simple spot checks would suffice.  On the other hand, if the auditor is told that such comments are made inconsistently, then there is no point noting all the places where comments are missing.  The key problem is the process, and understanding why the process is not executed consistently is more helpful in improving security than simply focusing on the outcome.  Having a big, long checklist filled with easy-to-fix findings may give managers the feeling that the audit was worth it, but it drives the technology folks nuts, particularly when the audit tells them what they already know.  What they want instead is ammunition to use with management to obtain more resources, such as additional tools, additional or different staff, and more training.  What management needs are ways to judge whether the right business processes are deployed with the right people behind them.  Telling management that six application patches are missing from 254 workstations is of little value.
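The firewall spot check described above can be sketched in a few lines. The ruleset export format, rule IDs, and comment field are invented for illustration; a real audit would sample from whatever export the firewall product actually produces.

```python
# Hypothetical sketch of an audit spot check: sample rules from a firewall
# ruleset export and report which sampled changes lack an explanatory comment.
# The export structure, field names, and rule IDs are invented.

import random

ruleset = [
    {"id": 101, "action": "allow", "comment": "CHG-2231: open HTTPS to DMZ"},
    {"id": 102, "action": "deny",  "comment": ""},
    {"id": 103, "action": "allow", "comment": "CHG-2240: vendor VPN"},
    {"id": 104, "action": "allow", "comment": ""},
]

def spot_check(rules, sample_size, seed=None):
    """Sample rules and return IDs of sampled rules missing a comment."""
    rng = random.Random(seed)
    sample = rng.sample(rules, min(sample_size, len(rules)))
    return sorted(r["id"] for r in sample if not r["comment"].strip())

# Sampling the whole tiny ruleset finds the two undocumented changes.
print(spot_check(ruleset, sample_size=4))  # [102, 104]
```

Note the design choice the essay implies: if the sample comes back clean, the auditor moves on; if it comes back dirty, the next step is to ask why the commenting process fails, not to enumerate every uncommented rule.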

Critics of my suggested approach would say that I'm really talking about consulting services and business process re-engineering rather than auditing.  It's a fair criticism but not entirely accurate.  A good audit gets to the root of the problem.  It may not argue that someone should be fired, but it does note where someone lacks the skills to do a job.  Moreover, the auditor may not be the one who redesigns a process, but he/she is the one to note where a process is broken.  For too long, we've relied on individuals too interested in unearthing all that is broken while caring little about why it is broken.  Penetration testing, vulnerability scans, and configuration checks have their place, but they are ultimately a very small part of improving security.  The sooner we realize that, the sooner we can start solving problems and stop playing Whack-A-Mole.