One big problem in security is the tendency to think in binary terms. Either you’re breached or you’re not; either you’re secure or you’re not. But this black-and-white worldview causes both technology problems and people problems.
Is it possible to be a “little bit pwned”? Most would say no, although if you’re used to dealing with a wide spectrum of incidents within an organization, you know that there’s certainly a grey area. If all the evidence points to the presence of an external attacker and confidential data is known to have left the servers, then that’s a clear breach. If two regions are arguing about jurisdiction over a server and one changes the root password and locks the other one out, that’s more of a business judgment call. And sometimes when you’re investigating something weird and it doesn’t pan out, you may never know for sure whether it was really a breach or just a blip.
Another problem is that the business leaders probably don’t believe in 100% security any more than you do. They want to find a level that they feel is “secure enough,” and in order to do that, they need to first know “how secure we are.” Unfortunately, this is an area where security professionals usually don’t have a good answer, unless they’re wizards at quantitative risk analysis.
“How secure are we?” can’t be answered with “five,” or “red,” or “here’s the audit report.” The answer will always be a variation on, “It depends. What do you want to protect against, and how sure do you want to be?”
Policy is an area that clearly resists a binary state. For every security policy or configuration setting, there is an equal and opposite exception. CISOs spend a lot of time reviewing, granting and tracking exceptions to policy, whether it’s “we can’t patch that until we upgrade the database in June” or “the CEO is never going to allow an agent on her iPhone.” And every security technology needs to take that into account: a product needs a place, ideally as close to the configuration details as possible, where the administrator can make notes on exceptions. Governance, risk and compliance are all about tracking exceptions (and explaining them to auditors).
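To make the idea concrete, here is a minimal sketch of what “exceptions tracked close to the configuration” might look like in code. The field names, the example entries, and the `open_exceptions` helper are all hypothetical, invented for illustration; the point is simply that every exception carries its scope, its business justification, and a review date, which is exactly what an auditor will ask about.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class PolicyException:
    """One tracked exception to a security policy or setting."""
    rule_id: str      # the policy or configuration setting being excepted
    scope: str        # who or what the exception covers
    reason: str       # the business justification, in plain words
    approved_by: str
    expires: date     # every exception should have a review date

    def is_expired(self, today: date) -> bool:
        return today >= self.expires

# Hypothetical entries mirroring the article's scenarios
exceptions = [
    PolicyException("patch-database", "billing-db-cluster",
                    "Can't patch until the database upgrade in June",
                    "CISO", date(2024, 6, 30)),
    PolicyException("mdm-agent-required", "user:ceo",
                    "CEO will not allow an agent on her iPhone",
                    "CISO", date(2024, 12, 31)),
]

def open_exceptions(entries, today):
    """What an auditor would ask for: exceptions still in force."""
    return [e for e in entries if not e.is_expired(today)]
```

Keeping this record next to the setting itself, rather than in a separate spreadsheet, is what makes the exception explainable months later.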
In the same vein, security products – particularly those that provide enforcement, such as blocking or session termination – need to provide a way for the customer to do what I call incremental policy implementation. Many organizations are afraid to put something in-line or turn the blocking on, because they don’t know what will break as a result. (This is one reason why web application firewalls can be deployed and turned on, but not actually be set to stop anything.)
Here’s what it would look like: let’s say that you spot an unusual event or activity on a system. You’re not sure what it is, and you’re not sure whether it’s good or bad. Should you try blocking it once, just to see what happens? What if you know it’s one user misbehaving – could you block it once to see whether he tries it again? If so, then you could block it permanently, but maybe just for him. Or you want to try rolling out a new rule, but only on a few beta testers. One of them has a problem, so you want to roll back the rule in that one case but keep it for the others. There’s a lot of testing and jiggering that goes on when you deploy a new security control, and the technology needs to make that as easy as possible.
Flexibility in security doesn’t just mean the ability to integrate with a lot of other tools, or adjustable speed and scale. It also means smashing the binary, and letting users go back to turning knobs, whether it’s to 1 or 11. Security will work much better within the real world when it moves beyond “on” and “off.”