We're all familiar with the attacker versus defender dynamic, and how it plays out culturally in the security industry -- just say the word "cyber" and see who winces, for example. But it all used to stay "in the family," where red and blue team activities were confined to security professionals, either within security vendor companies or within organizations that had their own security staff.
The dynamic is now bleeding out into the wider world, as security vulnerabilities are being identified and publicized in the products of those who are not security insiders -- and indeed, who have never before had to think about software and interconnectedness. People who are new to security are not steeped in the culture, and are not privy to the "conventional wisdom" that keeps (white hat) attackers and defenders from escalating hostilities.
The first problem is distinguishing white hat researchers from black hat ones. Can you describe the differences well enough to codify them in legislation that would be fairer than the current Computer Fraud and Abuse Act? If you can't, then how can we expect the rest of the world to figure it out?
- "Black hat researchers look for vulnerabilities in someone else's system without permission." Nope, that won't work. That covers too many of the rockstar presentations at conferences these days.
- "Black hat researchers sell the vulnerabilities that they find to third parties for a profit." Won't work either, especially as people develop models on economics and markets for vulnerabilities.
- "Black hat researchers extort money from organizations by threatening to expose or publicly exploit their vulnerabilities." With all the public pressure for organizations to create bug bounty programs, this doesn't look viable to me any more either.
Even white hat researchers, by and large, are looking at the issue in too one-sided a fashion, and are getting bitten. The term "responsible disclosure" started out as a good idea, but it's been abused so much that we might just need a different term going forward -- one that describes the rights and obligations of both sides without built-in assumptions.
For example: when may an organization take legal action to defend itself against a researcher?
If you said "never," go sit back down. Nobody gets blanket immunity, ever. The same goes for those who said "always." The answer is not binary, but rather a spectrum: when is it reasonable for an organization to take legal action? Determining these societal rules should be done in collaboration by both sides. And ideally, each side should come to the table with a proposal that takes into account the needs and perspectives of the other side. The discussions are already taking place, such as with the NTIA-facilitated multi-stakeholder meetings, but the agenda cannot simply be either "How do we get vendors to stop suing us?" or "How do we get researchers to stop finding our vulnerabilities?" It has to be a real commitment to understanding each other before setting other goals.
I know this is difficult. But security researchers -- especially those who still maintain the hacker love of self-determination -- will never accept having rules dictated from the outside. So the better response is to have security researchers offer to regulate themselves, and assume an equal share of the risk as well as the responsibility. Companies coming out with insecure software need to do the same on their side, but they're more used to the threat of having regulations imposed on them, particularly for safety. It's security researchers who need to adjust to this brave new world.
The alternative is one that we can't risk. Uncontrolled vulnerability research, and one-sided blaming and shaming, will result in a backlash against the security community. As soon as someone is seriously injured or killed in a demonstration of a vulnerability, the current simmering conflict will escalate to a rolling boil. We need to head this off before we all get burned.