Building an Internet of Things Risk Model in the Wake of Mirai



For many of us, Internet of Things (IoT) security has been nothing more than a cocktail party conversation.  It sounds interesting but doesn’t necessarily affect our work or personal lives, even if our job includes cybersecurity.  While it is clearly a concern for operators of medical devices or industrial control systems, it seems less relevant for the typical enterprise or consumer.  After all, even if we have a networked TV, refrigerator, or fitness tracker, the potential damage that could be done seems minimal.  However, some of the recent distributed denial of service (DDoS) attacks coordinated by the Mirai botnet may have changed that calculus.  While zombies and bots have been a fact of life for a long time, the scale we’re now facing is an order of magnitude greater.  Because so little attention is paid to security in whole classes of IoT devices, attackers can conscript thousands of devices of a given product type just by knowing a single default password.  That’s a lot different from having to entice thousands or millions of users to click on a malicious link.

In my presentation at RSAC 2015, I proposed a high-level risk model for the IoT in which I explicitly highlighted future aggregated risks and noted that externalities could be a significant consideration.  In the Mirai botnet incident, we witnessed both of those issues.  The attack would not have worked if there hadn’t been a large number of identical devices sharing the same default password, or if device owners had had an incentive to implement additional protections.  As is the case with nearly all externalities, the right stakeholders had no opportunity to weigh in.  That’s not to say that privacy issues and other risks that impact the device owner don’t exist.  As Craig Spiezle, CEO and Executive Director of the Online Trust Alliance, notes in his recent RSAC post, those risks have been well documented and consensus solutions are available.  Moreover, the government has failed to play a productive role despite the obvious broad-based public interest in addressing a “tragedy of the commons” issue where market-based incentives alone are unlikely to work.

However, even in the best of circumstances, government regulation is no panacea.  It moves slowly and tends to respond best to those who can organize widespread public support, a task that is not easy for a technology still in its infancy.  Calling for the creation of a government agency to certify and enforce security for these products, as Bruce Schneier did recently, seems a bit premature and possibly overkill.  While such initiatives may be warranted to prevent the kinds of downstream attacks recently launched by the Mirai botnet, it should be up to the buyers, not the government, to dictate the kinds of security needed for their specific use case.  Instead, the government’s best role is in promoting transparency and best practices that are applied when appropriate, but not dictated.  In essence, this bifurcated approach is something akin to how we regulate the electric grid.  Where there is potential for grid-wide disruptions along the lines of the 2003 Northeast Blackout or the 2011 Southwest Blackout, government regulation is appropriate to ensure that any individual utility’s action or inaction does not cascade out of control.  However, for isolated segments, such as distribution-only utilities, federal regulations are lighter or non-existent, and we instead rely on the utility’s own self-interest and local regulation to keep the power on for its customers.  That said, the guidance that the Department of Energy, the American Public Power Association, the National Rural Electric Cooperative Association, and others provide for cybersecurity and general power engineering best practices is still of great value.  But it is provided with the understanding that one size doesn’t fit all.

So with that in mind, I would suggest starting with the risk formula I proposed in my RSAC 2015 presentation:
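I won’t reproduce the slide here, but as a rough sketch of the underlying idea (the labels below are shorthand, not the exact notation from the deck), the risk posed by a device decomposes into the exposure borne by its owner plus the exposure externalized onto everyone else:

Risk ≈ Likelihood × (Impact to the device owner + Impact externalized to third parties)

The second term is the one Mirai exploited: the owner of a compromised DVR or camera bears almost none of the cost of the attack, which is exactly why market incentives alone are unlikely to solve the problem.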

The extrapolated risk from that formula can then be used to determine the kind of approach government, industry, and individual buyers of IoT should apply, and would likely result in the following:

  1. Where widespread damage to third parties is likely, the government should impose minimal regulations designed to limit the likelihood of these cascading impacts (e.g., the product shall require a customer-assigned password upon initial setup; a minimal sketch of such a rule follows this list)
  2. Where significant life safety issues are implicated by the product or product category, the government or industry group should set up a certification process where independent labs evaluate and certify the product based on established standards and where buyers in these regulated industries must purchase certified products or satisfy an exceptions process
  3. Government and industry groups should continue to provide best practices guidance for IoT with a heavy focus on individual use cases (e.g., drones and fitness trackers should have different requirements)
  4. Government and industry groups should promote transparency by:
    1. Encouraging and setting up a process for responsible vulnerability disclosure, backed by market-based incentives to encourage industry participation (e.g., public disclosure after a reasonable period of time, with some exceptions)
    2. Encouraging manufacturers to publish a list of all third-party libraries used by their products
    3. Developing procurement guidelines and security checklists by use case for customers to use when making purchase decisions
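
To make item 1 concrete, here is a minimal sketch, not taken from any real device firmware, of what a “customer-assigned password upon initial setup” rule could look like.  The function name and the default-credential list are purely illustrative.

```python
# Illustrative sketch of the "customer-assigned password upon initial setup"
# rule from item 1 above.  The names and the default list are hypothetical.

FACTORY_DEFAULTS = {"admin", "root", "password", "12345", "default"}

def complete_initial_setup(new_password: str) -> bool:
    """Refuse to finish first-boot setup until the owner chooses a real password."""
    if not new_password or new_password.lower() in FACTORY_DEFAULTS:
        print("Setup blocked: the password is still a known factory default.")
        return False
    if len(new_password) < 8:
        print("Setup blocked: the password must be at least 8 characters long.")
        return False
    # A real device would persist a salted hash here and only then enable its
    # remote management interfaces (telnet, SSH, web).
    print("Setup complete: remote management may now be enabled.")
    return True

if __name__ == "__main__":
    complete_initial_setup("admin")            # rejected: factory default
    complete_initial_setup("Xk9!long-enough")  # accepted
```

The specific checks matter less than the principle: a rule this small, applied across an entire product line, removes the single shared credential that made Mirai’s recruitment so trivially scalable.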

Given that IoT is in its infancy and its trajectory is still uncertain, a light touch is certainly warranted.  However, for the technology to succeed in all its forms, customers, as well as the public at large, need enough confidence to open not only their wallets but also the nation’s infrastructure to the tremendous opportunities, and risks, that these solutions represent.




