I started blogging on the "Robotics and the Law" blog hosted by The Center for Internet and Society (CIS) at Stanford Law School. The blog seeks to explore legal issues arising from the use of artificial intelligence and robotics. M. Ryan Calo, one of my fellow bloggers, runs the CIS Consumer Privacy Project. He is also Co-Chair of the Artificial Intelligence and Robotics Committee of the American Bar Association Section of Science & Technology Law.
What does this new blog have to do with data protection? I am guessing that many of my posts on the robotics law blog will not be of interest to you. Nonetheless, one area of crossover interest comes to mind.
One of the scenarios we are discussing in the ABA AI and Robotics Committee is whether the hacking of a robot to make it do bad things could give rise to liability for the robot's manufacturer or the software developer for failing to prevent the hack. Ryan Calo cites the hacking of a personal robot for the purpose of vandalism or harassment as one example. Other possibilities easily come to mind, though. For instance, hacking robots could be a means to accomplish financial fraud, industrial espionage, stalking, or violations of privacy.
When we think of hacking in 2010, we normally think of attacks on servers, personal computers, and mobile devices. In future decades, however, we will need to implement similar security controls for our robots. Robots are, after all, computers. In the short run, we will need to focus on secure design to protect robots from the kinds of attacks plaguing other forms of computers. We have already seen secure design concerns raised about pacemakers and other medical devices. In the coming age of personal and service robots, we will need to secure our robots as well.
Stephen Wu
Partner, Cooke Kobrick & Wu LLP