Experts Debate the ‘Threat’ of Autonomous Cars and Smart Robots


Posted by David Needle

With the recent news that a Google self-driving car hit a bus because its human driver assumed (incorrectly) that the bus would slow down for it, an RSA panel entitled "Robot Cars, Risk and Ethics: Lessons for Artificial Intelligence" was particularly timely.

“The poster boy for artificial intelligence is learning to drive a car, but I want to emphasize that A.I. has a large role in cyber security,” said moderator Kevin Kelly, “Senior Maverick” at Wired magazine. “We’re talking about cars, but really the same issues apply when we apply security to any large data sets.”  

Kelly started off by asking the panel of experts to imagine a scenario that the CEO of a driverless or autonomous car company might face: what do you do about reports that the car's A.I. will have to "decide" which people to kill?

It's not that far-fetched. Basically, the issue is what happens when a driverless car is confronted with the choice between running over a person it doesn't have time to brake for and swerving away, likely killing five people on the sidewalk.

The intuitive answer might be to hit the one person in front rather than swerve and kill an entire group. But it's not that simple.

First of all, the CEO might ask whether the car's A.I. designers really have to plan for a scenario that may never happen. But Patrick Lin, Director of the Ethics + Emerging Sciences Group at California Polytechnic State University, said such planning is essential.

“You have to isolate your design principles and test them,” said Lin. “Is the car going to be designed to always obey the law or always protect the user? You have to examine how that would work, test it and see if it breaks.”
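Lin's "isolate and test" advice can be made concrete. Below is a minimal, purely hypothetical Python sketch of what encoding a single design principle and probing it with an edge case might look like; the Scenario fields, policy names, and harm estimates are all invented for illustration and are not drawn from the panel:

```python
from dataclasses import dataclass

# Hypothetical design principles -- illustrative labels only.
OBEY_LAW = "obey_law"
MINIMIZE_HARM = "minimize_harm"

@dataclass
class Scenario:
    brake_harm: int        # estimated injuries if the car brakes in its lane
    swerve_harm: int       # estimated injuries if the car swerves
    swerve_is_legal: bool  # whether swerving would violate traffic law

def choose_action(s: Scenario, policy: str) -> str:
    """Return 'brake' or 'swerve' under one explicit design principle."""
    if policy == OBEY_LAW and not s.swerve_is_legal:
        return "brake"  # an illegal maneuver is simply off the table
    # Otherwise pick whichever option minimizes estimated harm.
    return "brake" if s.brake_harm <= s.swerve_harm else "swerve"

# "Test it and see if it breaks": the same emergency under two principles
# yields two different answers -- exactly the conflict the panel described.
edge_case = Scenario(brake_harm=5, swerve_harm=1, swerve_is_legal=False)
assert choose_action(edge_case, OBEY_LAW) == "brake"
assert choose_action(edge_case, MINIMIZE_HARM) == "swerve"
```

Even a toy like this shows why the principles have to be made explicit before testing: until the designer commits to one, the "correct" output for the edge case is undefined.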

Another panelist, Steven Wu, an attorney with the Silicon Valley Law Group, said it’s not just cars we have to worry about. “Robot surgery, personal service robots, and drones will be out there. The interaction between humans and machines is going to be the big issue of the next decade.”

The panel didn't come to any consensus on how, or even whether, an autonomous car could be designed to choose between killing one person or many. One panelist noted that Google has essentially punted, for now, on what's referred to as the "hand-off" issue. Hand-off refers to getting the car, and theoretically the car manufacturer, off the hook by turning control back to the human driver in an emergency.

But Jerry Kaplan, a Visiting Lecturer in Computer Science at Stanford, said the hand-off idea is "one of the silliest things I've ever heard. Once you have an autonomous car, you'll be in the back seat napping. The car makers want to hand off the problem, but that's a bad solution."

He said the answer is better design.

In fact, Kaplan is bullish on autonomous cars, which he said could add as much as one to two trillion dollars a year to the U.S. economy. "That's not small potatoes, but we need to consider the social implications." (Kaplan is also the author of the book Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence.)


