Same Exploits, New Target

I can’t wait for self-driving cars to hit the market. If there’s one thing I don’t want to waste my time doing, it’s driving. Unfortunately, public transportation is limited by its routes and schedules. Why should I plan my entire day around the whims of a public transportation provider when I can have the best of both worlds?

But a lot of people don’t agree with me. The biggest argument I hear against self-driving cars is that they could make mistakes. This is an especially laughable argument since humans make mistakes while driving frequently enough to kill over 30,000 people per year in the United States alone. Another argument against self-driving cars is that hackers can more easily manipulate them into performing undesirable actions such as going to an incorrect destination or crashing. Seemingly supporting this argument is recent research that demonstrated how self-driving cars could be manipulated by feeding their sensors faulty data:

The multi-thousand-dollar laser ranging (lidar) systems that most self-driving cars rely on to sense obstacles can be hacked by a setup costing just $60, according to a security researcher.

“I can take echoes of a fake car and put them at any location I want,” says Jonathan Petit, Principal Scientist at Security Innovation, a software security company. “And I can do the same with a pedestrian or a wall.”

Using such a system, attackers could trick a self-driving car into thinking something is directly ahead of it, thus forcing it to slow down. Or they could overwhelm it with so many spurious signals that the car would not move at all for fear of hitting phantom obstacles.
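To see why a $60 rig is enough, consider the timing involved: a lidar infers range from pulse time-of-flight, so an attacker who records a pulse and replays it with a chosen delay controls exactly where the phantom echo appears. The Python sketch below works through that arithmetic; the numbers and function names are illustrative, not drawn from Petit’s actual setup.

```python
# Toy illustration of why lidar spoofing works: range is inferred
# from pulse time-of-flight, so replaying a recorded pulse with a
# chosen delay places a phantom "echo" at any range the attacker
# wants. All names and values here are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def range_from_echo(round_trip_s: float) -> float:
    """Range the lidar infers from a round-trip pulse time."""
    return C * round_trip_s / 2

def replay_delay_for(phantom_range_m: float) -> float:
    """Delay to add to a replayed pulse to fake an echo at a given range."""
    return 2 * phantom_range_m / C

if __name__ == "__main__":
    # Fake a car 20 m ahead: replay the captured pulse ~133 ns later.
    delay = replay_delay_for(20.0)
    print(f"replay delay: {delay * 1e9:.1f} ns")
    print(f"lidar infers: {range_from_echo(delay):.1f} m")
```

Nanosecond-precision timing sounds exotic, but it’s well within reach of cheap off-the-shelf electronics, which is what makes the attack so inexpensive.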

This isn’t as damning as many people are making it sound. While the target has changed, the exploit hasn’t. You can cause all sorts of havoc by feeding human drivers false data. Drivers have driven into bodies of water because their navigation software fed them incorrect directions. Putting up fake road signs can trick people into taking wrong roads. Broadcasting a fake emergency message over an FM transmitter can sow chaos of its own.

Humans, like machines, use sensory input to make decisions, and that input can be exploited; it’s how many less-lethal weapons, such as flashbangs, work. Where machines differ is that they’re easier to update to protect against sensory exploitation: spoofing attacks like this one are likely correctable with software updates.
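As a rough illustration of what such an update might look like, here is a minimal Python sketch of one possible defense: rejecting detections that don’t persist coherently across consecutive lidar frames. The class name, thresholds, and approach are illustrative assumptions on my part, not any vendor’s actual pipeline; production systems fuse lidar with radar and cameras and do far more.

```python
# Minimal sketch of one software-side defense: reject detections that
# don't persist coherently across consecutive lidar frames. A spoofed
# echo that blinks in and out never gets confirmed. Thresholds and
# names are illustrative assumptions, not a real vendor's pipeline.

from collections import deque

class PhantomFilter:
    def __init__(self, frames_required: int = 3, max_jump_m: float = 2.0):
        self.frames_required = frames_required  # frames an object must persist
        self.max_jump_m = max_jump_m            # max plausible motion per frame
        self.history: deque[list[float]] = deque(maxlen=frames_required)

    def accept(self, detections_m: list[float]) -> list[float]:
        """Return only detections seen consistently across recent frames."""
        self.history.append(detections_m)
        if len(self.history) < self.frames_required:
            return []  # not enough evidence yet
        confirmed = []
        for d in detections_m:
            # Require a nearby detection in every recent frame.
            if all(any(abs(d - p) <= self.max_jump_m for p in frame)
                   for frame in self.history):
                confirmed.append(d)
        return confirmed

if __name__ == "__main__":
    f = PhantomFilter()
    print(f.accept([20.0]))        # [] - first sighting, unconfirmed
    print(f.accept([19.5, 80.0]))  # [] - still gathering evidence
    print(f.accept([19.0]))        # [19.0] - persistent object confirmed
```

The point isn’t that this particular filter is the fix; it’s that a fix can be shipped to every car in a fleet overnight, which is more than can be said for retraining every human driver.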

As machines continue to replace the need for human labor, let us not forget that many of the weaknesses present in machines are also present in ourselves.