
Before Skynet, We Need to Worry About Robot Hackers

Getty Images / Carl Court

We all want a robot butler, but what if that faithful mechanical servant could be turned into a live-in spy, or even a robotic assailant? With the Internet of Things rapidly creating astonishingly wide security holes where none existed before, researchers are sounding the alarm that robots could make our current problems seem cute by comparison.

A study titled “Hacking Robots Before Skynet,” by Seattle-based IOActive Labs, reports that security vulnerabilities run the gamut of manufacturers and distributors. You might not have heard of companies like Ubtech Robotics or Universal Robots, but these are the names that make up much of the nascent non-industrial robot business, and their ’bots aren’t immune to cyberthieves.

These robots were found to be vulnerable to all the cyberattacks you’d expect from a new industry pushing always-online software. Some of the threats are mundane — “Oh, my new robot might listen to me? Wake me up when it’s more dangerous than my thermostat” — but others are thoroughly new. The IOActive report determined that some vulnerabilities were severe enough to let a compromised robot cause “physical harm” to the people around it.

But, at the end of the day, hacks are hacks, and the very first old lady knocked down by a hacked robot will bring stringent security regulations down on the industry. Robots, like every other piece of complex connected tech, will always be hackable, but the currently abysmal level of security found on most connected devices simply cannot continue for long — and so, it most likely won’t.

More worrying in the long term is the idea of robot manipulation that doesn’t so much hack a robot, as screw with its delicate little brain. Imagine a self-driving car that, unbeknownst to its passengers, no longer knows how to drive.

Adversarial Inputs

The technique was actually invented by A.I. researchers, and it sees wide use today to help train neural networks. Called “adversarial inputs,” these confusing stimuli present deliberately difficult imagery to the A.I., and researchers record how the A.I. reacts. They’re used primarily to teach the A.I. how to find the patterns it needs to find even when they’re obscured, but the power of these deliberately difficult examples to trip up even advanced A.I.s is troubling when we imagine those A.I.s steering robots with a physical presence in a room.

In one well-known example from a research paper on the technique, a picture of a panda (correctly identified by the A.I.) was blended with what looks like a picture of static to trick the image-recognition algorithm. Looking at the blended image, the A.I. saw a gibbon, and claimed very high confidence in that prediction.
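To make the panda trick concrete, here is a minimal sketch of that kind of attack (the fast gradient sign method) written in PyTorch. It is an illustration under stated assumptions, not the study’s code: `model`, `image`, and `true_label` are hypothetical placeholders standing in for any pretrained image classifier and one of its inputs.

```python
# A sketch of the "blend a panda with static" attack described above
# (the fast gradient sign method). `model`, `image`, and `true_label`
# are hypothetical placeholders for a pretrained classifier, an input
# image tensor, and its correct label.
import torch
import torch.nn.functional as F

def fgsm_adversarial_example(model, image, true_label, epsilon=0.007):
    """Add a nearly invisible, carefully chosen 'static' pattern to an
    image so that the classifier confidently mislabels it."""
    image = image.clone().detach().requires_grad_(True)

    # Ask the model for its prediction and measure how wrong it is.
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()

    # The 'static': the sign of the loss gradient with respect to each
    # pixel, scaled down so the change is imperceptible to a human.
    perturbation = epsilon * image.grad.sign()

    # The blended picture: still a panda to our eyes, something else
    # entirely to the network.
    adversarial = (image + perturbation).clamp(0, 1)
    return adversarial.detach()
```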

Adversarial inputs are used specifically to train A.I.s to deal with unforeseen problems, and they’re very useful when used as intended, but the same confusion induced in a robot out in the world could spell disaster.
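That training use looks roughly like the sketch below, which reuses the hypothetical fgsm_adversarial_example helper from the previous snippet (the model, data_loader, and optimizer names are likewise placeholders): each batch of ordinary images is paired with perturbed copies of itself, so the network learns to classify both.

```python
# Rough sketch of adversarial training, assuming the hypothetical
# fgsm_adversarial_example helper defined above, plus a standard
# `model`, `data_loader`, and `optimizer`.
for images, labels in data_loader:
    # Generate confusing versions of this batch on the fly.
    adv_images = fgsm_adversarial_example(model, images, labels)

    optimizer.zero_grad()
    # Penalize mistakes on both the clean and the adversarial images.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(adv_images), labels))
    loss.backward()
    optimizer.step()
```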

These are the sorts of misidentifications that could lead a self-driving car to see a crosswalk as an open pit, or a robot butler to see a human head as a volleyball. Stationary, highly specialized robots have killed workers while assembling cars in multiple countries, so we have to assume that mobile, autonomous or semi-autonomous robots will be even more capable of reaching a lethal level of confusion or malfunction.

Most people will probably never run afoul of hacker-assassins trying to make a helpful robot go cuckoo with malicious inputs. Accidental examples are hard to come by, too, since this isn’t just a matter of making an algorithm fail — you have to make it fail in a way that leaves it convinced it has succeeded.

The idea of rogue robots is not new, but it’s also no longer theoretical. This sort of adversarial stress testing of artificial intelligence will be vitally important going forward, but it will have to come along with real reform of the oversight governing the release of products designed to integrate this deeply into our lives.
