
Scientists Say Controlling A.I. Will Be Impossible for 3 Reasons


Holding artificial intelligence accountable for its actions is easier said than done, argues a team of U.K.-based researchers.

In a paper published this week in the journal Science Robotics, researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi point out that policing robotics is extremely difficult, and that as artificial intelligence becomes more widespread, it’s going to become a greater problem for society. They identify three distinct reasons robots and A.I. will be hard to regulate: the diversity of their applications, their lack of transparency, and the way they are constructed.

Problem 1: Diversity

The problem we face in policing A.I. is exemplified by the Random Darknet Shopper, a bot built by a group of Swiss artists in late 2014 to buy random items from the darknet. The artificial intelligence ended up purchasing ecstasy, a fake Hungarian passport, and other counterfeit items. The Swiss police confiscated the robot and its purchases but returned it (minus the drugs), and the artists responsible for the project were not charged.

Here, although the robot had a specific purpose, its illegal purchases were unintended and caused no harm, so the police didn’t press charges. But it’s easy to imagine someone building an A.I. to do the same thing with less honorable intentions. This is a problem. “The inscrutability and the diversity of A.I. complicate the legal codification of rights, which, if too broad or narrow, can inadvertently hamper innovation or provide little meaningful protection,” write the researchers.

Problem 2: Transparency

Say you decide to build your A.I. around a neural network so it learns better and faster. The trade-off is that you won’t be able to say exactly why it does what it does. Right now, this is the go-to strategy for building A.I. that can handle complex tasks like image analysis, and that very success makes policing A.I. even more difficult: the more popular these opaque systems become, the more decisions get made for reasons no one can inspect.

If you can’t see why the Random Darknet Shopper buys what it buys, determining that it’s harmless is nearly impossible.
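To make that opacity concrete, here is a minimal sketch in Python (my own illustration, assuming the scikit-learn library; none of this code comes from the paper). Even with every learned weight of a small neural network laid out in front of you, the numbers give no human-readable reason for any single prediction.

```python
# A toy neural network: fully inspectable parameters, no explanations.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic binary-classification data standing in for, say, image features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# A small multilayer perceptron; production systems have millions of weights.
net = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
net.fit(X, y)

# Total transparency at the parameter level...
print([w.shape for w in net.coefs_])  # [(20, 32), (32, 32), (32, 1)]

# ...but no stated rationale for any individual decision.
print(net.predict(X[:1]))  # a bare label, with no "why" attached
```

A regulator who confiscated this model could read every number inside it and still be unable to say why it labeled one input one way and not another.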

Problem 3: Construction

“Concerns about fairness, transparency, interpretability, and accountability are equivalent, have the same genesis, and must be addressed together, regardless of the mix of hardware, software, and data involved,” argue the researchers.

We tend to think of robots and A.I. as separate entities, but as tools like facial recognition software find their way into robotic cops, the difference becomes less clear. If the facial recognition is racist, we could end up building racist robo-cops, which means we have to regulate robots that use A.I. as well. And it becomes even more difficult when A.I. can build other A.I., which Google demonstrated at the end of May.
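To see the mechanism, here is a hedged toy example (again my own sketch, not the researchers’ code or any real facial recognition system): a classifier trained mostly on one group’s data learns that group’s pattern and systematically misreads the other group.

```python
# Toy demonstration of bias inherited from skewed training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift):
    """Two-feature data; `shift` controls where the positive class sits."""
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 2)) + np.outer(y, [shift, shift])
    return X, y

# Group A dominates the training set; group B's pattern runs the other way.
Xa, ya = sample(900, shift=2.0)
Xb, yb = sample(100, shift=-2.0)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The model looks accurate for group A but has learned group A's pattern,
# so it misses essentially every positive case in group B.
print("group A accuracy:", model.score(*sample(1000, shift=2.0)))   # high
print("group B accuracy:", model.score(*sample(1000, shift=-2.0)))  # near coin-flip
```

Nothing in the code is malicious; the disparity comes entirely from which data the system was trained on, which is exactly why it is so hard to catch by inspecting the system after the fact.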

In the end, the solution requires regulations of a precision, and tools for interpreting black-box systems, that we don’t yet have. “The civil law resolution on robotics similarly struggles to define precise accountability mechanisms,” write the researchers. And as A.I. continues to spread, that problem will only get worse.
