
Could asking A.I. the wrong question lead to humanity’s downfall?

They're not trying to kill us, but they might still do it by accident.

Photo: Andy Kelly / Unsplash

From Stephen Hawking to Elon Musk, many of the world’s top thinkers have expressed their fears about an eventual robot uprising. But according to Dr. Stuart Russell, an A.I. researcher and computer scientist at UC Berkeley and author of the new book Human Compatible: Artificial Intelligence and the Problem of Control, the fear shouldn’t be that A.I. will disobey our commands, but that it might follow them too well, potentially causing us harm in the process.

“A machine pursuing an objective that isn’t the right one becomes, in effect, an enemy of the human race. An enemy that’s much more powerful than us,” Russell told BBC Radio 4 Monday.

To illustrate his point in the interview, Russell imagined an A.I. whose job it was to solve climate change.

“For example… [imagine] we have a very intelligent climate control system at some point in the future and we want to return carbon dioxide levels to preindustrial concentrations so that our climate gets back into balance,” Russell says. “And the system figures out, well, the easiest way is to get rid of all the human beings.”

Russell goes on to say that even if you were to restrict the A.I.’s possible solutions to only those that don’t kill any humans, it might instead suggest, or even convince, humanity to have fewer children, which would eventually achieve the same zero-human goal.

What’s at stake here, as other theorists have explained before, is a set of shared values and objectives and a clear agreement on how to carry them out. But, as you might expect, achieving that is easier said than done.

In 1942, author Isaac Asimov set out to define a set of laws for robotics, three supposedly all-encompassing guidelines:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

However, as Asimov himself showed in his writing, these laws aren’t necessarily foolproof. They require both humans and robots to agree on a set of values: what constitutes harm, and does one kind of harm ever outweigh another?

This catch-22 is illustrated by Russell’s climate change example: climate change harms humans, but so does eliminating humanity to fix it.

To resolve this dilemma, organizations like the IEEE Global Initiative have worked to bring together representatives from cultures across the globe to create a 290-page report, called Ethically Aligned Design, that lays out cultural values and ethical goals for smart A.I. design with more nuance.

“It’s globally created by experts, and then globally crowdsourced by more experts to edit. It’s a resource,” director of the program, John Havens, told Inverse in March 2019. “It’s a syllabus. It’s sort of a seminal must-have syllabus in the algorithmic age.”
