
Elon Musk’s OpenAI Can Be Muscle for Safe Artificial Intelligence Research

A billion dollars can disrupt the R&D process, but not all open source is created equal.


Over the weekend, Elon Musk, Sam Altman, and other Silicon Valley bigwigs unexpectedly announced the launch of OpenAI, a nonprofit company that describes its goal as “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

The announcement continues: “Since our research is free from financial obligations, we can better focus on a positive human impact. We believe A.I. should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

There are two major takeaways from this announcement. The first is that Musk and company see A.I. research right now as too narrowly focused on achieving short-term practical value, and they want to help push forward endeavors that are taking a broader, more comprehensive approach to developing A.I. systems. The second, more veiled takeaway is that Musk is still concerned about A.I.’s potential to do harm if we develop and introduce half-baked systems. Musk has previously called A.I. “our greatest existential threat” and suggested that regulatory oversight might be necessary to prevent potential dangers. OpenAI could be a way of providing that oversight without the need for a federal watchdog.

What are Musk and his colleagues doing that’s different from what others have done? After all, open source A.I. is nothing new: Google open-sourced its own TensorFlow software last month, and releasing code this way has become fairly standard procedure. But with an endowment of $1 billion, OpenAI is already one of the biggest organizations working on A.I. research, and quite easily the biggest nonprofit. Its goal is to make its results publicly available and all its patents royalty-free. They have a carrot and a stick, and they intend to use them to steer an industry.

The idea is to use OpenAI as a kind of counterweight to large corporations, private research labs, and even world governments that can pour incredible amounts of money and resources into their own A.I. research, research that may not be devoted to friendly purposes and could instead lead to technologies with nefarious applications. Researchers working with OpenAI will pursue ‘friendly’ A.I. without the pressure to show immediate results, which leaves more room for the safety and design considerations that private businesses might be inclined to ignore. There won’t be an ivory tower where only a select few control the path of A.I. development; the field is meant to be open to everyone.

In an interview with Backchannel, Musk also emphasizes a desire to make a wide variety of A.I. systems. “We want A.I. to be widespread,” he says. “We think probably many is good.”

That’s important, because it segues into perhaps the primary reason Musk and Altman want to democratize A.I. They believe the best kind of A.I., the kind that will benefit humanity without creating robot overlords, is the kind that acts as “an extension of individual human will,” as Musk puts it. He goes on to say:

“As in an A.I. extension of yourself, such that each person is essentially symbiotic with AI as opposed to the AI being a large central intelligence that’s kind of an other. If you think about how you use, say, applications on the internet, you’ve got your email and you’ve got the social media and with apps on your phones — they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that. And we’ve found a number of like-minded engineers and researchers in the AI field who feel similarly.”

Basically, the OpenAI vision for A.I. research is a decentralized community that helps connect human beings to one another and to the world itself.

Musk hasn’t yet outlined how OpenAI would prevent its own research, or research repurposed by outside groups, from going in a dangerous direction. And there’s something else worth noting: open source systems only make sense if the data used to build machine-learning algorithms is freely shared, too. An intelligent system has to learn how to be intelligent, and to do that, it needs data and experience. It’s like building a dam: you need both the blueprints (the algorithm) and the building materials (the data) to actually make something that will hold back water.

That’s why people are still limited in what they can do with Google’s TensorFlow: they don’t have the data Google has. And unless the collaborations built out of OpenAI allow free access to the same data the in-house researchers are working with, Musk’s vision won’t necessarily be realized, or at least not in the way he’s promoting it.
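To make the blueprints-versus-materials point concrete, here is a minimal sketch using TensorFlow’s original 1.x-era graph API, the version Google released last month. The model definition below is the kind of “blueprint” an open source release gives you; the random arrays are a purely illustrative stand-in for the training data that such a release does not include.

```python
import numpy as np
import tensorflow as tf

# The freely shared "blueprint": a tiny softmax classifier whose full
# architecture anyone can read and reproduce from the open source code.
x = tf.placeholder(tf.float32, shape=[None, 784])   # input features
y = tf.placeholder(tf.float32, shape=[None, 10])    # target labels
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The missing "building materials": random noise stands in for a real
    # training corpus. A model trained on noise learns nothing useful,
    # which is the point: whoever holds the real data holds the value.
    fake_features = np.random.rand(100, 784).astype(np.float32)
    fake_labels = np.eye(10, dtype=np.float32)[np.random.randint(0, 10, 100)]
    for _ in range(10):
        sess.run(train_step, feed_dict={x: fake_features, y: fake_labels})
```

Fed a real labeled dataset, those same dozen lines learn a working classifier; fed noise, they produce nothing. That gap is the difference between releasing an algorithm and releasing an A.I.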

Musk, Altman, and the other founders of OpenAI have begun something that will have tremendous influence on the growth of A.I. from here on out. At the same time, they will need to be forthright in their commitment to fostering an open source research community, which means being public about their plans for the nonprofit and being willing to share data. Otherwise, OpenAI won’t be much different from the flock of companies it was designed to herd.
