
OpenAI co-founder Elon Musk says secretive A.I. firm 'should be more open'

Musk, worried about the future of A.I., sounds the alarm.


Elon Musk, a co-founder of artificial intelligence firm OpenAI, has suggested that the San Francisco-based company is not living up to its name.

OpenAI was founded as a non-profit in late 2015 with a mission to "advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return." But Musk, who stepped down from the board in February 2018 and now has "no control" over OpenAI, declared on Twitter Monday that the company "should be more open." The comments were a change of tone for Musk, who in August 2018 congratulated the firm on winning a human-versus-A.I. video game event.

"All orgs developing advanced AI should be regulated, including Tesla," Musk wrote on Monday, later calling for "both" national government and international regulation.

Musk was writing in response to an MIT Technology Review story published on Monday, which looks at how the organization has gradually altered its approach. Though OpenAI was originally founded as a non-profit to ensure its research would benefit all, its focus subtly shifted in April 2018 with a new charter that anticipated "needing to marshal substantial resources" while pledging to "always diligently act to minimize conflicts of interest." In March 2019 it set up a capped-profit arm, with returns for its first round of investors limited to 100 times their investment.

Four months later, in July 2019, the company announced a $1 billion investment from Microsoft. At the time, a Twitter user called "Smerity" asked whether the non-profit was quietly slipping toward a for-profit model, using the early goodwill as a springboard.

"Unfortunately, I must agree that these are reasonable concerns," Musk responded to the months-old post on Monday.

Musk's Twitter post. Elon Musk/Twitter

Concerns around OpenAI don't stop at its finances. In February 2019 it announced GPT-2, a language model that can generate convincing new text. OpenAI initially refused to release the full system, fearing it could be used for disinformation. Britt Paris, an associate professor at Rutgers University, accused the group of trying to "capitalize off of panic" in comments to MIT Tech Review. Over time, the group founded on openness has grown increasingly secretive about its research.

Musk has repeatedly called for greater regulation, fearing that a super-smart A.I. could enslave humanity and that governments could be too slow to respond. But while OpenAI might seem aligned with these goals, in February 2019 Musk explained that part of the reason he stepped down from the board was that "I didn’t agree with some of what OpenAI team wanted to do."

On Monday Musk seemed to intensify his criticism. He singled out Dario Amodei, a former Google researcher who now serves as a research director at OpenAI and shapes its strategy. Under his leadership, teams have followed two tracks: one develops A.I. systems by exploring a variety of approaches, while the other works on how to make those systems safe.

"Confidence in Dario for safety is not high," Musk wrote.

Musk has previously committed significant resources to OpenAI. He was among the donors who together pledged $1 billion to the non-profit at its founding in 2015. Documents revealed in January 2019 that his Musk Foundation donated $10 million in 2016 to YC.org, an organization owned by OpenAI co-founder Sam Altman. Later that year, YC.org gave $10 million to OpenAI.

Musk's present-day efforts in A.I. appear to focus on Neuralink, which shares a building with OpenAI. The firm, which detailed its research at a July 2019 event, is developing a chip that would let human brains interact directly with computers. It aims to begin testing on a human patient with quadriplegia by the end of this year.

The long-term goal of Musk's new firm? To create a symbiotic relationship with artificial intelligence, ensuring that humanity can avoid being enslaved by super-smart machines.
