Science

The United States Needs a Federal Robotics Agency Before It's Too Late

It could prevent disaster.

A car factory with robotic elements putting segments together
Getty Images / Christopher Furlong

The United States has never had a greater need for a centralized body to offer guidance and expertise in the complex field of robotics and artificial intelligence. A.I. is getting baked into everything from online shopping experiences to Uber to autonomous driving technology. And that is to say nothing of how automation is soon going to disrupt the job market.

“The next wave of economic dislocation won’t come from overseas,” said former President Barack Obama during his recent farewell address in Chicago. “It will come from the relentless pace of automation that makes many good, middle-class jobs obsolete.”

Obama didn’t propose an overarching federal agency to regulate industry against recklessly sliding into that dislocation, though. As an outgoing president, it wouldn’t have meant much; instead, he called for a new social compact that guarantees every child an education to deal with the impending disruption. Meanwhile, new President Donald Trump is wrapping up his first full week in office. It is unlikely he’ll do anything to establish a regulatory body to make sure industry doesn’t run away with A.I. or robotics at the expense of the greater populace. The Washington Post floated the idea years ago, and it has only become more critical.

As it stands, regulation of robotics and A.I. is spread across the federal and state levels. Federal agencies like the FAA, the SEC, and the National Highway Traffic Safety Administration each have some role in issuing regulations that touch on robotics and A.I., but the approach is largely scattershot: it leaves inconsistencies and gaps, and it suffers from a dearth of permanent expertise within the federal government. Eight states and Washington, D.C. have laws on the books that govern the use of autonomous cars, for instance, but experts say the existing system doesn’t come close to the rules that govern airplane safety.

John Frank Weaver, a lawyer in Boston, Massachusetts, who specializes in A.I. law, thinks the time for centralized oversight is now.

“There should be some federal commission, or entity, that has some role in organizing a large picture regulation of A.I. and autonomous technology,” Weaver tells Inverse.

“The idea that there’s one body where Congress and the executive branch are able to pool their resources and come up with a coherent federal policy for the country, both in terms of domestic policy and how we approach international treaties, I think is important, because of the potential dangers in a lot of areas.”

Those areas could be privacy implications from consumer drones, or safety on interstate highways filled with driverless cars. “There’s also what I call the public policy of ennui,” says Weaver. “I think the potential for great economic disruption because of this technology is there.” His go-to example is autonomous cars. The promise of self-driving vehicles goes beyond mere convenience for commuters who want to read a book on their way to work. One of the most common jobs for men without college degrees is long-haul trucking. A much-ballyhooed start-up called Otto, recently bought by Uber, wants to put self-driving tractor-trailers on the highways soon. Uber started testing autonomous cars in Pittsburgh in September and has begun mapping the streets of San Francisco.

A Ford used by Uber to test its autonomous technology.

Uber

“That could have serious economic and social ramifications,” Weaver says. “Are there public policy decisions we can make that will help those people and mitigate the negative effects that technology could have on their quality of life? A central body considering those questions hopefully would be well positioned to come up with some potential answers.”

Still, working groups within existing agencies don’t address the larger concern experts have. Ryan Calo, a professor at the University of Washington who advised the Obama administration on robotics oversight, has called for a federal robotics commission for years. Calo says that within the U.S. government there’s an increasing recognition that Congress and the executive branch lack the “embedded expertise” needed to address these emerging challenges. Quick fixes, like appointing a robotics expert for each agency or relying on a network of outside advisors, are inefficient, he argues.

Even within a White House initiative to provide guidance on A.I., there was confusion initially. “One of the things that was happening before the Office of Science and Technology Policy intervened, at the federal level, was that our discussion about robotics was completely siloed,” he says. “So the Federal Aviation Administration would think about certain things having to do with drones, and the Securities and Exchange Commission would think about things having to do with high-speed trading algorithms, and the National Highway Traffic Safety Administration would think about driverless cars, and so on and so on and so on, and never the twain shall meet.”

In Calo’s estimation, the robotics commission wouldn’t be a regulatory body per se, but would instead serve as a way to attract top talent to the government and provide a clearinghouse – an internal think tank of sorts – that officials could turn to for advice, not only about state-of-the-art technology but about what its impact on society could be. For Calo, there are key questions that need to be addressed at the federal level and that have so far gone unanswered.

“How safe do driverless cars need to be before they get to be on the road? And when a company certifies that a car is that safe, what techniques does the government use to validate that certification?” Calo asks. “For a commercial plane, the FAA says that in order for you to have a commercial plane flying over the nation’s skies, you need to certify that each of your component parts will only fail in ten-to-the-minus-nine times.”

“There’s nothing like that for how well driverless cars need to perform before they’re on the nation’s highways,” he says.

President Donald Trump greets Wendell Weeks (R) of Corning, Elon Musk of SpaceX (L), and other business leaders as he arrives for a meeting in the Roosevelt Room at the White House on January 23.

Getty Images / Chip Somodevilla

Calo is skeptical that the Trump administration will take his advice, and it’s not hard to see why by glancing over his incoming team. Trump has nominated multiple cabinet members who seem more interested in dismantling the agencies they oversee than actually running them: see Ben Carson at Housing and Urban Development and Betsy DeVos at the Department of Education for two notable examples. It’s incredibly unlikely, then, that Trump will create any new federal oversight bodies no matter how necessary they might be.

One exception might be Tesla and SpaceX CEO Elon Musk, who has met with Trump several times before and after his inauguration. Musk is on Trump’s strategic policy forum along with Uber CEO Travis Kalanick.

Both Japan and the European Union recently created versions of the kind of robotics commission Calo has advocated for in the United States. “These are major economic powerhouses,” he says.

One additional emerging concern that gets less attention than drones and driverless cars is copyright law and the protection of intellectual property. Weaver says that A.I. has upended “a lot of the assumptions we have that are so basic we don’t even address them, they’re just baked in. The assumption is that there are human beings making decisions.” But with A.I., that’s no longer the case. Whether you’re talking about Twitter bots that create satire or news wire services that turn data into natural-language stories, there’s no provision in federal law that governs those situations. “I think it’s highly questionable whether there’s any intellectual property associated with that,” says Weaver.

There’s a parallel concern as well, that gets to criminal liability if autonomous cars cause an accident. “So far, Google and the auto manufacturers that are using this technology have universally said, more or less, that if there is any liability created by these cars – because it’s our technology and there’s no driver – we’ll assume the liability,” says Weaver. “I don’t think that’s sustainable.”

Conservatives often balk at the idea of either increasing federal agencies or creating new regulations, but recent history shows the danger of markets and industries operating without any oversight.

Author Anat Admati has been campaigning since the 2008 financial crash, warning about the continuing lack of adequate banking regulation, a topic she explores in her book The Bankers’ New Clothes.

Admati explains that when the issues are technical, industry experts are often ahead of the regulators and politicians who need to put in place and implement rules on behalf of the public. “People in the private sector naturally have their own private perspective, which is not always consistent with society’s best interests,” she tells Inverse. “It is important that policymakers rely on sufficient, un-conflicted expertise and make sure to set rules in a timely manner.”

“Otherwise, we may discover that risks have been ignored when it is too late and harm has occurred.”

When it comes to the economy, finding out something too late can be catastrophic. When it comes to technology like robotics and artificial intelligence, human oversight in the form of a central government agency might just prevent disaster.
