
Waymo's Huge Self-Driving Car Dataset Could Help Researchers Toward Level 5

The former Google self-driving car project is releasing one of the biggest autonomous driving datasets ever.

Waymo just gave the autonomous driving field a big boost in its push toward fully hands-free driving, releasing on Wednesday a treasure trove of data covering several hours of sensor readings from a variety of driving scenarios.

“We are happy to announce and open up our dataset, which is one of the largest and most diverse autonomous driving datasets that was ever released,” Drago Anguelov, principal scientist and head of Waymo Research, said during a press conference call to detail the launch.

The wealth of data is aimed at helping researchers in non-commercial applications, enabling them to tackle some of the big questions around how to make computers take over from humans and drive cars in any situation. This holy grail of autonomous driving is known as level five, the top of a six-point scale that runs from full human control (level zero) through human-monitored limited assistance like Tesla Autopilot (level two) up to full autonomy. It’s a tall order — Waymo CEO John Krafcik said in November 2018 that autonomous cars “will always have some constraints” — but the industry and researchers are working to reduce the edge cases and solve more of the problems human drivers manage every day.

Waymo, which sits alongside Google under the Alphabet umbrella company, is one of the most high-profile firms working to get closer to level five. The firm has covered over 10 million miles in test vehicles and even rolled out a limited taxi service in Arizona with a safety engineer ready to take over. Data is king in autonomous driving, as it can help teach an artificial intelligence how to react in new situations based on past experience. Although Waymo’s new dataset covers a thin sliver of these trips, it’s much bigger than a number of previously available open datasets.

“The dataset that we released was based on feedback from contacts we have in academia and inside Alphabet, and this is the data that they found exciting to work on,” Anguelov said in response to a question from Inverse about the response from researchers.

The camera footage in action.

Waymo

Waymo Open Dataset: What It Contains

The dataset, available for free on the firm’s website, contains 1,000 driving segments, each capturing 20 seconds of driving. The data comes from five depth-sensing lidar sensors and five cameras covering the front and sides of the vehicle, providing a complete, 360-degree view of what the car saw during the drive and how it interpreted that view. With each sensor sampled at 10 Hz, that works out to 200,000 frames (1,000 segments × 20 seconds × 10 frames per second).
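For researchers curious how that data is packaged in practice, Waymo distributes each segment as a TFRecord file of serialized Frame protocol buffers, readable with its open-source waymo_open_dataset Python package. The snippet below is a minimal sketch of loading one segment and counting its frames; the filename is a placeholder, and the usage reflects the dataset’s published tooling rather than anything detailed in the announcement itself.

```python
# A minimal sketch of reading one Waymo Open Dataset segment, assuming the
# open-source waymo_open_dataset package and TensorFlow are installed.
# The filename below is a placeholder for a downloaded segment file.
import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset

FILENAME = 'segment-XXXX_with_camera_labels.tfrecord'  # placeholder path

# Each segment is a TFRecord file; every record is one serialized Frame,
# bundling the readings from all five lidars and five cameras at a single
# 10 Hz time step.
segment = tf.data.TFRecordDataset(FILENAME, compression_type='')

num_frames = 0
for record in segment:
    frame = open_dataset.Frame()
    frame.ParseFromString(bytearray(record.numpy()))
    num_frames += 1

# A 20-second segment sampled at 10 Hz should yield roughly 200 frames.
print(num_frames)
```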

This fusion of sensors is key to Waymo’s success in the space. Chief technology officer Dmitri Dolgov explained to Inverse in June that the combination of cameras and lidars helps Waymo tackle dust storms in Phoenix during its test runs, ensuring it doesn’t depend on any one sensor.

It’s a broad spectrum of data, covering day and night, urban and suburban streets, and sun and rain. The footage comes from four locations: San Francisco and Mountain View in California, Kirkland in Washington, and Phoenix in Arizona. Around 50 percent of the data is from San Francisco, where the sensor-adorned vehicles have become a common sight.

Pedestrians labeled by the system.

Waymo

Beyond raw camera and lidar footage, the set also includes labels describing what the car saw and how it interpreted the data. Vehicles, signs, pedestrians and cyclists are all labeled, for a total of 12 million 3D labels and 1.2 million 2D labels.
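To give a rough sense of how those labels appear to a researcher, the hypothetical sketch below pulls the first frame of a segment and tallies its 3D (lidar) and 2D (camera) bounding boxes by object type. The laser_labels, camera_labels and Label type names reflect the structure of the dataset’s published protocol buffers, not details confirmed in Waymo’s announcement, and the filename is again a placeholder.

```python
# A rough sketch of tallying labels in a single frame, assuming the Frame
# proto's laser_labels (3D boxes) and camera_labels (2D boxes, grouped per
# camera) fields from the waymo_open_dataset package.
from collections import Counter

import tensorflow as tf
from waymo_open_dataset import dataset_pb2 as open_dataset
from waymo_open_dataset import label_pb2

FILENAME = 'segment-XXXX_with_camera_labels.tfrecord'  # placeholder path

# Grab the first frame of the segment (see the loading sketch above).
record = next(iter(tf.data.TFRecordDataset(FILENAME, compression_type='')))
frame = open_dataset.Frame()
frame.ParseFromString(bytearray(record.numpy()))

# 3D labels: one box per object the lidars saw in this frame.
counts_3d = Counter(label.type for label in frame.laser_labels)

# 2D labels: boxes drawn on each camera image, grouped by camera.
counts_2d = Counter(
    label.type
    for camera in frame.camera_labels
    for label in camera.labels
)

# TYPE_VEHICLE, TYPE_PEDESTRIAN, TYPE_SIGN and TYPE_CYCLIST correspond to
# the four object categories Waymo says it annotates.
print('pedestrians (3D):', counts_3d[label_pb2.Label.TYPE_PEDESTRIAN])
print('pedestrians (2D):', counts_2d[label_pb2.Label.TYPE_PEDESTRIAN])
```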

“We intend to make even more data available,” Anguelov explained. Future plans involve publishing benchmarks on key problems in the space, organizing competitions, and adding more data based on community feedback. “This is just the first cut.”

Prior to this release, one of the most popular sets to work with was the KITTI Vision Benchmark Suite, produced by the Karlsruhe Institute of Technology and the Toyota Technological Institute at Chicago. That set launched back in 2012, though, and Waymo claims its data doesn’t measure up.

KITTI covers just one lidar and four cameras, two of which are greyscale. It offers 15,000 labeled frames with a 90-degree field of view, along with 80,000 labeled 3D boxes and 80,000 labeled 2D boxes. Waymo also argues that KITTI’s lidar-to-camera synchronization is not as tight as in its own dataset.

Waymo Open Dataset: The Race to Build Level 5

So why now? With Tesla racing to build its own computer vision system, and even the likes of Apple exploring the space, why is Waymo opening up its data?

It’s important to note that the data is only intended for non-commercial purposes, and the license agreement forbids an automaker from using it in its commercial vehicles.

“We felt…not just us, I think several different companies felt at roughly the same time that the field was currently hampered by the lack of suitable datasets,” Anguelov said. “We decided to contribute our part to ultimately make researchers in academia ask the right questions. And for that, they need the right data.”

A Waymo van.

Waymo

However, Anguelov was keen to dispel the notion that Waymo is giving up on its in-house autonomous capabilities. It’s a difficult problem to solve: British chip designer Arm previously told Inverse it expects level five to reach consumers around 2027. One Toyota official claimed in 2017 that geographically restricted level four could launch by 2020, but full-blown level five would be much further away. Waymo CEO John Krafcik said in November 2018 that it’ll be decades before self-driving cars are ubiquitous.

“It is not an admission in any way that we have problems solving these issues,” Anguelov said. “But there’s always room for improvement in terms of efficiency, scalability, amount of labels you need.”

As Waymo looks to expand its tests and trial runs, its dataset launch could help tackle some of the big questions around how to take the human out of the driver’s seat.
