
A.I. That Can See Around Corners is Just Around the Corner

MIT researchers have made "an AI for your blindspot."

(Photo: Flickr / Garrette)

The age-old horror movie trope of rounding the corner and running face-first into trouble might soon become antiquated once smartphones are able to see around corners. Researchers at the Massachusetts Institute of Technology have created a machine learning-enabled system that analyzes light reflections to virtually “see” around corners, a capability that could soon fit inside a phone.

Developers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) announced on Monday what they describe as “an AI for your blindspot.” Their work, the paper “Turning Corners into Cameras: Principles and Methods,” will be presented later this month at the International Conference on Computer Vision in Venice, Italy.

The imaging system analyzes light reflections in the space it is “shown” in order to detect whether there’s a person or object around the bend. In addition to identifying whether something is there, the system can estimate its speed and trajectory if it’s in motion.

These “corner cameras,” as the researchers nicknamed the setup, work by analyzing light reflections, specifically the “penumbra”: the fuzzy outer region of the shadow that results when light is reflected off an opaque object onto a flat surface (in this case, the ground).

What the penumbra is, in pictures. (MIT CSAIL)

“Even though those objects aren’t actually visible to the camera, we can look at how their movements affect the penumbra to determine where they are and where they’re going,” says Katherine Bouman, the paper’s lead author. “In this way, we show that walls and other obstructions with edges can be exploited as naturally-occurring ‘cameras’ that reveal the hidden scenes beyond them.”
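To make that idea concrete, here is a minimal, illustrative sketch in Python of how one might flag motion from penumbra changes. It is not CSAIL’s implementation; the video frames, ground-region mask, and detection threshold are hypothetical placeholders, and a real system would need careful calibration.

import numpy as np

def detect_hidden_motion(frames, ground_mask, threshold=2.0):
    """frames: (T, H, W) grayscale video; ground_mask: (H, W) boolean array
    selecting the patch of floor at the base of the corner."""
    # Collect the penumbra pixels into a (T, N) matrix of intensities over time.
    penumbra = np.stack([f[ground_mask] for f in frames]).astype(np.float64)

    # Subtract each pixel's temporal mean to remove the static background
    # (direct illumination, floor texture); what remains is the subtle
    # variation caused by things moving in the hidden scene.
    residual = penumbra - penumbra.mean(axis=0, keepdims=True)

    # Score each frame by how strongly its residual deviates from the overall
    # noise level; large scores suggest something is moving around the corner.
    noise = residual.std() + 1e-9
    scores = np.abs(residual).mean(axis=1) / noise
    return scores > threshold, scores

In practice, the hard part the researchers tackled is pulling that tiny residual signal out of real-world lighting noise, which is far messier than this toy example suggests.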

What makes this approach novel is that it doesn’t require special lasers, unlike other methods of seeing things outside a person’s line of sight.

This makes it both less expensive than those systems and more effective in situations where ambient light is present. In fact, CornerCameras can even work in the rain.

While this breakthrough is significant, there are still challenges to overcome before the system can run on a camera phone. Right now, video captured by a smartphone camera can be used for this kind of hidden-obstacle analysis, but the footage has to be processed on a laptop first. More fine-tuning is needed to improve the system’s performance in low light or when lighting conditions change unpredictably, like on a particularly cloudy day. Once that’s worked out, the CornerCameras system will have far more practical applications.

Next, the tech will be tested on wheelchairs to see how well the cameras work while in motion. Eventually, the researchers hope to outfit cars with the system to cut down on the all-too-common occurrence of blindspot accidents.

“If a little kid darts into the street, a driver might not be able to react in time,” Bouman said. “While we’re not there yet, a technology like this could one day be used to give drivers a few seconds of warning time and help in a lot of life-or-death situations.”

Abstract:

We show that walls, and other obstructions with edges, can be exploited as naturally-occurring “cameras” that reveal the hidden scenes beyond them. In particular, we demonstrate methods for using the subtle spatio-temporal radiance variations that arise on the ground at the base of a wall’s edge to construct a one-dimensional video of the hidden scene behind the wall. The resulting technique can be used for a variety of applications in diverse physical settings. From standard RGB video recordings, we use edge cameras to recover 1-D videos that reveal the number and trajectories of people moving in an occluded scene. We further show that adjacent wall edges, such as those that arise in the case of an open doorway, yield a stereo camera from which the 2-D location of hidden, moving objects can be recovered. We demonstrate our technique in a number of indoor and outdoor environments involving varied floor surfaces and illumination conditions.
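The “one-dimensional video” described above can be illustrated, under heavy simplification, by binning the floor pixels around the corner by their angle and averaging the background-subtracted intensity in each angular bin; moving people then appear as streaks in the resulting space-time image. The sketch below is not the paper’s method as implemented, and the corner location, ring radius, and bin count are made-up parameters.

import numpy as np

def one_d_video(frames, corner_xy, radius=80, n_bins=60):
    """frames: (T, H, W) grayscale video; corner_xy: (x, y) pixel location of
    the wall's edge at the floor. Returns a (T, n_bins) space-time image."""
    T, H, W = frames.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dx, dy = xs - corner_xy[0], ys - corner_xy[1]
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx)          # angle of each pixel around the corner

    # Use only floor pixels in a ring near the corner's base.
    ring = (r > 10) & (r < radius)
    bins = np.clip(((theta + np.pi) / (2 * np.pi) * n_bins).astype(int), 0, n_bins - 1)

    video = np.zeros((T, n_bins))
    mean_frame = frames.mean(axis=0)    # static background to subtract
    for t in range(T):
        diff = frames[t].astype(np.float64) - mean_frame
        for b in range(n_bins):
            sel = ring & (bins == b)
            if sel.any():
                video[t, b] = diff[sel].mean()   # average residual per angle
    return video   # each column tracks one angular slice of the hidden scene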

