Close your eyes. Are they closed? Good. Now, imagine your body as a cloud. Think of its shape, the space that it occupies, and how it fits in with the objects around you.
The ability to imagine your own body is sometimes described as a sixth sense. The skill, which we develop as babies, is critical for coordinating all of our movements. Body awareness allows you to chop vegetables and not your fingers, or to sit on the couch to watch Ms. Marvel instead of hitting it.
Humans make bodily awareness look effortless, but robots do not. (Who can forget the clumsy Boston Dynamics robots?) If we want flexible, creative, stable machines that don’t fall down all the time, we need a robot that knows itself.
That robot is here.
Like a one-year-old experimenting with its facial expressions in front of a mirror, this little guy randomly moves its joints in front of five cameras and observes its final shape and position. After enough time, it begins to understand the space its body occupies. We can even see what it thinks it looks like: a cloud-like, yet largely accurate, self-image.
This model then predicts the robot’s best plan of action to accomplish simple tasks. In this case, it had to contort itself to touch a small red ball floating nearby, a task it had never attempted before.
As it moves, the robot employs another facet of body awareness: if one of its joints doesn't respond the way it expects (because of a broken motor, for instance), it notices and adjusts its strategy.
The results were published in Science Robotics on Wednesday, July 13.
This ability to learn new tasks on its own and adapt to changing bodily needs sets the little arm apart from more traditional robots that are explicitly programmed for one purpose.
Why it matters — If humans become aware of their bodies through experimentation and observation, can we program robots to develop a similar self-consciousness? And if so, what could we do with that ability?
“It’s almost a taboo topic in robotics, to talk about consciousness,” says Hod Lipson, a mechanical engineer at Columbia University and one of the study’s authors. But these are the questions that most interest him.
This robotic arm isn’t self-aware in the human, or even animal, sense, Lipson says. But being able to learn on the go is wildly useful for the humans who might use these robots.
“If you want [robots] to ever learn anything new, you either have to have another human expert to program them, which is very, very expensive. Or you just have to let them learn,” says Boyuan Chen, one of the study’s authors. Chen’s research was conducted at Columbia University as part of his Ph.D. He is now an engineer at Duke University.
While this robot still needs a guiding human hand, its ability to learn and adapt on its own saves engineers time and effort. This could allow them to build more complex and versatile robots in the future, the study authors say.
In turn, being able to adjust its body to meet the needs of the task at hand adds another layer of utility, Chen says. It could be especially useful in industry settings, where snapped cables or malfunctioning motors are commonplace, he adds.
How they did it — After about a day of training, and tens of thousands of data points, the robotic arm had a working model of its physical form. Because it created this model using visual feedback from five camera angles, we can see how the robot envisions itself as a 3-D cloud of points in space.
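The babble-observe-predict loop can be sketched in miniature. The toy below is not the paper's deep-network self-model: it is a hypothetical two-joint planar arm whose "self-model" is just the poses it remembers from random exploration, and whose "cameras" are a function the robot cannot inspect directly. The target coordinates and sample counts are invented for illustration.

```python
import math
import random

def true_arm(theta1, theta2, len1=1.0, len2=1.0):
    # Ground truth the robot never sees as equations; it only observes
    # where its tip ends up, like the five cameras watching the real arm.
    return (len1 * math.cos(theta1) + len2 * math.cos(theta1 + theta2),
            len1 * math.sin(theta1) + len2 * math.sin(theta1 + theta2))

# 1. Motor babbling: move the joints randomly, record what is observed.
random.seed(0)
experience = []
for _ in range(10000):
    pose = (random.uniform(-math.pi, math.pi),
            random.uniform(-math.pi, math.pi))
    experience.append((pose, true_arm(*pose)))

# 2. The learned self-model: predict the tip position of an untried pose
#    from the closest pose seen during babbling (a crude stand-in for the
#    paper's neural network).
def self_model(pose):
    nearest = min(experience,
                  key=lambda e: (e[0][0] - pose[0]) ** 2
                              + (e[0][1] - pose[1]) ** 2)
    return nearest[1]

# 3. Planning: propose fresh poses the robot has never tried, use the
#    self-model to predict each tip position, and pick the best one.
target = (0.5, 1.2)  # the "red ball"
candidates = [(random.uniform(-math.pi, math.pi),
               random.uniform(-math.pi, math.pi)) for _ in range(300)]
best_pose = min(candidates, key=lambda p: math.dist(self_model(p), target))

# Execute the chosen pose on the real arm and measure how close it got.
error = math.dist(true_arm(*best_pose), target)
print(round(error, 3))
```

As in the article, the model is "not perfectly accurate… but good enough for the robot to move": the residual error shrinks as babbling experience accumulates.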
“It’s how the robot sees itself. And to me this is a big moment to see that,” says Lipson. “It's not perfectly accurate… but it’s good enough for the robot to move.”
Lipson’s team has been working toward this goal for years. They had previously trained the robotic arm to develop a model of itself, but only with prior knowledge programmed in. They still had to tell the robot beforehand what parts of its body to pay attention to.
Here, the robot developed its self-model from scratch before it was put to the test. The robot was instructed to touch the end of its arm to a red ball floating nearby while avoiding other obstacles. The ability to dodge obstacles was particularly remarkable and was only possible with this new method, Chen says.
Then the researchers tested the robot’s adaptability by cutting the cable to one of its motors or attaching another piece to the end of its arm. The robot would quickly notice that its body wasn’t in the shape it had predicted and compensate for the error with its working motors.
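A toy version of that detect-and-compensate loop, on the same kind of hypothetical two-joint arm: the robot compares where its self-model says the tip should be with where it is observed to be, flags the mismatch, and replans under the constraint of the stuck joint. The cut cable, threshold, and target are all invented for illustration.

```python
import math

LINK1, LINK2 = 1.0, 1.0  # hypothetical two-joint planar arm

def self_model(theta1, theta2):
    # The robot's internal model: where it *expects* its tip to end up.
    return (LINK1 * math.cos(theta1) + LINK2 * math.cos(theta1 + theta2),
            LINK1 * math.sin(theta1) + LINK2 * math.sin(theta1 + theta2))

def damaged_arm(theta1, theta2):
    # The real arm after the cable is cut: joint 2 no longer moves.
    return self_model(theta1, 0.0)

command = (0.8, 1.2)
expected = self_model(*command)
observed = damaged_arm(*command)   # what the cameras actually report

# 1. Detect: the body is not in the shape the model predicted.
mismatch = math.dist(expected, observed)
damaged = mismatch > 0.05          # threshold invented for illustration

# 2. Compensate: replan the reach using only the working joint,
#    searching joint-1 angles under the stuck-joint constraint.
target = (1.4, 1.4)
if damaged:
    best_t1 = min((i * 0.001 for i in range(-3141, 3142)),
                  key=lambda t1: math.dist(damaged_arm(t1, 0.0), target))
    residual = math.dist(damaged_arm(best_t1, 0.0), target)
    print("damage detected; residual error:", round(residual, 3))
```

The key point mirrors the experiment: nothing tells the robot a motor is broken. It infers the damage purely from the gap between prediction and observation, then makes the best of the joints that still work.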
What’s next — The next step will be to use this technique on a more complex robot, says Lipson. We’re still a long way from creating robots that come close to the awareness of intelligent life, he warns.
The same principles used in this study can be applied to robots with more joints and limbs, perhaps ones that can move around a room. It will take a lot of work to get there, but these researchers’ techniques could make it possible to program very complex robots in an efficient and adaptable way.
This approach also takes up less digital memory. Think of it like the difference between memorizing a multiplication table and learning how to do the math yourself. The robot doesn’t need to store every possible solution to the problems it will encounter, because once it learns how its body works, it can solve problems as they come.
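The multiplication-table analogy can be put in rough numbers. All figures below are invented for illustration, not taken from the paper:

```python
# Memorizing every answer: one stored tip position per possible pose.
joints = 6
settings_per_joint = 100                       # discretize each joint's range
table_entries = settings_per_joint ** joints   # poses grow exponentially
print(table_entries)                           # a trillion entries to memorize

# Learning how the body works: a fixed set of parameters instead of a table
# (a couple of numbers per joint here; network weights in practice).
model_parameters = 2 * joints
print(model_parameters)
```

The lookup table explodes exponentially with every joint added, while the learned model stays a fixed size, which is why this approach scales toward the more complex robots the researchers have in mind.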
The researchers also hope to eventually teach a robot to understand the form and function of other robots and work together as a team — though that reality may still be very far off.
“It’s like day one in a long journey,” Lipson says.