Why Google DeepMind Just One-Upped Human Memory

Over time, the more important memories are cemented while the less important bits of information get overwritten.

DeepMind’s latest research paper shows just how literal the association between brains and neural networks is becoming. Inspired by research in neuroscience, Google’s engineers have created an A.I. that can retain its knowledge between tasks, turning raw memory into long-term experience that stays with the program even as it moves on to other things.

This tackles one big problem that human memory also faces: in the moment, it doesn’t really know what’s useful, or even important. If you build a house, a year later you’re far more likely to remember what song was playing during the construction of a wall than something useful, like the spacing you chose for that wall’s studs. But over time, having built many houses, the stereo playlist will start to fade, while the important stuff will remain.

Neuroscience has been slowly revealing the mechanism behind this sort of long-term pruning of inconsequential information, finding that synaptic pathways more critical to a task receive a sort of protection from being overwritten in the future. Over time, the more important, more often needed memories are cemented in the connections of the brain, while the less important bits of information get overwritten by new experiences, which are then, themselves, protected to one extent or another.

Without an analogous system in place, even the smartest deep learning neural networks have trouble holding on to prior learning. Computer scientists call this failure catastrophic forgetting, and to illustrate the problem, DeepMind set its A.I.s to play a series of old Atari games and maximize their game scores. The study’s control was an A.I. with no special memory augmentation, and it performed as we would expect of an amnesiac: slow, low-level skill increases, gained and lost over and over again. As the A.I. learns each new game, it essentially overwrites its old skills with new ones.
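To make the failure concrete, here is a minimal sketch of naive sequential training. It assumes PyTorch (the article doesn’t say what DeepMind’s code uses), and the two toy regression tasks are invented for illustration, not taken from the Atari experiments. With nothing protecting the old weights, learning task B freely erases whatever configuration served task A.

```python
import torch
import torch.nn as nn

# Two toy "tasks": the same network must fit different targets.
torch.manual_seed(0)
x = torch.randn(256, 8)
task_a_y = x.sum(dim=1, keepdim=True)         # task A: sum of inputs
task_b_y = (x ** 2).sum(dim=1, keepdim=True)  # task B: sum of squares

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

def train(target, steps=500):
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(net(x), target).backward()
        opt.step()

train(task_a_y)                                  # learn task A
loss_a_before = loss_fn(net(x), task_a_y).item()
train(task_b_y)                                  # then task B, unprotected
loss_a_after = loss_fn(net(x), task_a_y).item()

# Task A's loss climbs back up: the new task overwrote the old skill.
print(f"task A loss before B: {loss_a_before:.3f}, after B: {loss_a_after:.3f}")
```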

It’s possible, of course, to simply build an enormous neural network that can always assign new experiences to fresh, previously unused synapses, but that quickly creates a bloated, unwieldy system. Much better is the DeepMind approach: elastic weight consolidation (EWC).

This oddball term refers to the team’s newly developed ability to assign weighted protections to synapses, making them more or less likely to be overwritten in the future. By assigning each synapse its own level of protection, the network can learn new skills alongside previous ones while only moderately changing the preexisting structure. This lets neural networks retain their pruned-down old skills, their version of hardened experience, without ballooning the size of the network to the point of uselessness.
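Concretely, the paper implements this protection as a quadratic penalty added to the new task’s loss: each weight is pulled back toward its value after the old task, scaled by an estimate of how important it was there (the paper uses the diagonal of the Fisher information). Below is a minimal sketch of that penalty, again assuming PyTorch; the function name, the dictionaries, and the λ value are illustrative choices, not DeepMind’s actual code.

```python
import torch

def ewc_penalty(model, fisher, old_params, lam=0.4):
    """EWC-style penalty: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2.

    `fisher` and `old_params` map parameter names to tensors saved after
    training on the previous task. Weights with high Fisher values
    (important to the old task) are strongly anchored; unimportant
    weights remain free to adapt to the new task.
    """
    penalty = torch.zeros(())
    for name, param in model.named_parameters():
        penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return (lam / 2.0) * penalty
```

During training on the new task, the total loss would then be `new_task_loss + ewc_penalty(model, fisher, old_params)`, so gradient updates on protected weights are resisted in proportion to their importance, which is exactly the “selectively slowing down learning on the weights important for those tasks” described in the study abstract below.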

The red and brown lines show two A.I.s using DeepMind’s approach; blue is an A.I. without it. The two strategies diverge when old games are replayed, and the unassisted A.I. has to start again from scratch.

In their study, deep learning neural networks using EWC learned a series of Atari games without losing the entirety of their skill at the old ones. They could, in essence, learn several games over the same span of time. Each time a network came back around to its old standbys, it was a bit rustier than when it had last stopped, but then, you would be too.

That’s the thing about neural networks: being inspired by brains, they can benefit from the same organizational strategies that have proved successful in evolution. It means the field of machine learning is starting to engage in some real neurological plagiarism, interpreting cellular processes in neurons as pure functional achievements and recreating those achievements, as accurately as possible, in code.

Yes, deep learning neural networks are starting not just to perform tasks for us, but to shine a light on the nature of their own creators. Researchers are even beginning to turn that relationship around, using the behavior of neural network simulations of specific brain regions to learn about those regions in real brains.

The truly exciting possibilities begin there, when scientists can design out-there experiments to perform on human brains, because, for the most part, a real human brain won’t be necessary at all.

Study Abstract

“The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially.”
