The future of A.I. is human-like tech and tech-like humans

Neuromorphic computing and Jedi mind tricks could create more computer-human symbiosis.

These days it seems that nearly every product and startup boasts some kind of A.I. capability, but when it comes to advancing the field beyond simplistic machine learning, technologists at MIT Technology Review’s Future Compute conference say these systems will need to become more human than not.

When discussing A.I. during the conference’s first day on December 2, speakers focused on two distinct paths for this technology: more human-like A.I.s as well as more computer-like humans. This dual approach was presented as a potential future for human-machine symbiosis.

But what exactly does all that mean, and is it even a good thing?

Catherine Schuman, a research scientist at Oak Ridge National Laboratory, began the conversation by presenting her work on neuromorphic computing. Schuman said that this approach, whose name literally means brain-shaped computing, differs from a more traditional neural hardware approach in several important ways.

“Neural network hardware systems, like the Google [Tensor Processing Unit], are systems that accelerate traditional neural network computation… [like] deep learning,” said Schuman on stage during her talk. “And they’re well suited for today’s artificial intelligence, [but] neuromorphic computing is a little bit different. Neuromorphic computing systems take more inspiration from how the brain works… they implement a spiking recurrent neural network model… [and] we can actually do neuroscience-style simulations on neuromorphic systems.”

It is this more natively intelligent approach, Schuman said, that she believes will push the industry forward from today’s A.I. into the future.

Neuromorphic systems are still very far from being commercialized, but Schuman said that they’ll borrow heavily from architecture already found in human brains, such as massively parallel neurons and synapses as well as co-located processing and memory units.

While neural hardware as we know it today is programmed and trained for specific tasks, a neuromorphic computer would, in theory, learn much more like a real human. That could make this kind of computing both faster and more energy-efficient.
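To make the distinction concrete, here is a minimal, illustrative sketch of a leaky integrate-and-fire neuron, the kind of simplified unit that spiking-network simulations commonly use. The parameter values and the lif_step helper below are hypothetical, not drawn from Schuman’s work:

    # Minimal leaky integrate-and-fire (LIF) neuron: the membrane voltage
    # leaks toward rest, integrates incoming current, and emits a discrete
    # spike when it crosses a threshold -- unlike the continuous activations
    # of traditional deep-learning units. All parameter values are made up.

    def lif_step(v, input_current, leak=0.9, threshold=1.0, v_reset=0.0):
        """Advance the neuron one time step; return (new_voltage, spiked)."""
        v = leak * v + input_current  # decay toward rest, then integrate input
        if v >= threshold:            # fire a spike and reset the membrane
            return v_reset, True
        return v, False

    # Drive the neuron with a short input train and record when it fires.
    v, spike_times = 0.0, []
    inputs = [0.3, 0.4, 0.5, 0.0, 0.6, 0.7, 0.2, 0.0]
    for t, current in enumerate(inputs):
        v, spiked = lif_step(v, current)
        if spiked:
            spike_times.append(t)

    print(spike_times)  # -> [2, 5]: the only moments this neuron "speaks"

Because information moves between such units as sparse, discrete events rather than dense matrix multiplications, hardware built around this model can, in principle, do no work at all when nothing is spiking, which is one root of the efficiency claims.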

Not to be outdone by our computer counterparts, humans are increasing their technical abilities too, as Thomas Reardon, CEO Emeritus of CTRL-Labs, demonstrated later on the same day of the conference.

CTRL-Labs, which was recently sold to Facebook Reality Labs, focuses on exploring how humans can get better at manipulating our machines, rather than having our machines manipulate us.

“There’s no interaction you have with a machine today that doesn’t involve you moving,” said Reardon on stage. “That includes speech, which is just moving your mouth in a sophisticated manner. So, what we asked ourselves is ‘How do we escape this world where we’re constantly trying to make our devices more capable and instead start to become more capable ourselves?’”

As Reardon demonstrated on stage, this primarily takes the form of wearables that access and decode a user’s electrical signals rather than their muscle movements. In other words, the device separates neural signals (like wanting to swipe your finger) from physical actions (actually swiping your finger), which lets the tech interpret that signal in a more technologically native way, such as having a screen be swiped without you physically touching it.
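CTRL-Labs has not published the internals of its decoder, but the basic idea can be sketched in a few lines: treat the electrical activity picked up at the wrist as a stream, and fire a UI event the moment the intent signature appears, whether or not the finger ever moves. Everything below, from the signal values to the threshold to the on_swipe handler, is a hypothetical illustration rather than the company’s actual pipeline:

    # Toy sketch of intent decoding from an EMG-like electrical signal:
    # watch a stream of activity levels and fire a UI event as soon as a
    # "swipe" signature appears, with no physical movement required.
    # The readings and the threshold are fabricated for illustration.

    SWIPE_THRESHOLD = 0.8  # hypothetical activation level that marks intent

    def on_swipe():
        """Placeholder UI handler; a real system would scroll the screen."""
        print("swipe event fired")

    def decode_stream(samples, threshold=SWIPE_THRESHOLD):
        """Emit one swipe event per burst that crosses the threshold."""
        armed = True
        for level in samples:
            if armed and level >= threshold:
                on_swipe()     # intent detected; no touch ever happened
                armed = False  # wait for the burst to end before re-arming
            elif level < threshold:
                armed = True

    # Fabricated sensor readings containing two bursts of neural activity.
    decode_stream([0.1, 0.2, 0.9, 0.95, 0.3, 0.1, 0.85, 0.2])  # fires twice

The detail worth noticing is that the physical gesture never enters the loop; only the upstream electrical signal does, which is what lets the screen respond even if the finger never actually moves.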

Reardon himself admitted that the technology is not dissimilar to how a Jedi might use the Force, but emphasized that the purpose of this approach was to create a deeper symbiotic human-computer relationship that would enable more natural and “joyful” use of machines. Not to mention far greater accessibility for users who are unable to fully utilize physical interfaces.

Now, whenever technology gets too close to our brains — whether it be to model them or to access their electrical signals — people begin to get spooked. And for good reason.

While the rise of intelligent machines very much depends on how we treat and program them, the potential exploitation of neural brain signals — especially in the hands of Facebook — appears to be a much more realistic threat. But Reardon assured the audience that the company’s model dealt only with output, not input, and that the risks as a result were much less dystopian than a critic might think.

Instead, Reardon said the technology would primarily be used to create “mixed reality” experiences but did not comment on exactly what these experiences might look like.
