
MIT Says A.I. Not Smart Enough Yet at Cybersecurity, Still Needs Humans

Turns out A.I. won't take all of our jobs. Yet.


The Massachusetts Institute of Technology has good news if you’re in the security industry: A.I. robots might not take your job.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) released a paper last week describing an enhanced cybersecurity system called “A.I.²” It’s an “analyst-in-the-loop system” that uses A.I. to comb through massive amounts of data and human analysts to provide feedback on what it finds.

A.I.² combines the two types of cybersecurity systems currently in use: analyst-driven (humans identifying and responding to attacks) and unsupervised machine-learning-driven (A.I. using patterns to predict and detect attacks). Both systems have their downsides. Humans tend to miss a lot of cyber attacks because the volume of data is overwhelming, and A.I. tends to put out a lot of false alarms because patterns aren’t always predictive. (If real attacks are rare, say one in a thousand events, even a detector that is right 99 percent of the time will flag mostly benign traffic.)

Combining a human’s strength in identifying true threats with A.I.’s strength in processing massive amounts of data results in a stronger security system. Also, humans get to keep their cybersecurity jobs.

Reverting to human work might sound like something researchers at an artificial intelligence lab would be trying to prevent. But MIT’s researchers claim that using people and A.I. together leads to a detection rate of 86.8 percent, roughly 10 times better than the solo-A.I. rate of 7.9 percent, and does it cheaper to boot.

A.I.² is made up of four components. First, a computer gathers big data. The data are processed, and outliers are pulled out using existing A.I. technology. Then the A.I. flags anything that might be malicious and sends it to a human analyst. Finally, the analyst sends feedback to the A.I., which learns from that information and gets better at telling real attacks from normal activity.
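To make that loop concrete, here is a minimal sketch of the four-step cycle in Python. This is not the CSAIL implementation: scikit-learn’s IsolationForest standing in for the outlier detector, a RandomForestClassifier standing in for the feedback-trained model, and the simulated data and analyst labels are all illustrative assumptions.

    # A minimal sketch of the analyst-in-the-loop cycle described above.
    # NOT the CSAIL code: the detector, the learner, and the simulated
    # data/labels are stand-in assumptions for illustration only.
    import numpy as np
    from sklearn.ensemble import IsolationForest, RandomForestClassifier

    rng = np.random.default_rng(0)

    # 1. Gather big data: rows of per-event activity features (hypothetical).
    events = rng.normal(size=(10_000, 5))

    # 2. Unsupervised A.I. processes the data and scores outliers.
    detector = IsolationForest(random_state=0).fit(events)
    scores = -detector.score_samples(events)  # higher = more anomalous

    # 3. Only the most suspicious candidates go to the human analyst.
    top_k = np.argsort(scores)[-200:]

    # 4. The analyst labels them (simulated here), and a supervised model
    #    learns from that feedback, sharpening the next batch's rankings.
    analyst_labels = rng.integers(0, 2, size=top_k.size)  # 1 = malicious
    model = RandomForestClassifier(random_state=0)
    model.fit(events[top_k], analyst_labels)

    # The next batch is ranked by the feedback-trained model instead of
    # (or blended with) the raw outlier scores.
    next_batch = rng.normal(size=(1_000, 5))
    risk = model.predict_proba(next_batch)[:, 1]

The key design idea the sketch illustrates is that the analyst only ever sees the top-ranked handful of events, while every label they provide feeds back into the model, so the system needs less human attention with each pass.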

A real-world data set of 3.6 billion log lines verified that A.I. and humans perform better as a team than as separate entities.

Overall, A.I.² sounds more like a middle step between developing technology and complete autonomy. The A.I. will eventually learn enough from its human co-workers that the student becomes the master. But until deep learning makes solo A.I. overwhelmingly more effective than human-assisted A.I., the research paper suggests that cybersecurity analysts can pencil in another couple of years of job security.
