
Does this Joe Rogan deepfake mean video evidence is officially dead?

Experts warn deepfakes could be used to compromise the 2020 election.


A deepfake video featuring an actor made to look and sound like the comedian Joe Rogan surfaced last week, and it’s stunning for its audio and visual accuracy. Rogan was a great choice of subject: He’s put out hundreds of hours-long video podcasts, so there’s a lot of material to work with.

If you’re not yet familiar with deepfakes: a deepfake uses artificial intelligence to create fraudulent audio or video that sounds and/or looks like a particular person. In this case, it’s a fake video of Rogan saying things he never said.

Rogan’s life in the public eye makes it easy to train an A.I. system to pick up on the subtleties of how he speaks. It’d be significantly harder to do something like this with the average person, but the technology is advancing quickly.

“The replica of Rogan’s voice … was produced using a text-to-speech deep learning system they developed called RealTalk, which generates life-like speech using only text inputs,” explains A.I. startup Dessa, which created the video.
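Dessa hasn’t published RealTalk’s internals, but text-to-speech systems like the one it describes typically work in two stages: an acoustic model predicts a spectrogram from the input text, and a neural vocoder converts that spectrogram into audio. Here’s a minimal Python sketch of that pipeline; the function names and random stand-in models are illustrative assumptions, not Dessa’s actual code:

```python
import numpy as np

def text_to_phonemes(text):
    # Real systems map text to phoneme sequences; characters stand in here.
    return list(text.lower())

def phonemes_to_mel(phonemes, mel_bins=80, frames_per_symbol=10):
    # Stand-in for the acoustic model (e.g., a Tacotron-style network)
    # that predicts a mel spectrogram from the symbol sequence.
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(phonemes) * frames_per_symbol, mel_bins))

def mel_to_waveform(mel, hop_length=256):
    # Stand-in for a neural vocoder (e.g., a WaveNet-style network) that
    # converts the spectrogram into raw audio samples.
    rng = np.random.default_rng(1)
    return rng.standard_normal(mel.shape[0] * hop_length)

text = "Friends, it's Joe."
audio = mel_to_waveform(phonemes_to_mel(text_to_phonemes(text)))
print(audio.shape)  # raw samples you could write out as a .wav file
```

In a real system, both stand-ins would be neural networks trained on hours of the target speaker’s audio, which is exactly why Rogan’s enormous podcast archive makes him such an easy target.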

The company had previously created an A.I. system that could mimic Rogan’s voice; after releasing that audio deepfake in May, the team started working on a realistic video to go alongside it, and the result is relatively convincing.

It’s still pretty obvious the video is fake, but you can see how we’re getting closer to a point where it will be impossible to spot a deepfake. The video was created using an actor who looks somewhat like Rogan; he even shaved his head for the part. Watch it below:

Ragavan Thurairatnam, co-founder and chief of machine learning at Dessa, tells Inverse the company knows how it could make a Joe Rogan deepfake that’s indistinguishable from a real video of the comedian, but it’s now shifting its focus back to deepfake detection.

In a new blog post, Dessa explains how it’s training A.I. to detect whether a video is a deepfake. You might say Dessa is using its technology for good rather than showing off its potential for misuse, i.e., fooling people.

Dessa has been training its A.I. on randomly selected deepfakes from YouTube, the Deepfake Impressionist, and its own videos. The company discovered that existing deepfake detectors weren’t up to the challenge of spotting today’s deepfakes and wanted to develop a superior one.
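Dessa hasn’t detailed its detector’s architecture, but at its core, a system like this is typically a binary classifier trained on video frames labeled real or fake. The sketch below shows what a training step might look like in PyTorch; the backbone, hyperparameters, and toy data are assumptions for illustration, not Dessa’s actual setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary real-vs-fake frame classifier. The ResNet backbone and learning
# rate are illustrative choices, not Dessa's published configuration.
model = models.resnet18(num_classes=1)  # one logit: score that a frame is fake
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(frames, labels):
    # frames: (batch, 3, H, W) video frames; labels: 1.0 = fake, 0.0 = real
    optimizer.zero_grad()
    logits = model(frames).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch just to show the shapes involved.
frames = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,)).float()
print(train_step(frames, labels))
```

The catch with any classifier like this is that it only learns the artifacts present in its training data, which is one reason new generation techniques keep slipping past older detectors.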

“The deepfake detection field is far from being solved.”

“The quest to detect them reliably tends to be unending,” Thurairatnam tells Inverse. As the blog post puts it, “This is because of the ‘cat and mouse’ like nature of this problem, in which finding ways of identifying deepfakes ironically tends to provide those developing models used to generate them with techniques to make them more advanced.”

“A big problem with deepfakes is they constantly change,” he says. “It’s something we need to be ever-vigilant about. It’s never going to be like we solved it once and for all. It’s going to be a constant battle.”

Vincent Wong, co-founder of Dessa, tells Inverse he envisions a “main repository” where everyone could contribute deepfakes they’ve made to help improve detection technology. But again, that could just hasten the cat-and-mouse cycle.

As a company, Dessa appears to recognize the many harmful effects deepfakes can have. This technology is not yet at the point where the average person can easily create convincing deepfakes, but we’re going to get there in the not-too-distant future.

“There’s a very high likelihood that deepfake technology — video or voice — will be used as we get closer to the U.S. election to actually compromise the election,” Thurairatnam says.

See also: A.I. created the madness of deep fakes, but who can save us from it?

It’s easy to imagine how a deepfake could be used to smear a celebrity, a politician or, eventually, you. Whether it’s a fake sex tape of a private citizen or a fabricated clip of the president announcing a nuclear strike, we desperately need a way to quickly verify whether a video is real. That said, detection alone may be a losing battle in the long term.

“Eventually, what will happen is the deepfakes will become so realistic that you won’t be able to stop them with detection,” Thurairatnam says. “We’ll need some other means to be able to prevent the damage that deepfakes cause.”

Beyond detection tools, we need to educate people about this issue, social media platforms need to develop methods for stopping these videos from spreading, and policymakers need to outlaw certain types of deepfakes. These measures likely won’t solve the problem completely, but there’s only so much we can do.

“It will require a huge cultural shift,” Thurairatnam says.

While a video of Joe Rogan chatting about whatever it is Joe Rogan chats about, or just smoking pot with Elon Musk, might not be a threat to anyone but Joe Rogan, it’s clear we’re getting better and better at creating videos like this, and that skill could be used in nefarious ways.
