
AI-generated Photos Take Advantage of These Optical Illusions — Here’s How to Stump Them

We keep getting fooled by fake images. Our brains are partly to blame.

Written by Alice Sun
Yaroslav Kushta/Moment/Getty Images

Pablo Xavier had a confession. That viral image of the puffer-wearing pope? It’s a fake. In fact, Xavier, a construction worker from outside Chicago, told BuzzFeed News that he created the image with an AI art tool called Midjourney. Viewers around the world were shaken. “No way am I surviving the future of technology,” tweeted Chrissy Teigen.

But were the signs of AI manipulation hiding in plain sight? Look closely, and the pope in the image has glitchy-looking hands. Similar mistakes, like strangely proportioned arms, also hint at the synthetic origins of former President Trump’s fake arrest photos.

It turns out that AI-generated images share many common flaws that we don’t easily catch. Researchers say that’s due both to how our brains process visuals and to a level of realism from these new generators unlike anything we’ve seen before. Even so, there are ways our brains can help us spot these fake photos, including one tried-and-true method that has long been used to fight scams and misinformation.

A brief history of image manipulation

Image manipulation is nothing new. War photographers have been staging scenes since the 19th century: first by physically rearranging objects, as when Roger Fenton moved cannonballs onto the road in his famous 1855 photograph Valley of the Shadow of Death, and later with Photoshop, as when a photojournalist removed a colleague’s video camera from an image of a Syrian rebel fighter in 2014. More recently, face-swapping filters on photos and videos (like putting Vladimir Putin’s face on another person) have fueled disinformation around the war in Ukraine.

But AI-generated images present a new challenge. They’re what experts call synthetic media: content generated entirely by technology. This includes text typed out by ChatGPT and visuals from text-to-image generators like Midjourney, DALL-E 2, and Stable Diffusion. The accessibility of these new tools has sparked concerns about a new wave of disinformation, the rise of a “synthetic decade,” Beatriz Almeida Saab, a research associate at Democracy Reporting International, tells Inverse.

AI image generators are not perfect. “A lot of times there's a lot of random weird things going on,” says Matt Groh, an MIT computational social scientist who studies how we interact with machine-manipulated media. For years, many image generators relied on generative adversarial networks (GANs), in which two competing neural networks learn from large datasets: one generates candidate images, while the other tries to flag them as fake. (Newer text-to-image tools like Midjourney, DALL-E 2, and Stable Diffusion instead use diffusion models, which learn to reconstruct images from noise.) While some images made this way appear ultra-realistic, not all do, since these systems learn statistical patterns from data rather than how things actually work in the real world.
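To make the adversarial idea concrete, here is a minimal, illustrative sketch of GAN training in PyTorch. The tiny fully connected networks, their sizes, and the random stand-in for real images are assumptions chosen for demonstration; real image GANs use deep convolutional architectures.

    import torch
    import torch.nn as nn

    # Tiny illustrative networks; real image GANs use deep convolutional models.
    latent_dim, image_dim = 16, 64

    generator = nn.Sequential(
        nn.Linear(latent_dim, 128), nn.ReLU(),
        nn.Linear(128, image_dim), nn.Tanh(),   # a fake "image" as a flat vector
    )
    discriminator = nn.Sequential(
        nn.Linear(image_dim, 128), nn.LeakyReLU(0.2),
        nn.Linear(128, 1), nn.Sigmoid(),        # probability the input is real
    )

    loss_fn = nn.BCELoss()
    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    real_batch = torch.rand(32, image_dim) * 2 - 1  # stand-in for real training data

    for step in range(100):
        # 1) Train the discriminator to separate real images from generated ones.
        fakes = generator(torch.randn(32, latent_dim)).detach()  # no grads to G here
        d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1))
                  + loss_fn(discriminator(fakes), torch.zeros(32, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # 2) Train the generator to fool the discriminator into answering "real".
        g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))),
                         torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

The key design choice is the alternating updates: as the discriminator gets better at catching fakes, the generator is forced to produce ever more convincing ones.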

A few things, in particular, stump image generators, says Michoel Moshel, a neuropsychologist at Macquarie University in Australia. Features with many complex parts that vary from image to image, specifically ears, teeth, hair, and hands (especially hands), can confuse an algorithm, he says. When rendering these elements, AI produces strange artifacts: hair that looks blurry or washed out, teeth that are too small or too large, and hands with weird proportions or an unnatural number of fingers.

“You’ll often see that one eye looks like it's pointing this way. One eye looks like it's pointing straight,” says Moshel. He adds that backgrounds often look unnatural in AI-created images. “The foreground image might be really, really realistic, but the back sort of looks almost like it's got partial rendering or just looks odd. Like it's too blurred to be real.”

Why are humans terrible at spotting fake images?

While these blips can help reveal an image’s synthetic origins, few of them are obvious at a glance. Our brains can’t process everything we see at once, so we rely on experience to decide what to pay attention to and what to ignore, and that tendency causes us to miss many AI artifacts. It’s also why games like Spot the Difference take us so long, and why optical puzzles like the “Thatcher illusion,” an upside-down portrait of Margaret Thatcher that looks normal even though her eyes and mouth have been flipped, befuddle us, explains Groh.
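For the curious, “thatcherizing” a photo takes only a few lines of image editing. Below is an illustrative sketch using the Pillow library; the filename and region coordinates are placeholders you would adjust to the actual eye and mouth positions in your portrait.

    from PIL import Image, ImageOps

    # Load a portrait; the filename and box coordinates below are placeholders.
    face = Image.open("portrait.jpg")

    # Flip the eye and mouth regions upside down, leaving the rest intact.
    for box in [(120, 90, 260, 140), (150, 200, 230, 250)]:  # (left, top, right, bottom)
        region = face.crop(box)
        face.paste(ImageOps.flip(region), box)

    # Viewed upside down, the edited face still reads as normal at a glance.
    face.rotate(180).save("thatcherized.jpg")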

A growing body of research shows that people aren’t very good at detecting artificially manipulated media, says Nils Köbis, a behavioral scientist at the Max Planck Institute for Human Development. A number of studies have used AI-generated faces to test whether humans can tell real portraits from fakes. One study published last year found that people correctly identified a fake face only about 50 percent of the time, roughly the accuracy of a coin flip.

Köbis’s research also reveals another explanation for our inability to identify fake media: People are gullible and overconfident. In a study he published in 2021, participants tended to adopt a “seeing is believing” mindset, the assumption that, absent obvious red flags, what we see is real. The effect is especially pronounced when we browse social media, since we rely on mental shortcuts, called heuristics, to judge credibility while scrolling through our feeds.

In fact, fake visuals are so convincing that they have even been found to generate false memories. In a 2013 study, over half of the participants falsely remembered a fabricated political event after being shown a doctored video of it. “Even when you see something that has been debunked…our subconscious still processes that information,” says Saab, who co-authored a recent report on the threats of text-to-image generators.

“From a psychological perspective, I think that's somewhat worrying,” says Köbis, because it suggests that, evolutionarily, we may not be equipped to adapt to these technologies. The impact of synthetic media is already being felt, adds Saab: last February, a group of Israeli contractors known as “Team Jorge” claimed to have manipulated 33 elections using, among other tools, AI-generated disinformation campaigns.

How to train your mind to spot AI-generated photos

One way that may help us spot false images is critical thinking, the brain’s secondary defense. Researchers recommend examining context (where a piece of media comes from) as the first step in judging the credibility of anything we encounter online. “We're getting to a point where synthetic media is getting so realistic that we're going to start to have to treat images and videos and audio that we see online without any context just like we treat a statement that we see without any context,” says Groh. In other words, we need to approach visual media with the same skepticism as a sentence on a page.

However, experts warn that it’s only a matter of time before humans are unable to detect synthetic media at all, even with skepticism and increased awareness of these tools. Midjourney’s latest version, Midjourney V5, can now reliably render realistic-looking hands, long a telltale weakness of earlier iterations. “Things have gotten better in the last few years, both on the generation side and the protection side. It's a cat and mouse game,” says Groh.

Instead, the way forward, researchers say, is to find technical workarounds. Organizations including Adobe and Microsoft are banding together, through efforts like the Coalition for Content Provenance and Authenticity (C2PA), to create systems that embed source information in a file’s metadata. New image search tools can help trace whether a piece of media originated from AI. These solutions, and others, add safeguards for our brain’s faulty factory settings in an increasingly synthetic world.
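To illustrate the metadata idea in the simplest terms, here is a sketch that dumps whatever EXIF tags an image file carries, using the Pillow library. This is not the provenance standard itself (C2PA embeds cryptographically signed manifests in a dedicated section of the file, not plain EXIF), and the filename is hypothetical; it simply shows the kind of embedded source information a verification tool would inspect.

    from PIL import Image, ExifTags

    def dump_metadata(path: str) -> dict:
        """Return the image's EXIF tags, keyed by human-readable name."""
        exif = Image.open(path).getexif()
        return {ExifTags.TAGS.get(tag_id, tag_id): value
                for tag_id, value in exif.items()}

    # Tags like "Software" or "Make" can hint at an image's origin, but note
    # the weakness a signed standard is meant to fix: plain metadata is often
    # absent from AI output and is trivial to strip or forge.
    print(dump_metadata("downloaded_photo.jpg"))  # hypothetical file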
