How to Spot an AI-Generated Image Like the ‘Balenciaga Pope’

For years, the public has been warned about the risks posed by AI-generated images, also known as deepfakes. But until very recently, it was relatively easy to distinguish an AI-generated image from a photograph.

No longer. Over the weekend, an AI-generated image of Pope Francis wearing a Balenciaga puffer jacket went viral online.

In just a matter of months, publicly accessible AI image generation tools have grown powerful enough to produce photorealistic imagery. While the image of the Pope did contain some telltale signs of fakery, it was convincing enough to fool many internet users—including the celebrity Chrissy Teigen. “I thought the Pope’s puffer jacket was real and didn’t give it a second thought,” she wrote. “No way am I surviving the future of technology.”


Although AI-generated images have gone viral before, none have fooled so many people so quickly as the image of the Pope. The picture was whimsical, and to be sure, much of its virality was down to people knowingly sharing it for laughs. But history may regard the Balenciaga Pope as the first truly viral misinformation event fueled by deepfake technology, and a sign of worse to come.

Victims of deepfakes, especially women targeted in nonconsensual deepfake pornography, have warned about the risks of the technology for years. But in recent months, image-generating tools have become far more accessible and powerful, producing higher-quality fabricated images of every kind. As the power of AI rapidly advances, it will only get harder to discern whether an image or video is real or fake. That could have a significant impact on the public’s susceptibility to foreign influence operations, the targeted harassment of individuals, and trust in the news.

Here are some tips to help spot AI-generated images today, and avoid being fooled by even more convincing generations of the technology in the future.

How to spot an AI-generated image today

If you look closely at the image of the Balenciaga Pope, a few telltale signs of its AI genesis emerge. The crucifix hanging at his chest is held inexplicably aloft, with only the white puffer jacket where the other half of the chain should be. In his right hand is what appears to be a blurry coffee cup, but his fingers are closed around thin air rather than the cup itself. His eyelid somehow merges into his glasses, which in turn flow into their own shadow.

Source image courtesy @art_is_2_inspire via Instagram

To spot an AI-generated image today, it often helps to look at these intricate details. AI image generators are essentially pattern replicators: they’ve learned what the Pope looks like, and what a Balenciaga puffer jacket might look like, and they’re miraculously able to squish the two together. But they don’t (yet) grasp the laws of physics. They have no concept of why a crucifix shouldn’t be able to hover in midair without a chain supporting it, or why eyeglasses and the shadow behind them aren’t one single object. It is in these often-peripheral parts of an image that humans can intuitively spot the inconsistencies an AI doesn’t know it is making.

But it won’t be long before AI technology improves enough to correct these sorts of errors. Just weeks ago, Midjourney—the AI tool used to generate the image of the Pope—was incapable of producing realistic images of human hands. To check whether an image of a person was AI-generated, you could look for seven blurry fingers or some other alien appendage. Not anymore. The newest version of Midjourney can generate realistic-looking human hands, removing what was perhaps the easiest way to identify an AI image. With AI image generators advancing so fast, the advice above may quickly go out of date.

How to avoid being fooled in the future

For the moment, media-literacy techniques might be your best defense against AI-generated images. Always ask: Where did this image come from? Who is sharing it, and why? Is it contradicted by other reliable information you have access to? These questions won’t help you catch 100% of fake images, but they will equip you to spot more of them, and to become more resistant to other forms of misinformation too.

When it comes to detecting viral AI-generated images, your best bet is often checking what others are saying. Google and other search engines offer a reverse image search tool, which lets you check where an image has already been shared on the internet and what people are saying about it. This can show you whether experts or reliable publications have determined an image is fake, or help you find the first place an image was shared. If an image purported to have been taken by a news photographer was first posted by a pseudonymous internet user on a social media site, that’s a reason to question its authenticity.
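If you want to automate that first step, the sketch below shows one way to launch a reverse image search from Python. It assumes Google still honors its long-standing searchbyimage URL parameter (that endpoint may change, and Bing and TinEye offer similar lookups), and the image URL is a hypothetical placeholder.

```python
# A minimal sketch: open a reverse image search for a publicly hosted
# image in your default browser. The searchbyimage endpoint is an
# assumption based on Google's historical URL format and may change.
import webbrowser
from urllib.parse import quote

def reverse_image_search(image_url: str) -> None:
    webbrowser.open(
        "https://www.google.com/searchbyimage?image_url=" + quote(image_url, safe="")
    )

# Hypothetical example URL; substitute the image you want to check.
reverse_image_search("https://example.com/suspect-image.jpg")
```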

If you’re a Twitter user, the Community Notes feature can often give you more context about an image—though sometimes only well after a tweet is first posted. After the image of the Pope had already gone viral, a note was appended to the original tweet that shared it: “This image of Pope Francis is an AI generated picture and not real,” the note reads. “The image was created on the AI image-generating app Midjourney.”

Are there any technological solutions?

There is plenty of software for sale that claims to be able to detect deepfakes, including an offering from Intel that says it is 96% accurate at detecting deepfake videos. But there are few, if any, free online tools that can reliably tell you whether an image is AI-generated or not. One free AI image detector, hosted on the AI platform Hugging Face, was able to correctly detect with 69% certainty that the image of the Balenciaga Pope was AI-generated. But presented with an AI-generated image of Elon Musk, also produced by the latest version of Midjourney, the tool gave the wrong answer, saying it was 54% certain the image was genuine.
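If you want to try a detector like this yourself, the sketch below queries an image-classification model from Python using Hugging Face’s transformers pipeline. The model id and image file name are placeholder assumptions; any publicly hosted detector model can be swapped in, and its scores should be treated as a hint, not a verdict.

```python
# A minimal sketch of running a free AI-image detector locally via the
# Hugging Face transformers image-classification pipeline.
from transformers import pipeline

# Placeholder model id: substitute whichever detector you want to test.
detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")

# Accepts a local file path or an image URL; returns per-label scores,
# e.g. [{"label": "artificial", "score": 0.69}, ...]
for result in detector("balenciaga_pope.jpg"):  # placeholder file name
    print(f"{result['label']}: {result['score']:.0%}")
```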

When AI researchers at software company Nvidia and the University of Naples set out to find how difficult it would be to build an AI-generated image detector, they discovered several limitations, according to a November 2022 paper they published. They found that AI image generators do leave invisible, telltale signs in the images they create, and that these “hidden artifacts” look slightly different depending on which program was used to generate the image. The bad news: these artifacts tend to become harder to detect whenever an image is resized or its quality is reduced—as often happens when images are shared and reshared on social media. The researchers built a tool that was able to detect that the image of the Balenciaga Pope was AI-generated, according to Annalisa Verdoliva, one of the paper’s co-authors. But while the tool’s code is available online, it is not embedded in a web app, making it hard for the average user to access.
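The paper’s detector relies on learned forensic features, but a simplified way to build intuition for these hidden artifacts is to look at an image’s frequency spectrum, where generator fingerprints often show up as regular, grid-like peaks. The sketch below is not the researchers’ actual tool, just a way to eyeball the spectrum yourself; the file names are placeholders.

```python
# A simplified illustration of frequency-domain "fingerprints," not the
# Nvidia/University of Naples detector: compute an image's 2D FFT and
# save the log-magnitude spectrum for visual inspection.
import numpy as np
from PIL import Image

def log_spectrum(path: str) -> np.ndarray:
    # Grayscale the image, take a 2D FFT, and shift the zero frequency to
    # the center so periodic artifacts appear as symmetric bright spots.
    gray = np.array(Image.open(path).convert("L"), dtype=np.float64)
    spectrum = np.fft.fftshift(np.fft.fft2(gray))
    return np.log1p(np.abs(spectrum))

spec = log_spectrum("suspect_image.jpg")  # placeholder file name
# Unusual repeating peaks away from the center can hint at synthetic
# upsampling artifacts, though resizing and recompression blur them.
Image.fromarray((255 * spec / spec.max()).astype(np.uint8)).save("spectrum.png")
```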

Another method, described for the first time in a paper published earlier this month, may yield more generalizable results. A team of researchers discovered that when they fed a cutting-edge AI image generator, known as a diffusion model, an image that had been generated by AI, the program could easily produce an almost exact copy of the input image. By contrast, the tool found it difficult to reproduce even an approximate copy of a real photograph. Their finding has not been turned into an accessible online tool yet, but it presents a ray of hope that in the future it may be possible for an app to reliably detect whether an image is AI-generated or not.
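As a rough illustration of that reconstruction idea, the sketch below encodes and decodes an image through Stable Diffusion’s autoencoder (via the diffusers library) and compares the reconstruction error to a threshold. This is a simplified stand-in for the paper’s method, which inverts images through a full diffusion model; the model id, threshold, and file name are all illustrative assumptions.

```python
# A minimal sketch of the reconstruction-error idea, using Stable
# Diffusion's VAE as a simplified stand-in for full diffusion inversion.
import numpy as np
import torch
from PIL import Image
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

def reconstruction_error(path: str) -> float:
    img = Image.open(path).convert("RGB").resize((512, 512))
    x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0  # scale to [-1, 1]
    x = x.permute(2, 0, 1).unsqueeze(0)  # shape (1, 3, 512, 512)
    with torch.no_grad():
        recon = vae.decode(vae.encode(x).latent_dist.mean).sample
    return torch.mean((recon - x) ** 2).item()

# The paper's finding: AI-generated images reconstruct almost exactly,
# so a low error suggests the image may itself be AI-generated.
THRESHOLD = 0.002  # made-up value for illustration; would need tuning
err = reconstruction_error("balenciaga_pope.jpg")  # placeholder file name
print("likely AI-generated" if err < THRESHOLD else "likely a real photo")
```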

Write to Billy Perrigo at billy.perrigo@time.com. Video by Andrew D. Johnson at andrew.johnson@time.com.