As artificial intelligence becomes increasingly adept at creating realistic images, it’s crucial to develop a discerning eye. The internet is now awash with visuals that may seem genuine but are actually fabricated by AI. However, don’t despair – humans aren’t destined to be perpetually deceived. By understanding the nuances of AI image generation, you can learn to identify these digital forgeries.
Diffusion models, the AI systems behind text-to-image generation, learn by reversing the process of adding noise to images. While remarkably effective, this learning process often leaves behind telltale signs, or “artifacts and implausibilities,” as Matt Groh, an assistant professor at the Kellogg School of Management, describes them. These imperfections occur because the models, despite their sophistication, lack a true understanding of the real world.
“These models learn to reverse noise in images and generate pixel patterns in images that match text descriptions,” explains Groh. “But they are not inherently trained to grasp concepts like spelling, the laws of physics, human anatomy, or even simple functional interactions like how buttons fasten a shirt or how backpack straps work.”
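For readers who want to see what “reversing noise” means in practice, here is a minimal, hypothetical sketch of the training step behind diffusion models. The tiny convolutional network, the noise schedule, and all variable names are illustrative assumptions, not the actual code behind Midjourney, Stable Diffusion, or Firefly, and real systems also condition on the text prompt, which this sketch omits.

```python
# Minimal illustrative sketch (assumptions, not any vendor's real code):
# diffusion training corrupts a clean image with Gaussian noise and teaches a
# network to predict that noise, so generation can later reverse the noising.
import torch
import torch.nn as nn

T = 1000                                    # number of noising steps
betas = torch.linspace(1e-4, 0.02, T)       # noise schedule
alpha_bars = torch.cumprod(1.0 - betas, 0)  # cumulative signal retention

def add_noise(x0, t):
    """Forward process: blend clean images x0 with Gaussian noise at step t."""
    noise = torch.randn_like(x0)
    a = alpha_bars[t].sqrt().view(-1, 1, 1, 1)
    b = (1 - alpha_bars[t]).sqrt().view(-1, 1, 1, 1)
    return a * x0 + b * noise, noise

# Toy stand-in for the denoiser; real models are large text-conditioned U-Nets
# or transformers, and they also take the timestep t as an input.
model = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(32, 3, 3, padding=1))

x0 = torch.rand(8, 3, 64, 64)               # a batch of "clean" images
t = torch.randint(0, T, (8,))               # random noise levels per image
noisy, noise = add_noise(x0, t)
loss = nn.functional.mse_loss(model(noisy), noise)  # learn to predict the noise
loss.backward()
```

Because the objective is only to match plausible pixel patterns, nothing in this loop encodes spelling, anatomy, or physics, which is exactly why the artifacts described below appear.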
Groh and his colleagues, Negar Kamali, Karyn Nakamura, Angelos Chatzimparmpas, and Jessica Hullman, conducted research using AI systems like Midjourney, Stable Diffusion, and Firefly. Their work involved generating and analyzing a collection of images to pinpoint the most frequent artifacts and implausibilities that betray an AI-generated image. Their findings highlight that AI progress isn’t always linear, with image quality sometimes unexpectedly declining, particularly in depictions of well-known figures.
Drawing from this research, they’ve identified five key areas to examine when you find yourself asking, “Is this photo AI?” These practical takeaways, complete with illustrative examples, will empower you to become a more critical consumer of online imagery.
1. Anatomical Oddities: The Body Language of AI Fakes
One of the quickest ways to assess whether a photo is AI-generated is to scrutinize the anatomy of the subjects. AI models frequently struggle to render human bodies accurately, often producing figures that deviate from reality in subtle yet noticeable ways. Look for anatomical implausibilities: do limbs seem to appear or disappear? Is there an unusual number of fingers or toes? Do body parts merge into the background or into other figures in an unnatural way? Does a neck stretch to an improbable length? AI-generated portraits sometimes feature teeth that are misaligned or asymmetrical and eyes that appear excessively glossy, unfocused, or hollow.
“If something about a person’s appearance seems slightly off,” Groh advises, “check the shape of their pupils. AI often struggles to render perfectly circular pupils.” Pupil shape can be a subtle but effective indicator of AI involvement.
For images purporting to depict public figures, a valuable verification technique is to compare them against known, authentic photographs from reputable sources. Deepfakes of figures like Pope Francis or Princess Kate Middleton, for instance, can be cross-referenced with official portraits to reveal inconsistencies in features such as the Pope’s ears or Middleton’s nose. “Facial portraits can be particularly tricky to evaluate because they often lack the broader contextual clues present in more complex scenes,” Kamali notes.
While AI is becoming increasingly proficient at generating realistic faces, hands remain a significant challenge. AI-generated hands often exhibit anomalies like missing fingernails, fused fingers, or disproportionate sizes. However, it’s important to remember that human bodies exhibit natural variation. An extra finger or a missing limb doesn’t automatically confirm an image as fake. “False positives can occur with anatomical checks,” Groh cautions. “Some individuals naturally have six fingers on one hand.”
2. Stylistic Quirks: The AI Aesthetic
Another clue that an image might be AI-generated lies in its overall aesthetic. AI-generated images often possess a distinct stylistic signature that deviates from genuine photography. Skin might appear unnaturally smooth and shiny, colors overly vibrant, or faces almost too perfect. When assessing “is this photo AI?”, ask yourself: “Does the image have an artificial sheen? Does the color saturation seem excessive? Does it resemble the hyper-perfected imagery often seen on platforms like Instagram?”
While these stylistic artifacts might not be immediately apparent in isolation, viewing a series of AI-generated images can sharpen your perception. “After seeing just a few examples, you start to recognize a pattern,” Groh notes. “It’s a subtle ‘Photoshopped’ or ‘too perfect’ quality. The over-perfection is a giveaway.”
This stylistic tendency stems partly from the training data used for AI models. These models are frequently trained on images of professional models—individuals whose profession involves being photographed to look their best and having their images widely circulated. “They’re paid to have pictures taken of them,” Groh points out. “Everyday people simply don’t have as many readily available images of themselves online for AI to learn from.”
Other stylistic indicators include mismatches between the lighting of the subject and the background, glitch-like smudges, or backgrounds that appear to be composited from disparate scenes. Overly dramatic, cinematic backgrounds, wind-swept hair, and hyperrealistic detail can also raise suspicion, although genuine photographs can be edited to achieve similar effects. These stylistic elements are helpful clues but not definitive proof of AI generation. “It’s not a guaranteed confirmation of a fake,” Groh clarifies. “It’s more about triggering your intuition, a ‘spidey sense’ that something might be off.”
Interestingly, intentionally minimizing these typical AI stylistic traits can make detection more challenging. “We were surprised how easily AI-generated images slipped past people’s detection when we reduced the overly cinematic style often associated with AI,” Nakamura states.
3. Functional Flaws: When Objects Misbehave
Since text-to-image AI models lack real-world understanding of how things function, examining the objects within an image and how people interact with them can reveal AI fabrication. Functional implausibilities offer another avenue for answering “is this photo AI?”. Look for objects behaving in illogical ways.
A sweatshirt displaying a college logo might feature misspelled words or an unconventional font. A diner in a restaurant scene could have their hand inexplicably inside a hamburger. A tennis racket might have strings that are unnaturally slack. A pizza slice that should be floppy might be rigidly straight.
“More complex scenes provide more opportunities for these functional errors to emerge, and offer additional context that aids in detection,” Kamali explains. “Group photos, for example, tend to expose inconsistencies more readily.”
Pay close attention to the details of objects, such as buttons, watches, or buckles. Groh highlights “funky backpacks” as a classic AI artifact. “When there are interactions between people and objects, things often just don’t look quite right,” Groh observes. “Sometimes a backpack strap appears to merge into a shirt. That’s not how backpack straps work. Logically, the backpack would likely fall off.”
4. Physics Follies: Defying the Natural Laws
Expanding on the theme of real-world understanding, inconsistencies in shadows, reflections, and perspective can also betray an AI-generated image. Violations of physics are another red flag when determining “is this photo AI?”. AI-generated images frequently exhibit warping, distorted depth, and perspective errors. Imagine staircases that simultaneously appear to ascend and descend, or stairs that lead nowhere at all.
“With more complex scenes, it becomes clear that diffusion models lack an understanding of basic logic,” Nakamura points out, “including the logic of human anatomy and the consistent rules of light, shadows, and gravity.”
In AI-generated photos, shadows often defy the laws of physics. They might fall at conflicting angles from their apparent light sources, as if illuminated by multiple suns. Reflections in mirrors might present a different image than reality, such as a person wearing a short-sleeved shirt while their reflection shows a long-sleeved one. “As cool as it might be, your mirror reflection can’t spontaneously change your shirt,” Groh humorously notes.
5. Sociocultural Slip-ups: Context is Key
Finally, AI models often lack a nuanced understanding of sociocultural context. Sociocultural implausibilities can be a less technical but equally potent indicator when asking “is this photo AI?”.
If your understanding of a public figure’s views clashes dramatically with the image’s portrayal, skepticism is warranted. For example, an AI-generated image of Taylor Swift endorsing a political candidate drastically opposed to her known values should raise immediate red flags. “When you’re attuned to sociocultural context, you can instantly sense when something in an image feels ‘off’,” Kamali explains. “This feeling often prompts closer scrutiny, leading you to uncover other types of artifacts and inconsistencies.”
Lacking cultural sensitivity and historical awareness, AI models can produce jarring and improbable images. A subtle example is an image of two Japanese businessmen embracing in an office setting. “Karyn Nakamura, who is Japanese, pointed out that male hugging is uncommon in Japan, particularly in professional environments,” Groh explains. “It’s not impossible, but it’s not typical.”
While complete cultural and historical knowledge is impossible for any individual, some inconsistencies will be immediately obvious. You don’t need deep historical expertise to recognize the absurdity of a photo depicting Martin Luther King, Jr. holding an iPhone. “As humans, we rely heavily on context,” Groh concludes. “We consider cultural norms and historical plausibility. If something seems wildly out of place, it’s worth asking, ‘is this photo AI?’”
Editor’s Note: For more detailed examples and guidance, Groh and his team have compiled a comprehensive report available for review here.
Featured Faculty
Matthew Groh
Assistant Professor of Management and Organizations
About the Writer
Anna Louie Sussman is a writer based in New York.