Visual literacy used to mean understanding composition or color theory. In 2026, it means knowing if the person in the news report actually exists.
I’ve been cataloging the mistakes AI models make. While they are improving at a frightening speed, they are still statistical models, not artists. They make weird, specific errors because they don't understand the world—they only understand patterns of pixels.
Here is a comprehensive list of red flags to look for, plus a few things that look fake but are actually normal.
The Red Flags (The Tells)
Anatomy & Biology
- The Ear Lobe Test: AI struggles with complex cartilage. Look for ears that merge into the jawline or have nondescript blobs of skin.
- Teeth Count: Too many teeth. Or teeth that are all exactly the same size, like a strip of white Chiclets rather than individual incisors and canines.
- Hair Strands: Hair that starts on the shoulder and disappears into the neck. Real hair follows gravity and growth patterns.
- Pupil Shape: Non-circular pupils or reflections in the eyes that don't match (e.g., one eye shows a window, the other shows a lamp).
- Skin Texture: Overly smooth skin on older people. If a 70-year-old politician has the forehead of a teenager, be suspicious.
Physics & Objects
- The "Melting" Objects: Things in the background (vases, cars, fences) often look like they are melting or blending into each other.
- Inconsistent Shadows: Light hitting the face from the right, while the nose casts a shadow as if lit from the left.
- Glasses: Frames that disappear behind the head or have different shapes for the left and right lens.
- Jewelry: Necklaces that are just random loops of gold that don't actually connect.
- Clothing Textures: Zippers that go nowhere, or buttons that are just round blobs of color.
Context & Logic
- Nonsense Text: Street signs, book titles, or labels that are alien squiggles.
- Background People: The "NPCs" in the background often have nightmare faces—blurred, distorted, or missing features entirely.
- Architecture: Staircases that lead to walls. Windows that are crooked for no reason.
- Over-Saturation: Colors that are vibrant to the point of unreality.
- Perfect Composition: Everything is centered and balanced perfectly, unlike the chaotic nature of real candid photography.
- Generic Settings: The location looks like "Generic Office" or "Generic Park" with no recognizable landmarks.
- The Vibe: A vague sense that the image is "too epic" or dramatic for the situation.
The False Alarms (Don't Be Fooled)
Just because a photo looks weird doesn't mean it's AI. Here are three things that often trigger false accusations:
1. Heavy JPEG Compression
If an image has been screenshotted and reshared 50 times, it gets blocky. This "pixelation" can look like AI artifacts, but it's just digital decay.
2. Motion Blur
A waving hand in a real photo looks like a smudge. People often flag this as "AI failing to render a hand," but it's just shutter speed.
3. High-End Retouching
Celebrity photos are airbrushed to oblivion. Smooth skin and perfect lighting on a magazine cover have been "fake" since the 90s, but they aren't necessarily AI-generated.
When to Use a Detector
If you've gone through the red flags and still can't decide, that is when software helps. Our AI Image Detector analyzes the image file for invisible watermarks and noise patterns that are unique to diffusion models like Midjourney or DALL-E.
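Some of those machine-readable tells aren't exotic at all; they're plain file metadata. For instance, some Stable Diffusion front ends write the generation prompt into a PNG tEXt chunk (commonly under a keyword like "parameters"). As a rough illustration of what a file-level check can look like, here is a minimal Python sketch that walks a PNG's chunks and collects any tEXt entries; chunk layout follows the PNG spec, and CRC verification is skipped for brevity:

```python
import struct

def png_text_chunks(png_bytes: bytes) -> dict:
    """Collect tEXt chunks (keyword -> text) from a PNG file.

    Some AI image tools store the generation prompt in a tEXt chunk,
    so a hit here can be a strong tell. This sketch walks the chunk
    list only; it does not verify CRCs or decompress zTXt/iTXt chunks.
    """
    sig = b"\x89PNG\r\n\x1a\n"           # fixed 8-byte PNG signature
    if not png_bytes.startswith(sig):
        return {}
    out, i = {}, len(sig)
    while i + 8 <= len(png_bytes):
        # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
        length, ctype = struct.unpack(">I4s", png_bytes[i:i + 8])
        data = png_bytes[i + 8:i + 8 + length]
        if ctype == b"tEXt" and b"\x00" in data:
            key, _, value = data.partition(b"\x00")
            out[key.decode("latin-1")] = value.decode("latin-1")
        if ctype == b"IEND":             # end-of-image chunk
            break
        i += 12 + length                 # length + type + data + CRC
    return out
```

Finding such a chunk is near-conclusive; its absence proves nothing, since re-saving or screenshotting an image discards it.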
It’s useful for those "edge cases" where the visual tells are too subtle for the human eye.
Limitations
Remember: The best liars mix truth with fiction. The hardest images to spot are real photos that have been slightly edited with AI (like adding a person who wasn't there). No tool or checklist is 100% perfect against that yet.
FAQ
Are these red flags permanent?
No. AI models are updated constantly. Version 6 fixed the "six fingers" issue. Version 7 might fix the text issue. You have to keep updating your mental software.
Can I rely on metadata?
Rarely. Most social media platforms strip metadata (EXIF data) when you upload a photo, so you can't check the file info to see if it came from a camera.
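When you do have the original file, though, the check is quick. As a stdlib-only sketch (not a full EXIF parser), this function scans a JPEG's marker segments for an EXIF APP1 block; camera originals usually have one, while stripped or generated files often don't. Treat a miss as absence of evidence, not proof of fakery:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG contains an EXIF APP1 segment.

    Walks the marker segments at the start of the file; it does not
    decode the EXIF tags themselves.
    """
    if not jpeg_bytes.startswith(b"\xff\xd8"):           # SOI marker
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                               # SOS: image data begins
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True                                  # APP1/EXIF segment found
        i += 2 + length                                  # skip to next marker
    return False
```

Usage: `has_exif(open("photo.jpg", "rb").read())`. Anything downloaded from social media will almost always return False, which is exactly why metadata alone can't clear an image.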
Is it safe to share if I'm not sure?
If it's inflammatory or damaging, and you can't verify it: don't share it.
Conclusion
We have to accept that we can no longer trust our eyes implicitly. That sounds scary, but it's also empowering. Once you learn these 17 tells, you stop being a passive consumer and start being an active analyst. You start seeing the matrix.