AI-generated faces are now so realistic that even people with exceptional face-recognition skills struggle to tell them apart from real photos. A new study on AI-generated faces shows that most of us perform only slightly better than chance when asked to judge whether a face is real or AI-generated, yet we remain highly confident in our abilities. This gap between confidence and reality matters, because it makes people and organisations more vulnerable to scams, fake profiles and misinformation built on convincing synthetic images.
What surprises researchers is not just that many people are fooled, but that the cues we trust most have quietly stopped working. The obvious glitches that once gave AI away, like mangled glasses or misshapen ears, are largely gone in top-tier systems, leaving faces that simply look “too perfect” and give the naked eye little to latch onto.
What did the AI-generated faces study actually test?
The AI-generated faces study was run by researchers at UNSW Sydney and the Australian National University, and published in the British Journal of Psychology. The team recruited 125 people and gave them an online test where each participant saw a series of portrait-style photos and had to decide whether each face was real or created by artificial intelligence. Before the test, the researchers screened out any images with obvious visual flaws so that participants were judging only among high-quality, highly realistic faces.
The sample included 36 super-recognisers, people with exceptional natural ability to remember and distinguish human faces, and 89 control participants with typical face-recognition skills. According to UNSW psychologist Dr James Dunn, people with average ability performed only slightly better than if they had been guessing at random, and even super-recognisers were only modestly more accurate than everyone else.[UNSW summary]
People with average face-recognition ability performed only slightly better than chance. Super-recognisers did better, but only by a slim margin.
Despite these middling scores, participants tended to feel very confident that they could “just tell” when a face was AI-generated. That mismatch between how good we think we are and how good we actually are is the central finding of the study, and it is what worries the researchers most.
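To make that mismatch concrete, here is a minimal Python sketch of how a calibration gap can be quantified: compare how often someone is actually right with how sure they say they are. The trial count, accuracy and confidence values below are invented for illustration; they are not the study’s data.

```python
import random

random.seed(0)

N_TRIALS = 100
TRUE_ACCURACY = 0.55      # assumption: only slightly better than the 0.50 chance level
MEAN_CONFIDENCE = 0.80    # assumption: participants feel roughly 80% sure of each call

# Simulate which trials were answered correctly at the assumed accuracy.
correct = [random.random() < TRUE_ACCURACY for _ in range(N_TRIALS)]

accuracy = sum(correct) / N_TRIALS
calibration_gap = MEAN_CONFIDENCE - accuracy  # positive = more confident than warranted

print(f"accuracy = {accuracy:.2f}, reported confidence = {MEAN_CONFIDENCE:.2f}, "
      f"overconfidence gap = {calibration_gap:.2f}")
```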
How realistic are AI-generated faces now?
Early AI face generators had telltale artefacts that many people learned to spot. You might notice distorted teeth, earrings that melted into hair, glasses that did not sit on the ears properly, or background details that seemed smeared into the subject’s skin. These were reliable rules of thumb for a few years. In the new AI-generated faces study, those obvious clues were deliberately removed before participants ever saw the images.
Modern face-generation systems, such as those based on diffusion models, are trained on millions of photos and tuned to match the statistical patterns of real faces very closely. The result is portraits with consistent lighting, plausible skin texture, natural-looking eyes and realistic depth of field. According to UNSW and ANU researchers, the most advanced outputs now rarely show the dramatic glitches that used to signal “this is fake.” Instead, AI faces look like they could easily belong in a professional headshot library, a casting catalogue or a social media profile gallery.[UNSW summary]
The most realistic AI outputs no longer show obvious flaws, leaving faces that are convincing at a glance and far harder to judge using familiar cues.
Interestingly, what still sets many AI faces apart is not that something is obviously “wrong” but that the face can seem a bit too polished. Researchers describe them as unusually average, highly symmetrical and statistically typical compared with a random real person. In other words, AI systems are very good at generating faces that sit near the centre of “face space” rather than the edges, so they may look more like movie supporting cast members than the messy variety of people you see on a bus.
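One way to picture “face-space centrality” is as distance from the average face in some embedding space: the closer a face sits to the mean of many real faces, the more statistically typical it is. The Python sketch below is purely illustrative and assumes a generic 128-dimensional embedding with random vectors standing in for real photos; it is not the researchers’ measure or a working detector.

```python
import numpy as np

def centrality_score(face_embedding: np.ndarray, real_embeddings: np.ndarray) -> float:
    """Distance from the centre of 'face space'; smaller = more statistically typical."""
    mean_face = real_embeddings.mean(axis=0)  # the "average face" of the reference set
    return float(np.linalg.norm(face_embedding - mean_face))

# Toy demo: random vectors stand in for embeddings of real photos (an assumption).
rng = np.random.default_rng(42)
reference = rng.normal(size=(1000, 128))

near_average_face = reference.mean(axis=0) + rng.normal(scale=0.05, size=128)
distinctive_face = rng.normal(size=128) * 3.0

print(centrality_score(near_average_face, reference))  # small: suspiciously "average"
print(centrality_score(distinctive_face, reference))   # large: far from the centre
```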
Why are people overconfident about spotting AI faces?

Overconfidence in spotting AI-generated faces seems to stem from two main issues: outdated experience and misleading cues. First, many people base their self-assessment on earlier generations of AI or on low-effort examples they have seen online. If the last time you paid close attention to synthetic images was when older tools frequently produced six fingers and warped pupils, it is natural to think you still have the upper hand. But those examples do not represent what cutting-edge face-generation systems can do today.
Second, the mental checklists people use often rely on cues that are no longer reliable. Participants in the UNSW study, and in online discussions of the test, frequently mentioned looking for odd jewellery, inconsistent backgrounds, or unnatural bokeh blur behind the subject. While these details can sometimes still reveal an AI image, they are not systematic enough to give you more than a small advantage over chance. The study found that even super-recognisers only showed a modest boost in accuracy, suggesting that there is not yet a simple, learnable rule set that reliably separates real from synthetic.[UNSW Face Test]
In psychology, this kind of miscalibration between confidence and performance is common in tasks that feel intuitive. Face perception is a skill humans rely on constantly, so we tend to trust our gut. The problem is that AI-generated faces are built to exploit those very intuitions, matching the statistics of real faces so closely that our usual shortcuts break down without us noticing.
How did super-recognisers perform in the AI faces study?
Super-recognisers are people who score exceptionally well on tests of face memory and identification. In other research, they can pick out a person from a crowd after a brief encounter or match unfamiliar faces across different angles and lighting with impressive accuracy. Given these talents, you might expect them to easily outclass everyone else when looking for AI-generated faces.
In the UNSW and ANU study, super-recognisers did perform better than control participants, but the gap was relatively small. Their accuracy was still far below what they typically achieve when dealing with real human faces. There was also substantial overlap between groups, with some non-super-recognisers outperforming some super-recognisers, which shows that this is not a simple experts-versus-novices story.[UNSW summary]
Some people may be emerging as “super-AI-face-detectors,” but even they are not close to perfect.
What did matter for the high performers was sensitivity to certain subtle qualities. Researchers found that super-recognisers paid more attention to “face-space centrality” (how statistically typical a face is) and to smooth symmetry. They seem to treat faces that are too average and too flawless as suspicious. By contrast, control participants were more influenced by cues like youthfulness. This pattern hints that some people may be learning to pick up on the statistical fingerprints of AI, but it also suggests that these strategies are fragile and might stop working as models keep improving.
What does this mean for scams, social media and trust?

AI-generated faces are already widely used for fake social media profiles, romance scams and “sock puppet” accounts that push political or commercial messages. If most users believe they can reliably spot fakes, but in reality are only slightly better than chance, that creates a serious vulnerability. You may feel reassured by a friendly profile picture that looks “real,” even when the person behind it does not exist.
The study’s authors argue that we need to update how we think about visual evidence. For decades, people treated a photograph of a face as strong proof that a real person was involved. As AI faces become more common in messaging apps, dating sites, professional networking platforms and even identity verification systems, that assumption becomes risky. According to UNSW researcher Dr James Dunn, a healthier stance is to treat profile pictures as just one clue among many, not as definitive evidence of identity.[UNSW summary]
- For individuals, this means being cautious about trusting strangers online based solely on how “genuine” their face looks.
- For organisations, it means not relying only on photo uploads to verify accounts or support decisions that carry financial, legal or safety consequences.
- For platforms and regulators, it points to the need for technical detection tools, provenance systems and clear policies around synthetic media.
Crucially, the researchers do not recommend trying to train the public in a list of visual tricks, because those tricks age quickly as models improve. Instead, the key lesson is to recognise the limits of your own judgement and to assume that highly realistic AI faces are now an ordinary part of the digital landscape.
Can you improve your own ability to spot AI-generated faces?
Right now, there is no evidence that a short tutorial or quick training session can turn the average person into a reliable AI-face detector. The UNSW team hints that some people may naturally be “super-AI-face-detectors,” but they are still working to understand exactly what cues those people use and whether those strategies can be taught to others. As face-generation systems evolve, any specific tells are likely to shift over time.
If you want to gauge your own skills, the researchers have made a brief public version of their test available online. The UNSW Face Test site lets you try a subset of the images, see your score and indicate whether you are willing to participate in future research. It is free, anonymous and does not require a login, and the data helps scientists study why some people detect AI-generated faces more accurately than others.[UNSW Face Test]
In daily life, the safest approach is not to chase perfect detection but to combine visual impressions with context. Ask questions like: Does this profile have a history of posts and interactions that make sense? Is there independent confirmation of this person’s identity, such as professional links or mutual contacts? Is the image being used in a way that pressures you to act quickly, send money or share sensitive information? AI faces by themselves are increasingly hard to trust or distrust at a glance, so focusing on the broader situation is often more effective than staring at pixels.
