Earlier this month you may have seen a website named ThisPersonDoesNotExist.com doing the rounds, which uses AI to generate startlingly realistic fake faces. Well, here’s the sequel: WhichFaceIsReal.com, which lets you test your ability to distinguish AI-generated fakes from the genuine article. Just head to the site and click on who you think is the real person!
WhichFaceIsReal.com has a higher purpose, though. It was set up by two academics from the University of Washington, Jevin West and Carl Bergstrom, both of whom study how information spreads through society. They think the rise of AI-generated fakes could be trouble, undermining society’s trust in evidence, and they want to educate the masses.
“When a new technology like this comes along, the most dangerous period is when the technology is out there but the public isn’t aware of it,” Bergstrom tells The Verge. “That’s when it can be used most effectively.”
“So what we’re trying to do is educate the public, make people aware that this technology is out there,” says West. “Just like eventually most people were made aware that you can Photoshop an image.”
Both sites use a machine learning method known as a generative adversarial network (or GAN for short) to generate their fakes. These networks operate by poring through huge stacks of data (in this case, a lot of portraits of real people), learning the patterns within them, and then trying to replicate what they’ve seen.
The reason GANs are so good is that they test themselves. One part of the network generates faces, and the other compares them to the training data. If it can tell the difference, the generator is sent back to the drawing board to improve its work. Think of it like a strict art teacher who won’t let you leave the class until you draw the right number of eyes on your charcoal portrait. There’s no room for AI Picassos — realism only.
These techniques can be used to manipulate audio and video as well as images. Although there are limitations to what such systems can do (you can’t type a caption for a picture you want to exist and have it magicked into being), they are improving steadily. Deepfakes can turn videos of politicians into puppets, and they can even turn you into a great dancer.
In the case of AI-generated faces, Bergstrom and West note that one malicious use might be spreading misinformation after a terrorist attack. For example, AI could be used to generate a fake culprit whose image is then circulated online and spread across social networks.
In these scenarios, journalists usually try to verify the source of an image using tools like Google’s reverse image search. But that wouldn’t work on an AI fake. “If you wanted to inject misinformation into a situation like that, if you post a picture of the perpetrator and it’s someone else it’ll get corrected very quickly,” says Bergstrom. “But if you use a picture of someone that doesn’t exist at all? Think of the difficulty of tracking that down.”
They note that academics and researchers are developing plenty of tools that can spot deepfakes. “My understanding is that right now it’s actually quite easy to do,” notes West. And if you took the test above, you probably found you could differentiate between the AI-generated faces and real people. There are a number of tells, including asymmetrical faces, misaligned teeth, unrealistic hair, and ears that, well, just don’t look like ears.
But these fakes will get better. “In another three years [these fakes] will be indistinguishable,” says West. And when that happens, knowing will be half the battle. Says Bergstrom: “Our message is very much not that people should not believe in anything. Our message is the opposite: it’s don’t be credulous.”