DeepPrivacy AI Uses Deepfake Tech to Anonymize Faces and Protect Privacy

Recent advances in artificial intelligence are helping to accelerate new discoveries in medicine and science, in addition to expanding the range of creative possibilities in art, music and literature. But like any other tool, AI can be put to more nefarious ends, such as generating faked images or videos (known as “deepfakes”) and facilitating mass surveillance, a practice on the rise in authoritarian countries like China and even in liberal democracies. In the case of deepfakes, AI algorithms can produce eerily convincing videos of politicians, while AI-assisted facial recognition systems can identify people who dare to speak out against repressive regimes.
But it may be possible to fight fire with fire, so to speak, by using the same technology to thwart facial recognition algorithms and generate a deepfaked appearance in order to protect one’s privacy. Researchers at the Norwegian University of Science and Technology have developed such a technology, which uses machine learning algorithms to seamlessly and automatically replace one’s face in real time with a variety of anonymous faces, sourced from a database of 1.47 million images. Recently presented at the International Symposium on Visual Computing, the team’s paper suggests that such “de-identification” technology could help safeguard the identities of people who want to remain anonymous in photographs or in livestreamed online videos.
Dubbed DeepPrivacy, the system utilizes what is known as a generative adversarial network (GAN) to swap out the original face in a photo or video with a different one that is synthesized from a database of over a million Creative Commons-licensed facial images taken from Flickr. While the concept is nothing new, what’s intriguing about this work is that the facial substitution happens almost seamlessly, with the GAN dynamically retaining original “conditions” such as the subject’s facial expressions, the existing background, and the initial pose of the subject’s body. In addition, as an added layer of protection, the DeepPrivacy system is designed to not use any privacy-sensitive information in the original face; instead, it uses “keypoints” indicating the positioning of the nose, mouth, eyes and so on in order to simulate a new face.
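To make that "no privacy-sensitive information" idea concrete, here is a minimal sketch of how such conditioning inputs might be assembled: the pixels inside the face box are discarded entirely, and only sparse keypoint locations plus the surrounding background are handed to the generator. The function name, shapes and keypoint encoding below are illustrative assumptions, not DeepPrivacy's actual interface.

```python
import numpy as np

def build_generator_input(image, face_box, keypoints):
    """image: HxWx3 array, face_box: (x0, y0, x1, y1), keypoints: list of (x, y)."""
    x0, y0, x1, y1 = face_box
    conditioned = image.copy()
    conditioned[y0:y1, x0:x1] = 0  # discard every privacy-sensitive pixel in the face region

    # Encode keypoints (eyes, nose, mouth, ...) as a sparse heatmap channel
    # so the generator knows where facial features should go without ever
    # seeing what the original features looked like.
    heatmap = np.zeros(image.shape[:2], dtype=np.float32)
    for kx, ky in keypoints:
        heatmap[int(ky), int(kx)] = 1.0

    return conditioned, heatmap
```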

First column: original photo. Second column: bounding box that tells the model where anonymization is needed. Third column: final result.
As we’ve seen previously in experiments that attempt to fool both humans and machines, generative adversarial networks function by pitting two neural networks against each other: a “generative” network that pumps out faked images, and a “discriminator” network that evaluates the simulated images as either real or counterfeit. The goal is for the generative network to increase the error rate of the discriminator network by “fooling” it over and over again, so that the system can gradually train itself to generate more and more persuasive — but ultimately bogus — images.
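For readers who want to see that generator-versus-discriminator dynamic in code, the following is a minimal, generic GAN training loop in PyTorch. The tiny fully connected networks and sizes are placeholder assumptions chosen for brevity; this illustrates the adversarial training idea in general, not DeepPrivacy's actual architecture.

```python
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 128 * 128  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator step: learn to score real images high and generated images low.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to "fool" the discriminator into scoring fakes as real.
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

Over many such steps, the generator's output drifts toward images the discriminator can no longer reliably reject, which is the "more and more persuasive, but ultimately bogus" behavior described above.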
To achieve this, the DeepPrivacy system works by first superimposing a bounding box over the original face. Relevant facial “keypoints” like the location of the eyes, nose and shoulders are noted. The DeepPrivacy model then draws upon Flickr Diverse Faces (FDF), an open-source dataset of over one million faces that was culled from an existing database of over 100 million Creative Commons images from Flickr, selected for their diversity of appearance, facial expressions, unconventional poses and backgrounds. Each face in the FDF dataset comes preloaded with an anonymizing bounding box and annotated “keypoints”, allowing the system to quickly generate a new, unique face that effectively conceals the subject’s actual identity.
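Put together, the overall flow can be sketched end to end as below. Here `detect_faces`, `detect_keypoints` and `generate_face` are hypothetical stand-ins for the face detector, keypoint detector and trained GAN generator; it is the orchestration of the steps described above, not DeepPrivacy's exact code.

```python
def anonymize_image(image, detect_faces, detect_keypoints, generate_face):
    """Replace every detected face in an HxWx3 array with a synthesized, anonymous one."""
    output = image.copy()
    for (x0, y0, x1, y1) in detect_faces(image):                   # 1. bounding boxes
        keypoints = detect_keypoints(image, (x0, y0, x1, y1))      # 2. eyes, nose, shoulders, ...
        context = output.copy()
        context[y0:y1, x0:x1] = 0                                  # 3. drop the original face pixels
        new_face = generate_face(context, keypoints,
                                 (x0, y0, x1, y1))                 # 4. GAN synthesizes a new face
        output[y0:y1, x0:x1] = new_face                            # 5. paste the synthetic face back in
    return output
```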

More examples of photos where the DeepPrivacy algorithm has been used.
As we can see, the method works quite well for swapping faces in static images, while in videos, where the facial information is constantly changing from frame to frame (such as this one uploaded by the team), the faces are still a bit blurry and contain some residual visual artifacts. While there remains some room for improvement, the system nevertheless offers some useful advantages over other similar techniques: its design permits full anonymization of a person’s face, and it still performs relatively well even when confronted with a variety of challenges like odd poses, partially covered faces and varying backgrounds. Of course, such “de-identification” tools will someday have to expand to cover other possible biometric identifiers like items of clothing, hair or cranial shape in order to keep up with ever-evolving technology.
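Part of the reason video output looks blurrier is that each frame is typically anonymized independently, as in the sketch below. It assumes a one-argument `anonymize_image(frame)` callable like the earlier sketch, wired up with concrete detectors and a generator; the OpenCV read/write loop is standard, but the pairing with DeepPrivacy here is illustrative.

```python
import cv2

def anonymize_video(in_path, out_path, anonymize_image):
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Each frame gets an independently generated face, which is what
        # produces the flicker and residual artifacts mentioned above.
        writer.write(anonymize_image(frame))

    cap.release()
    writer.release()
```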
Check out the DeepPrivacy code on GitHub.