
A Deep Dive into Deep Fakes

By Debra Kaufman

Wikipedia defines “deep fake” as a “portmanteau of ‘deep learning’ and ‘fake’”; the term refers to videos in which celebrities and politicians appear to do or say things that are shocking, dangerous and … just plain false. Using artificial intelligence, the creators of deep fakes can superimpose very convincing fake faces, and even bodies, onto source images and videos, and add equally convincing speech and lip-syncing. One example is Jordan Peele’s “public service announcement” about the menace of deep fakes (linked below). At the HPA Tech Retreat, I convened a panel on deep fakes with Department of Defense principal engineer Ed Grogan, HBO head of cybersecurity Marc Zorn and Video Gorillas head of AI Oles Petriv, all of whom agreed that deep fakes pose a risk: not just for the mischief they might cause to national and international politics, among other real-world perils, but for the media and entertainment companies that might inadvertently disseminate them.

Tools to manipulate images have become increasingly democratized, as has compute power. But until very recently, successfully manipulating human faces, lip-syncing and body motion was the sole purview of artificial intelligence researchers and other academics. At the heart of deep fakes are GANs, which stands for generative adversarial networks. Often built with Google’s open-source machine learning framework TensorFlow, GANs are composed of two neural networks that teach each other to produce photoreal images.

The generative network “generates” images and the discriminative network evaluates them. Over time, the generative network spits out better images, and the discriminative network hones its ability to spot errors – all without human intervention.
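To make that adversarial loop concrete, here is a minimal training-step sketch using TensorFlow’s Keras API. It illustrates the generator-versus-discriminator idea rather than any real deep-fake tool; the fully connected layers, the 28×28 image shape and the hyperparameters are assumptions chosen for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # size of the random noise vector (illustrative choice)

# Generator: maps random noise vectors to flattened 28x28 "images".
generator = tf.keras.Sequential([
    layers.Dense(256, activation="relu", input_shape=(LATENT_DIM,)),
    layers.Dense(512, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),  # pixel values in [-1, 1]
])

# Discriminator: scores an image as real (1) or generated (0).
discriminator = tf.keras.Sequential([
    layers.Dense(512, activation="relu", input_shape=(28 * 28,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(1),  # raw logit; sigmoid is applied inside the loss
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    noise = tf.random.normal([tf.shape(real_images)[0], LATENT_DIM])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fakes = generator(noise, training=True)
        real_logits = discriminator(real_images, training=True)
        fake_logits = discriminator(fakes, training=True)
        # The discriminator learns to separate real from fake ...
        d_loss = bce(tf.ones_like(real_logits), real_logits) + \
                 bce(tf.zeros_like(fake_logits), fake_logits)
        # ... while the generator learns to fool the discriminator.
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
```

In effect, the discriminator’s feedback is the only “teacher” the generator ever sees, which is why the output keeps improving without a human labeling individual images.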

Deep fakes first appeared in 2017 in the form of fake pornography on Reddit, contributed by a Reddit user dubbed “Deepfakes”; the Reddit community jumped in to fix bugs and improve the software. By January 2018, the free, open-source FakeApp application had launched – allowing almost anyone to create his or her own deep fakes. More tools followed, including one from UC Berkeley that lets anyone dance like a pro. The Max Planck Institute for Informatics in Germany rolled out a technique that allows one person to take control of another person’s face and make it say anything. FaceSwap, DeepFaceLab, myFakeApp and Lyrebird (for voice synthesis) are all widely available for free.

One expert in the field, who prefers to remain anonymous, noted the dilemma posed by deep fakes. “Professionals will evaluate the picture (or video) and compare it with the story it’s trying to tell,” he said. “They’ll look for other evidence to back up or refute what the image is saying, then draw their conclusions. It’s important to note that the image could be real but the story could be a fake, so we dispute the story. The image could be fake, so we disregard the image. And both could be fake. We look to multiple sources to derive the truth.” But, he added, the general public, “through lack of knowledge, lack of time, or selective evaluation, may accept what’s presented … and these people with incorrect information may still derive a conclusion that is detrimental to society and act on it.” With the growth of fake news, deep fakes and web attacks, he concluded, how do you prove that any given piece of media is real?

Many in the media and technology industries are worried and taking action. Facebook has stated that it has a machine-learning model to detect possibly fake videos and send them to human fact-checkers for further review. But given that company’s troubles with disseminating misinformation and hate speech, the model is evidently imperfect. The Wall Street Journal has spoken out (in print) about the threat, announcing that it has created the WSJ Media Forensics Committee, an internal deepfakes task force led by its Ethics & Standards and Research & Development teams. The news outlet is also training reporters, developing “newsroom guides” and collaborating with Cornell Tech “to identify ways technology can be used to combat this problem.”

Some tool makers are equally concerned. According to Alexandre de Brebisson, cofounder and chief executive of audio synthesis company Lyrebird, his company is “exploring different directions including crypto-watermarking techniques, new communication protocols, as well as developing partnerships with academia to work on security and authentication.”
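Lyrebird hasn’t published the details of its crypto-watermarking work, but the basic authentication idea can be sketched with standard cryptographic primitives: a publisher signs the exact bytes of a clip, and any later manipulation invalidates the tag. The sketch below uses Python’s standard hmac library; the key and file names are hypothetical.

```python
import hmac
import hashlib

SECRET_KEY = b"replace-with-a-managed-signing-key"  # hypothetical key material

def sign_media(path: str) -> str:
    """Compute an HMAC-SHA256 tag over the raw file bytes."""
    with open(path, "rb") as f:
        return hmac.new(SECRET_KEY, f.read(), hashlib.sha256).hexdigest()

def verify_media(path: str, expected_tag: str) -> bool:
    """Re-compute the tag and compare in constant time."""
    return hmac.compare_digest(sign_media(path), expected_tag)

# Usage: the publisher distributes the tag alongside the clip;
# editing even one byte of the file makes verification fail.
# tag = sign_media("announcement.wav")
# assert verify_media("announcement.wav", tag)
```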

According to reports, the Department of Defense, through DARPA, has created tools for identifying deep fakes. Its Media Forensics program, which automates existing forensic tools, is now looking at deep fakes, discovering “subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations,” says DARPA program manager Dr. Matthew Turek. A San Francisco-based startup, Unveiled Labs, has introduced Amber, an iOS app of patented tools for detecting all kinds of fakes. For more information, check out the Bloomberg reporter’s take on deep fakes linked below.
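DARPA hasn’t released its models, but the general approach Turek describes, learning to spot the statistical fingerprints GANs leave behind, is commonly framed as binary classification. As an illustration only, here is a small Keras convolutional classifier of the kind researchers use as a detection baseline; the architecture, input size and training data are all assumptions, not DARPA’s actual tools.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A small convolutional classifier that learns to flag manipulated frames.
detector = tf.keras.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the frame is fake
])

detector.compile(optimizer="adam",
                 loss="binary_crossentropy",
                 metrics=["accuracy"])

# Training would use a labeled corpus of real and GAN-generated frames, e.g.:
# detector.fit(train_frames, train_labels, validation_split=0.1, epochs=10)
```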

https://www.youtube.com/watch?v=cQ54GDm1eL0 – Jordan Peele’s deep fakes PSA

https://www.youtube.com/watch?v=gLoI9hAX9dw&t=32s – Bloomberg’s report on deep fakes

