How to spot deepfake videos – and why you should care

  • Posted on July 12, 2019
  • Estimated reading time 5 minutes

Until a few weeks ago, many people probably weren’t familiar with the term “deepfake.” But that changed when a deepfake video was posted to Facebook-owned Instagram. The video falsely portrayed Facebook CEO Mark Zuckerberg as saying, “Imagine this for a second: One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures.”

Deepfakes entered the public eye in late 2017, when an anonymous Redditor under the name “deepfakes” began uploading videos in which celebrities’ faces were stitched onto the bodies of pornographic film actors. The first examples involved tools that could insert a face into existing footage, frame by frame – a glitchy process then and now.

Can deepfake videos be verified?
As the technology available to the public is relatively immature, it’s still possible to spot an amateur deepfake – although the state of the art is quickly advancing.

It’s also important to note that fake videos don’t require advanced technical trickery to go viral – a video of Nancy Pelosi, the U.S. Speaker of the House, was distributed widely over social media after being doctored with simple distortions, including slowing the playback speed. This wasn’t a deepfake, as the techniques used have been available for decades, but it still fooled many online commentators.

Some pointers to identify deepfakes include:

  • Authentic sources. Is the video coming from a source you’d expect to hear from? For example, a major press conference is more likely to be reported by a large media organization than by a small blogger. This heuristic has a downside, however: it acts as a bar to entry for citizen journalists and other new media sources.
  • Facial morphing. A lot of deepfake tools simply stitch one person’s face over the top of another’s. This quickly pulls you into the “uncanny valley,” where something looks almost human but not quite. If the face shows little emotion or sits at a skewed angle, look closer to see whether it matches the surrounding body – you may have a deepfake (see the sketch after this list).
  • Body shape. Most tools and techniques focus on facial features. This means you can often identify a deepfake through whole-body shots. For example, a recent stunt saw Nicolas Cage inserted into the starring roles of films from “Terminator 2” to “Indiana Jones.” In these shots, the set of the shoulders and the overall body shape don’t match up – another sign of a deepfake.
  • Sound generation. Most deepfake tools focus on generating convincing visuals. While there are voice-cloning services that can replicate another person’s vocal identity, these currently require hours of audio clips, and they still sound robotic. A clip with robotic intonation, a lack of filler words or slightly odd pronunciation might be a deepfake. And if the audio is missing altogether, or doesn’t sync with the lip movements, treat the video with a pinch of salt.
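The facial-morphing pointer above can even be roughed out in code. Below is a minimal, hypothetical sketch (assuming the opencv-python package; the file name and jitter threshold are illustrative, and a moving camera would also trigger it) that flags the frame-to-frame flicker crude face-swap tools often leave behind:

```python
# Minimal sketch: flag frame-to-frame "face jitter" that crude
# face-swap tools often leave behind. Assumes opencv-python is
# installed; the threshold below is illustrative, not validated.
import cv2

def face_jitter_score(video_path, max_frames=300):
    """Return the mean frame-to-frame movement of the largest detected face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    prev_center, deltas = None, []
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue
        # Track the largest face in the frame.
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        center = (x + w / 2, y + h / 2)
        if prev_center is not None:
            deltas.append(abs(center[0] - prev_center[0]) +
                          abs(center[1] - prev_center[1]))
        prev_center = center
    cap.release()
    return sum(deltas) / len(deltas) if deltas else 0.0

# "suspect_clip.mp4" is a placeholder path for illustration.
score = face_jitter_score("suspect_clip.mp4")
print("Possible face-swap flicker" if score > 25 else "No obvious jitter")
```

A real detector would be far more sophisticated, but erratic movement of the detected face region is one cheap signal of the stitching described above.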

Whose job is it to verify content?
As the technology improves, concern has arisen around how deepfakes could be used to undermine democracy and the way in which we trust information. One of the areas being explored is: Whose job is it to police content and ensure that it’s legitimate?

Well, this goes back to a broader issue. We see digital ethics as one of the main business trends that will continue to occupy more time and attention from the C-suite and the board. Much like security was a few years ago, digital ethics is a new area where there isn’t a clear set of patterns and practices to reference.

Our recent research with over 1,200 senior executives across 12 countries found that 81% of respondents admitted they’re not completely confident their organization is adequately prepared to address ethical issues related to new technologies. And 82% agreed that digital ethics is the foundation of successful AI.

The role of organizations: There are two paths businesses can take: You can manage ethical issues, including deepfakes, from a risk and compliance standpoint; or you can focus on ethics by design and bake it into the building of new products and services. We think the second is the better route.

You need to add human intervention anywhere a machine is making a decision, such as with deepfake videos. Companies need to deal with issues like deepfakes if they want to engage a community – including customers and employees – around trustworthy content.
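To make that concrete, here is a hypothetical sketch of the human-in-the-loop pattern (the detector score, thresholds and queue are all assumptions for illustration): the machine acts alone only on clear-cut cases, and everything ambiguous lands with a person.

```python
# Hypothetical human-in-the-loop moderation sketch: a machine score
# decides only the clear-cut cases; ambiguous clips go to a reviewer.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    items: list = field(default_factory=list)

    def submit(self, clip_id: str, score: float) -> None:
        self.items.append((clip_id, score))

def route_clip(clip_id: str, fake_score: float, queue: ReviewQueue) -> str:
    """fake_score in [0, 1] from some detector; thresholds are illustrative."""
    if fake_score >= 0.95:        # confidently fake: block automatically
        return "blocked"
    if fake_score <= 0.05:        # confidently authentic: publish
        return "published"
    queue.submit(clip_id, fake_score)   # everything else: a human decides
    return "pending_human_review"

queue = ReviewQueue()
print(route_clip("clip-001", 0.97, queue))  # blocked
print(route_clip("clip-002", 0.50, queue))  # pending_human_review
```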

The role of government: Government bodies in the U.S. are in a rush to figure out how to detect faked content ahead of the 2020 elections, and several startups are angling to become arbiters of truth as the campaigns get underway. In Congress, politicians have called for legislation banning “malicious use” of faked content. And the European Union has launched an “Action Plan Against Disinformation,” which focuses on raising awareness, independent fact-checking and transparency.

The role of technology: But policing deepfakes won’t be easy. The industry perspective is that, as it improves, the technology will eventually make fakes hard to distinguish from reality: costs will fall, or a better-trained model will be released, enabling some savvy person to create a powerful online tool.

The main solution seems to be positive authentication, through the use of contextual clues like those outlined above. Another approach could be digital markers embedded throughout a broadcast to confirm that a trusted source vouches for the footage. For example, on a television broadcast, additional metadata would assert that the video is being transmitted live.
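As a minimal sketch of how such a marker could work (assuming the third-party cryptography package and a broadcaster-held Ed25519 key pair; this is an illustration, not an industry standard), a trusted source signs a hash of the footage so anyone can later check that it hasn’t been altered:

```python
# Illustrative sketch of a broadcast "digital marker": the broadcaster
# signs a hash of the video bytes; viewers verify with the public key.
# Assumes the third-party `cryptography` package; not an industry standard.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Broadcaster side: sign the footage digest once, at transmission time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# "broadcast_clip.mp4" is a placeholder path for illustration.
video_bytes = open("broadcast_clip.mp4", "rb").read()
digest = hashlib.sha256(video_bytes).digest()
signature = private_key.sign(digest)

# Viewer side: recompute the digest and check it against the signature.
try:
    public_key.verify(signature, hashlib.sha256(video_bytes).digest())
    print("Marker valid: footage matches what the trusted source signed.")
except InvalidSignature:
    print("Marker invalid: footage was altered after signing.")
```

In practice the public key would be distributed through a trusted channel, and the metadata (timestamp, channel, live flag) would be signed along with the footage.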

Fact-checking is another approach, applied after a clip has been created and broadcast. Facebook and Instagram already fact-check, labeling fake news and hoaxes and burying them in their news feeds. Posting links to such flagged stories prompts a red alert stating that the article has been disputed by fact-checking organizations. Major technology platforms, including Twitter and Google, have also formally committed to a European Union code of practice, which focuses on refining monitoring techniques and responding to disinformation.

How effective is content verification?
Unfortunately, studies show that tagging fake news and down-promoting stories and videos doesn’t necessarily work. A study reported by POLITICO found that tagging false news stories as “disputed by third party fact-checkers” has only a small impact on whether readers perceive their headlines as true. Overall, the existence of “disputed” tags made participants just 3.7 percentage points more likely to correctly judge headlines as false, the study said.

Why? Because ideological fake news lands in the social media feeds of audiences who are already primed to believe whatever story confirms their worldview.

Ultimately there is no perfect solution. The way forward seems to be a combination of government, technology and policy approaches.

As curating problematic content at scale becomes more difficult, and as the very nature of internet communication allows that content to be amplified more than ever before, authenticity will become a selling point.

This means improved public awareness of deepfakes. Some of this starts in schools; for example, many colleges teach critical thinking and how to discern and evaluate arguments. It also means legal remedies, such as the ability to challenge falsified information and to regulate the use of faked video. Finally, it will require technology partnerships that enable media organizations to positively verify video as it is broadcast, and reactive fact-checking of deepfake videos as they are identified.

Molly Barley

Great information, Chris. Thanks for sharing!

July 26, 2019

Ken Ramoutar

Very informative blog.

July 25, 2019

Madhushree Bharati

AWESOME ARTICLE, KEEP UP THE GOOD WORK.

July 16, 2019
