
Facebook's fact-checkers train AI to detect "deep fake" videos

Facebook is facing an uphill battle automating the detection of misinformation in photos and videos.
Written by Liam Tung, Contributing Writer

So-called "deep fakes" are now a major concern for US lawmakers worried that AI-manipulated videos depicting people doing or saying things they never did could become a national security threat.

Following last week's hearing, where Facebook COO Sheryl Sandberg was asked how Facebook would warn users about deep fake videos, the company has announced it is expanding its third-party fact-checking program beyond articles to photos and videos.

All 27 of Facebook's fact-checking partners in 17 countries will be able to contribute to reviews. US fact-checking partners include the Associated Press, factcheck.org, Politifact, Snopes, and the conservative magazine The Weekly Standard.

Facebook says it has built a machine-learning model that detects potentially bogus photos and videos and sends them to its fact-checkers for review. Third-party fact-checking partners can then use visual verification techniques, including reverse image searches and image metadata analysis, to review the content.

"Fact-checkers are able to assess the truth or falsity of a photo or video by combining these skills with other journalistic practices, like using research from experts, academics or government agencies," said Facebook product manager Antonia Woodford.


Facebook intends to use its collection of reviewer ratings of photos and videos to improve the accuracy of its machine-learning model in detecting misinformation in these media formats.

Facebook has defined three types of misinformation in photos and videos: manipulated or fabricated content; content presented out of context; and false claims in text or audio.

Facebook offers a high-level overview of the difficulties of identifying false information in images and video compared with text, along with some of the techniques it's using to overcome them. But the overall impression is that Facebook isn't close to having an automated system for detecting misinformation in photos and videos at scale.

Currently, it's using optical character recognition (OCR) to extract text from photos, such as a bogus headline overlaid on an image, in order to compare that text against headlines from fact-checkers' articles. It's also developing ways to detect whether a photo or video has been manipulated. Separately, it's using audio transcription to check whether text extracted from a video's audio track matches claims that fact-checkers have previously debunked.

"At the moment, we're more advanced with using OCR on photos than we are with using audio transcription on videos," said Facebook product manager Tessa Lyons.

As with articles, Facebook will focus on identifying duplicates of false photos and videos once a fact-checker has rated the original as false.


Lyons said Facebook is "pretty good" at finding exact duplicates of photos, but slightly manipulated images are much harder for it to detect automatically.

"We need to continue to invest in technology that will help us identify very near duplicates that have been changed in small ways," said Lyons.

Detecting when something has been presented out of context is another major challenge, according to Lyons.

"Understanding if something has been taken out of context is an area we're investing in but have a lot of work left to do, because you need to understand the original context of the media, the context in which it's being presented, and whether there's a discrepancy between the two," she noted.

The impact of misinformation in photo and video content also differs by country. Facebook has found that in the US most people report seeing misinformation in articles, whereas in Indonesia people more often report seeing misleading information in photos.

"In countries where the media ecosystem is less developed or literacy rates are lower, people might be more likely to see a false headline on a photo, or see a manipulated photo, and interpret it as news, whereas in countries with robust news ecosystems, the concept of "news" is more tied to articles," said Lyons.
