Tuesday, January 07, 2020
Critics say policy does not cover ‘shallow fakes’ – videos made using conventional editing tools
Facebook has announced a new policy banning AI-manipulated “deepfake” videos that are likely to mislead viewers into thinking someone “said words that they did not actually say”, as the social network prepares for the 2020 US election.
But the policy explicitly covers only misinformation produced using AI, meaning “shallow fakes” – videos made using conventional editing tools – though frequently just as misleading, are still allowed on the platform.
The new policy, announced on Monday by Monika Bickert, Facebook’s head of global policy management, will result in the removal of misleading video from Facebook and Instagram if it meets two criteria:
“It has been edited or synthesised … in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say.”
“It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
To date, there have been no major examples of content that would break such rules. Some news organisations, including the BBC, the New York Times and BuzzFeed, have made their own “deepfake” videos, ostensibly to spread awareness about the techniques. Those videos, while of varying quality, have all contained clear statements that they are fake.
The most damaging examples of manipulated media in recent years have tended to be created using simple video-editing tools. During the UK election, the Conservative party came under fire for a video edited to make it appear as though the Labour MP Keir Starmer had no answer to a question about Brexit. Facebook at the time confirmed the video satisfied its policies on misinformation, and since there was no AI involved in its creation, it would still be allowed today.
In the US, a doctored video that seemed to show the House speaker, Nancy Pelosi, slurring her way through a speech was similarly allowed by Facebook. The video, spread by Trump supporters including Rudy Giuliani, was edited, but not using any technique more complex than slowing down the raw footage and pitch-shifting the audio.
The removal policy is just one branch of Facebook’s attempt to fight misinformation, Bickert argued. “Videos that don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages,” she said. “If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in news feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.
“This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
The company also has a separate policy that allows any content that breaks its other rules to remain online if it is judged “newsworthy” – and all content posted by politicians is automatically treated as such.
“If someone makes a statement or shares a post which breaks our community standards we will still allow it on our platform if we believe the public interest in seeing it outweighs the risk of harm,” said Nick Clegg, Facebook’s vice-president of global affairs and communications, when he introduced the policy last September. “From now on we will treat speech from politicians as newsworthy content that should, as a general rule, be seen and heard.” That policy means that even an AI-created deepfake video expressly intended to mislead could still remain on the social network, if it was posted by a politician.
Facebook did not give a reason as to why it limited its policy exclusively to videos manipulated using AI tools, but it is likely that the company wanted to avoid putting itself in a situation where it had to make subjective decisions about intent or truth. Facebook has struggled for a number of years to settle on a policy for deepfakes, publicly acknowledging the potential damage such videos could inflict, while also standing by a prior decision – thought to come directly from the founder, Mark Zuckerberg – to avoid ruling on whether or not content on the site is true or false.