With other social media platforms exploring how they can use manipulated media, including deepfakes, for new features, Facebook has announced the first iteration of its policy to stop the spread of misleading fake videos, as part of its broader effort to pre-empt the potential rise of problematic deepfake content.
Facebook says that it’s been meeting with experts in the field to formulate its policy, including people with “technical, policy, media, legal, civic and academic backgrounds”.
As per Facebook:
“As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes. Going forward, we will remove misleading manipulated media if it meets the following criteria:
- It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
- It is the product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”
Facebook says that its new policies do not extend to content which is parody or satire, “or video that has been edited solely to omit or change the order of words”. The latter may seem somewhat problematic, but this type of editing is already covered in Facebook’s existing rules – though Facebook does also note that:
“Videos which don’t meet these standards for removal are still eligible for review by one of our independent third-party fact-checkers, which include over 50 partners worldwide fact-checking in over 40 languages. If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”
So why doesn’t Facebook just remove these as well? If Facebook has the capacity to identify content as fake, and it’s reported as a violation, Facebook could simply remove all of it, deepfake or not, and eliminate it as a problem.
But Facebook says that such an approach could be counterproductive, because those same images and videos would still be available elsewhere online.
“This approach is critical to our strategy and one we heard specifically from our conversations with experts. If we simply removed all manipulated videos flagged by fact-checkers as false, the videos would still be available elsewhere on the internet or social media ecosystem. By leaving them up and labelling them as false, we’re providing people with important information and context.”
So Facebook is framing its decision not to remove some manipulated content as a civic duty, similar to its approach to political ads, which it won’t subject to fact-checking because:
“People should be able to see for themselves what politicians are saying. And if content is newsworthy, we also won’t take it down even if it would otherwise conflict with many of our standards.”
socialmediatoday.com