After calling for public feedback on its deepfakes policy last month, Twitter has now released its draft rules for handling such content on its platform, addressing broader concerns around digitally manipulated media.
As explained by Twitter:
"When you come to Twitter to see what’s happening in the world, we want you to have context about the content you’re seeing and engaging with. Deliberate attempts to mislead or confuse people through manipulated media undermine the integrity of the conversation."
Deepfakes have become a major focus for online providers in recent times, with both Google and Facebook also launching research initiatives to help them detect and act on such content.
For Twitter's part, it's planning to implement a new set of processes that will:
- Place a notice next to Tweets that share synthetic or manipulated media;
- Warn people before they share or like Tweets with synthetic or manipulated media; or
- Add a link – for example, to a news article or Twitter Moment – so that people can read more about why various sources believe the media is synthetic or manipulated.
Of course, these measures depend on detection, which is a separate element of the research, but Twitter's looking to get ahead of the game by ensuring that it has clear policies in place for dealing with deepfakes before they become a bigger concern.
Right now, deepfakes - digitally altered videos or images which appear to show a person doing or saying something they didn't - seem more like a novelty, an interesting experiment, than a major privacy or security issue. But they will become a more significant concern in this respect - for example, check out what people can do with a simple app which places their likeness into a movie or TV scene:
In case you haven't heard, #ZAO is a Chinese app which completely blew up since Friday. Best application of 'Deepfake'-style AI facial replacement I've ever seen.
Here's an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail) pic.twitter.com/1RpnJJ3wgT
— Allan Xia (@AllanXia) September 1, 2019
Beyond that, consider the level of sophistication that deepfakes can already achieve, with seamless visual integration into video content.
You can imagine, then, how the same techniques could be applied elsewhere, even to official announcements from politicians, convincing enough people that the footage is legitimate.
But surely such fakes are easy to spot, right? In the well-known Obama deepfake, it's clearly not Barack Obama speaking. We can see that, and we'll be able to see the same in future fakes, avoiding potential manipulation. Right?
Given the issues we've faced with "fake news" in recent times, and the use of old or out-of-context video footage and images to provoke emotional responses, this is a major area of concern.
For example, this video was doing the rounds on Facebook last year, purporting to show a Muslim man defacing a Christian sculpture in Italy:

The video amassed millions of views, and was re-shared by many Facebook users, most of whom added their own hate-filled comments.
Except, this isn't a video of a Muslim refugee ruining a religious statue in Italy. The video actually shows an incident that happened in Algeria in 2017: the man attacked the statue on the Ain El Fouara fountain because it depicts a naked woman, which he considered indecent. The same statue has been vandalized several times for the same reason, as Algeria is a majority-Muslim nation, and many there see the depiction as distasteful.
But the truth isn't clear from the footage alone, and research has shown that people tend to share content which supports their existing beliefs, and are therefore less likely to fact-check it.
This already happens now. You can imagine that deepfakes will only 'deepen' such problems.
While it doesn't seem like a major issue right now, it's clear that deepfakes will become a problem, which is why it's good to see Twitter, along with Facebook and Google, moving to get ahead of it now. Because in a couple of years' time, it won't be a video from 2017 being circulated to incite anger, it'll be statements from politicians that they never made, but which might just convince enough voters to sway the balance.
Imagine, for example, that a political activist group produced a fairly convincing deepfake video of a candidate making a concerning statement, and released it on polling day, limiting the window for clarification before people cast their votes.
Again, this is a major concern, and a key area for platforms to address.
Twitter says that it will continue to revise its deepfakes policy in the coming months.