In yet another indicator of the potential rise of deepfakes as a weapon of misinformation, Reddit has this week updated its guidelines to outlaw the use of deceptive impersonation within its app, with specific mention of deepfake content.
As per Reddit:
"We’ve been doing significant work on site integrity operations as we move into 2020 to ensure that we have the appropriate rules and processes in place to handle bad actors who are trying to manipulate Reddit, particularly around issues of great public significance, like elections. To this end, we thought it was time to update our policy on impersonation to better cover some of the use cases that we have been seeing and actioning under this rule already, as well as guard against cases we might see in the future."
That last note is most relevant here. Right now, Facebook, Google and Twitter are each, individually, undertaking their own research into how to combat the potential rise of manipulative deepfakes, and earlier this week, Facebook announced its new policy on manipulated media, also banning "misleading manipulated videos".
While the misuse of deepfakes hasn't become a major issue as yet, there does seem to be a feeling within the industry that this is close to becoming a significant problem, and with the US Election looming, it's an area that they're all looking to get ahead of as a potential concern.
TikTok has also released new regulations outlawing "manipulated content meant to cause harm".
But it's not a problem, as such, just yet.
Reddit notes that:
"Impersonation is actually one of the rarest report classes we receive... [but] we also wanted to hedge against things that we haven’t seen much of to date, but could see in the future, such as malicious deepfakes of politicians, for example, or other, lower-tech forged or manipulated content that misleads."
Now at 430 million users, Reddit could become a platform of focus for those looking to influence the outcome of the 2020 election, and while its users are considered to be, in general, more skeptical of such claims - and thus, less open to manipulation at scale - there are dedicated subreddits on basically every subject. It's not hard to imagine political operatives looking to grow their messaging via partisan Reddit groups.
But as noted, more relevant here is that yet another social app is taking steps to prepare for the rise of deepfakes. While there hasn't, seemingly, been a surge in the use of deepfakes for this purpose, the increased focus on manipulated media is not coming from nowhere - there's an underlying reason why the platforms are preparing for the next wave. The indicators suggest that we, too, need to be thinking about how to remain alert and wary of such content, in order to avoid large-scale misinformation campaigns fueled by fake videos.
It's not happening yet, but something to watch for moving forward.
And also, Reddit notes that its new rules don't apply to other uses of manipulated media:
"This doesn’t apply to all deepfake or manipulated content - just that which is actually misleading in a malicious way. Because believe you me, we like seeing Nic Cage in unexpected places just as much as you do."
Nic Cage good, political misrepresentation bad. The tone around such policies is fairly light-hearted at the moment, but deepfakes definitely look set to become a much larger concern.