With various politically affiliated groups already using digital platforms to manipulate and influence voters, the rise of deepfakes is a serious concern, one that could pose a major threat to democracy as we know it.
That's why all the major platforms are working to develop systems to detect digitally altered videos, in order to catch them before they can spread. Twitter launched its 'Manipulated Media' policy back in February for this purpose, while Facebook has been looking at ways to advance its own detection models. In line with this, back in September, The Social Network issued a challenge to academic teams to come up with better deepfake detection models which could be used to weed out these videos.
And this week, Facebook has shared the results of its first Deepfake Detection Challenge.
As explained by Facebook:
"The DFDC launched last December, and 2,114 participants submitted more than 35,000 models to the competition. Now that the challenge has concluded, we are sharing details on the results and working with the winners to help them release code for the top-performing detection models."
This is a key point - working with the winning teams, Facebook is looking to share the code for each of the top-performing models, and it's also planning to open source the datasets used, in order to help advance deepfake research more broadly.
So how good were the winning models?
The best-performing detection models, out of the thousands submitted, achieved detection rates above 82% - which is impressive, but that figure was based on the public training set, which participants could study and tune their models against.
In order to determine the true accuracy of these systems, Facebook also tested the models on a 'black box' dataset of 10,000 video clips which the participants had not previously seen and had no access to before submitting their code. That altered the final results significantly.
"The highest-performing entrant was a model entered by Selim Seferbekov. It achieved an average precision of 65.18% against the black box data set. Using the public data set, this model had been ranked fourth. Similarly, the other winning models, which were second through fifth when tested against the black box environment, also ranked lower on the public leaderboard. (They were 37th, 6th, 10th and 17th, respectively.)"
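For context on that 65.18% figure, "average precision" rewards a model for ranking actual deepfakes above genuine clips, rather than just counting right and wrong calls. The sketch below is purely illustrative - it is not Facebook's evaluation code, and the labels and scores are made up - but it shows how the metric is typically computed for a binary detection task:

```python
def average_precision(labels, scores):
    """Illustrative average precision: labels are 1 (deepfake) or 0 (real),
    scores are the model's confidence that each clip is a deepfake."""
    # Rank clips from most to least suspicious
    ranked = [label for _, label in
              sorted(zip(scores, labels), key=lambda pair: -pair[0])]
    hits, running_sum = 0, 0.0
    for rank, label in enumerate(ranked, start=1):
        if label == 1:
            hits += 1
            running_sum += hits / rank  # precision at each correctly flagged fake
    return running_sum / hits if hits else 0.0

# Hypothetical example: two fakes among four clips, imperfectly ranked
print(average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.6]))  # ~0.833
```

A model that ranks every deepfake above every real clip scores 1.0; mixing fakes lower into the ranking pulls the score down, which is why a 65.18% result against unseen footage indicates substantial room for improvement.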
As you can see, the results changed significantly when the models were applied to videos they hadn't been able to train on, which suggests there's still a long way to go in building a truly accurate deepfake detection system. Still, a 65% detection rate is significant, and would likely help to flag many potential concerns within the posting process.
Ideally, however, Facebook can get this number higher, and develop a better system for detecting digitally altered videos before they're shared. Because, as we've seen, once a video is uploaded online, a later determination that it's fake or edited often comes too late to stop the damage being caused.
Already, within this US Presidential Election cycle, we've seen several examples of videos being edited or changed in order to emphasize certain elements. There was the controversial Nancy Pelosi video, in which Pelosi appeared to be slurring her words, the Michael Bloomberg video where he pressed other candidates on their business credentials during a debate, and the Joe Biden clip which had been edited to show Biden saying that people should vote for Donald Trump.
These videos were not advanced deepfakes - they all used fairly basic editing techniques. But each of them sparked significant debate, and even after they were proven to be edited, the debates carried on. You can only imagine the damage that a convincing deepfake could do within that same process.
And we are indeed likely to find out just how much damage deepfakes can do. As the 2020 US Election race heats up, it seems increasingly likely that, at some stage, a deepfake video of some kind will come into play.
How will that change the race? How will it alter voter behavior? Can digital platforms detect and eliminate such content before it takes hold?
It could just be that, in the wake of the election, a deepfake video becomes the central focus, much like Cambridge Analytica became the target after 2016. Facebook's working to avoid that outcome, and it could end up being a crucial effort.
You can read more about Facebook's Deepfake Detection Challenge here.