Facebook has announced a new research partnership that will examine the impact of Facebook and Instagram on key political attitudes and behaviors in the lead-up to the 2020 US Presidential election.
As per Facebook:
"Building on the initiative we launched in 2018, [the project] will examine the impact of how people interact with our products, including content shared in News Feed and across Instagram, and the role of features like content ranking systems."
That 2018 project also sought to examine the role that social platforms now play in political discourse, specifically by looking at key sharing trends around the 2016 Election. The project was announced in the wake of the Cambridge Analytica controversy, with academic Gary King proposing a better way for Facebook to share its data without compromising user privacy. But the project ran into several roadblocks, and thus far has not been able to establish a way to enable improved data analysis without encroaching on privacy.
Which is where this new project comes in.
As per Facebook:
"We are asking for the explicit, informed consent from those who opt to be part of research that analyzes individual level data. This means research participants will confirm both the use of their data and that they understand how and why their data will be used. Additionally, as part of our studies, we will also analyze aggregated user data on Facebook and Instagram to help us understand patterns. In addition to this, the studies – and our consent language – were reviewed and approved by an Institutional Review Board (IRB) to ensure they adhere to high ethical standards."
Really, studying the 2020 Election seems like a defeat in itself - why not study the 2016 election trends and glean the insights before we head into the next election cycle? But the complications around user privacy, made even more difficult by the changes Facebook implemented after Cambridge Analytica, are why the company essentially has to start over. With the permission of participants, it can now provide full insights and data on elements like broad sharing trends and shifts, establishing, for example, which groups are more likely to share misinformation.
That could provide key insight into how, exactly, Facebook exacerbates or diminishes such trends, which can then lead to more informed policy approaches and discussion around how they can be managed in future.
If 'managed' is the right term in this context.
It could be a hugely beneficial project, and the approach, in contrast to the 2018 model, should lead to more insight, and more value, from the information provided. Facebook has committed to a range of independence and transparency principles around the study, so we should also have full, unedited data to go on. That could be great for future elections - but it also means that, in 2020, we'll have to live with the influence of Facebook as it stands.
But hopefully, as Facebook notes, its increased efforts on election security will lessen any negative impacts this time around too:
"There are now three times as many people working on safety and security issues, more than 35,000 in total, and we work closely with government and law enforcement. Facebook has helped fight interference in more than 200 elections since 2017 and reduced fake news on its platform by more than 50%, according to independent studies."
So while this potentially valuable research will come after the US Election (Facebook expects the initial papers to be made available in mid-2021, at the earliest), Facebook has implemented a range of other measures that could, and should, help.
It still won't fact-check political ads, and it still leans towards allowing politicians to share what they want, as opposed to policing what they say. But hopefully, through this research, we'll get more context as to how those rulings impact political behavior, which could, eventually, lead to a more comprehensive and balanced approach moving forward.
You can read more about the project here.