Last week, Facebook CEO Mark Zuckerberg gave an impassioned speech about his company's stance on free speech, with a particular focus on its decision not to fact-check political ads. That decision, as many have noted, could give politicians a free pass to campaign on outright lies without being reined in - but then again, reining them in would require Facebook to make some very tough calls on what crosses the line in political speech.
Facebook has chosen not to do that, instead hoping that people will judge for themselves what's true and what isn't in political ads.
As Zuckerberg noted in his speech:
"I believe in giving people a voice because, at the end of the day, I believe in people."
Yet, on the other hand, Zuck and Co. fully acknowledge that people can be, and have been, manipulated by Facebook-hosted content. Which is why, this week, Facebook has outlined a range of new measures designed to better protect users from misinformation campaigns and voter manipulation, adding to the already extensive set of processes it has put in place for the same purpose.
So, Facebook trusts people to make an informed decision on who to vote for, even if those candidates campaign on clear lies, but it also wants to ensure that people aren't manipulated by the same kinds of falsehoods when they're not shared directly by politicians.
Sounds confusing? It kind of is.
Here's what Facebook has announced:
Improved Detection Measures and Policies
First off, Facebook is updating its inauthentic behavior policy to better clarify how it deals with deceptive practices - "whether foreign or domestic, state or non-state".
This comes after Facebook detected two more clusters of accounts working together to share misinformation, this time originating from Iran and Russia - and most interestingly, the Russian-originated group appears to have stemmed from Russia's Internet Research Agency, a key player in voter manipulation efforts on Facebook back in 2016.
As explained by Facebook:
"This campaign showed some links to the Internet Research Agency (IRA) and had the hallmarks of a well-resourced operation that took consistent operational security steps to conceal their identity and location. They primarily reused content shared across internet services by others, including screenshots of social media posts by news organizations and public figures. A small portion of these accounts also repurposed and modified old memes originally posted by the IRA."
As with the last US election campaign, the memes shared by these Pages sought to span both sides of the US political divide, with content relating to environmental issues, racial tensions, LGBTQ issues, conservatism and liberalism.

By building a following of Facebook users who support different sides of the political spectrum, the IRA, as we saw in 2016, can then seek to stoke further division by sharing increasingly incendiary content, pushing voters one way or another. It can also cross-post those updates into the opposing groups, further solidifying support on either side.
It's also worth noting that some of the accounts presented themselves as locals in swing states.
It's good to see that Facebook is improving its detection measures on this front, and stopping such content before it has a chance to spread more widely, but it's also concerning that the IRA, and others, are already ramping up their efforts to influence the US election.
Ideally, you'd be able to show examples like these to your friends and family and convince them to stop sharing political content on Facebook altogether, as it's only going to become more difficult to separate fact from fiction. But given that around 68% of US adults now get at least some of their news coverage from social media - with the majority using Facebook as a source - that's probably not going to happen.
As such, we're reliant on Facebook's measures in this sense, which is why improvements like this are so important.
Facebook Protect
Facebook is also launching 'Facebook Protect', a new process which adds an additional security layer to the accounts of elected officials, candidates and their staff.

As explained by Facebook:
"Beginning today, Page admins can enroll their organization’s Facebook and Instagram accounts in Facebook Protect and invite members of their organization to participate in the program as well. Participants will be required to turn on two-factor authentication, and their accounts will be monitored for hacking, such as login attempts from unusual locations or unverified devices. And, if we discover an attack against one account, we can review and protect other accounts affiliated with that same organization that are enrolled in our program."
While direct hacking of this type hasn't been a source of major issues in western politics, various other nations have seen significant hacking incidents related to political groups. And it can quickly become a major issue, particularly when you have so many group members logging in to Facebook Pages and the like.
Political groups can learn more about Facebook Protect and enroll here.
Adding New Information to Page Transparency Tools
Facebook is also adding another element to its Page Transparency Tools which will enable users to find out which organizations manage a given Facebook Page.

As you can see here, the new section will show who's behind any given Page, which may help users better understand why, exactly, that Page may be sharing certain types of information. This could be particularly beneficial in instances where a Page is sharing politically biased posts, and you want to explain to someone what its motivation might be, beyond sheer passion for the cause.
But it won't be available for all Pages straight away:
"Initially, this information will only appear on Pages with large US audiences that have gone through Facebook’s business verification. In addition, Pages that have gone through the new authorization process to run ads about social issues, elections or politics in the US will also have this tab. And starting in January, these advertisers will be required to show their Confirmed Page Owner.
Facebook also notes that if it finds that a Page is concealing its ownership in order to mislead people, it'll require that Page to complete the verification process and show more information - or be taken down and banned from the platform.
Labeling Content from State-Controlled Media
Facebook will also start labeling media outlets which are wholly or partially under the editorial control of their government as state-controlled media.
State-controlled media can be problematic for information flow, as those outlets are able to pick and choose which elements of each story are reported, and which angles their coverage takes.
"We will hold these Pages to a higher standard of transparency because they combine the opinion-making influence of a media organization with the strategic backing of a state."
Facebook says that it's developed its own definition and standards for state-controlled media organizations "with input from more than 40 experts around the world specializing in media, governance, human rights and development".
This update will not ingratiate Facebook with Chinese officials - though as Zuckerberg noted in his speech last week, Facebook has largely given up on ever making it into China anyway.
For a long time, Facebook pushed hard to get into the Chinese market and connect with the nation's 1.4 billion people, with Mark Zuckerberg even giving a speech in Mandarin and holding various meetings with Chinese officials in an effort to build connections.
But, in his speech last week, Zuckerberg noted that:
"I wanted our services in China because I believe in connecting the whole world and I thought we might help create a more open society. I worked hard to make this happen. But we could never come to agreement on what it would take for us to operate there, and they never let us in. And now we have more freedom to speak out and stand up for the values we believe in and fight for free expression around the world."
No doubt Chinese officials weren't particularly happy with that characterization either, and it's pretty clear now that Facebook and China will never get along.
Facebook says that it'll update its listing of state-controlled media "on a rolling basis" beginning in November.
"In early 2020, we plan to expand our labeling to specific posts and apply these labels on Instagram as well. For any organization that believes we have applied the label in error, there will be an appeals process."
New Political Ads Reporting Options
Facebook is also adding to its political ad reporting tools, with additional spend detail, insights into where each ad has run (e.g. on Facebook, Instagram) and a handy new US presidential candidate spend tracker, so that people can see how much each candidate is spending.

Facebook's also working with researchers to develop a new option which will enable them to quickly download the entire Ad Library, pull daily snapshots and track day-to-day changes.
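For context, Facebook already exposes an Ad Library API (the ads_archive Graph API endpoint) that researchers can query programmatically, and the bulk download option described above is aimed at making that kind of access faster and more complete. Below is a minimal, illustrative sketch of such a query in Python - the API version, field names and placeholder access token are assumptions, so check the current Ad Library API documentation before relying on them.

```python
import requests

# Illustrative sketch: query Facebook's Ad Library API (the ads_archive Graph API
# endpoint) for US political and issue ads. The API version, field names and the
# placeholder access token below are assumptions - verify against the current docs.
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # obtaining one requires identity confirmation

params = {
    "access_token": ACCESS_TOKEN,
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "ad_reached_countries": "['US']",
    "search_terms": "election",
    "fields": "page_name,funding_entity,ad_delivery_start_time,spend",
    "limit": 100,
}

response = requests.get("https://graph.facebook.com/v5.0/ads_archive", params=params)
response.raise_for_status()

# Each ad's disclosed spend comes back as a range (lower_bound / upper_bound)
for ad in response.json().get("data", []):
    spend = ad.get("spend", {})
    print(f"{ad.get('page_name')}: ${spend.get('lower_bound')} - ${spend.get('upper_bound')}")
```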
The increased reporting will no doubt become a key feature in political campaign media in the lead-up to the election - how much the candidates will care that people know what they're spending is another thing altogether, but it will provide some interesting perspective either way.
More Prominent Labels on False Information
Facebook's also making its false information labels on content more prominent, in an effort to slow the spread of flagged posts.

As explained by Facebook:
"The labels above will be shown on top of false and partly false photos and videos, including on top of Stories content on Instagram, and will link out to the assessment from the fact-checker."
So, to clarify: while political ads will not be fact-checked, Facebook will make the fact-checks applied to other content - content not from politicians - harder to ignore. It seems like a contradictory approach, but there is a logic to Facebook's method, whether you agree with it or not.
In addition to these new labels, Facebook's also adding a new pop-up which will appear when people attempt to share these posts on Instagram, similar to the existing prompt on Facebook.

The new pop-up format includes additional information, and may help to provide more context, and again, slow the spread of false information by prompting users to consider what, exactly, it is that they're re-distributing.
Banning Ads Which Aim to Stop People from Voting
Facebook's also implementing a new, total ban on ads which aim to stop people from heading to the polls.
"In advance of the US 2020 elections, we’re implementing additional policies and expanding our technical capabilities on Facebook and Instagram to protect the integrity of the election. Following up on a commitment we made in the civil rights audit report released in June, we have now implemented our policy banning paid advertising that suggests voting is useless or meaningless, or advises people not to vote."
Social media has become a key medium for this type of manipulation - ahead of the 2018 midterms, for example, a rumor spread through social networks suggesting that federal immigration agents might be stationed at polling places across the country to check voters' citizenship status.
That prompted this official response from the agency:
ICE does not patrol or conduct enforcement operations at polling locations. Any flyers or advertisements claiming otherwise are false.
— ICE (@ICEgov) October 24, 2018
Facebook actually updated its policies to combat this back in 2018, with a specific focus on content which misrepresented how people could participate in the vote. This new measure gives Facebook further capacity to crack down on such efforts.
Digital Literacy Education
And lastly, Facebook has committed $2 million in support of digital literacy projects, which aim to empower people to determine what is, and is not, trustworthy information online.
This is a key area of concern - a recent study by Pew Research highlighted significant gaps in digital literacy, including the fact that 71% of people are not aware that Facebook owns Instagram and WhatsApp. That's a more industry-specific factoid, not related to news content as such, but it underlines the knowledge gap, and the broader lack of understanding of the motivations behind the sources sharing information online.
Facebook has actually been working to provide digital literacy education for some time - last November, The Social Network announced a range of new digital education initiatives to help users better understand the digital realm, and how they participate within it.
"These projects range from training programs to help ensure the largest Instagram accounts have the resources they need to reduce the spread of misinformation, to expanding a pilot program that brings together senior citizens and high school students to learn about online safety and media literacy, to public events in local venues like bookstores, community centers and libraries in cities across the country. We’re also supporting a series of training events focused on critical thinking among first-time voters."
Make no mistake, digital literacy has become a key skill, and something that should be added to the curriculum in early childhood education across the board. As more of our interactions and media inputs shift online, it's becoming more and more important to ensure a clearer understanding of how such systems work, including the motivations behind them, how to fact-check content, and how to protect your own information.
As you can see, there's a heap to take in here - Facebook is going all out to show that, while it won't be fact-checking political ads, it is doing lots of other things to protect users from manipulation. In this sense, allowing clear lies in political ads does seem to go against the grain, but Facebook's stance is that it cannot be the referee in political debate. The people need to decide, based on the information presented to them by the candidates.
How that'll work out in practice remains to be seen - but it's interesting, in light of all of these changes, to consider just how far we've come from Zuckerberg's initial response to the suggestion that fake news content shared on Facebook could have influenced the outcome of the 2016 US Presidential election.
As per Zuckerberg (in November 2016):
"Personally I think the idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way - I think is a pretty crazy idea. Voters make decisions based on their lived experience.”
In many ways, this was a massively flawed assessment, yet in others, it still feels like Zuck holds onto that belief just a little more than he should.