As debate around what should and should not be allowed on social media platforms ramps up ahead of the 2020 US Presidential Election, a new report has provided some insight into how Americans, in general, feel about free speech online, and who, ultimately, should be in charge of policing it.
The report, conducted by Gallup and The Knight Foundation, incorporates responses from over 3,000 survey participants - though it is worth noting that the surveys were conducted in December 2019, before the latest back and forth between US President Donald Trump and social platforms over Section 230 protections.
That may actually make the findings more indicative, as the earlier timing would reduce the heightened emotional response around the issue. Here's what the responses indicate, across several key elements.
First off, most Americans support free speech on social platforms, even if they don't agree with the viewpoints being expressed.
As per the report:
"Nearly two-thirds of Americans (65%) favor allowing people to express their views on social media, including views that are offensive, over restricting what people can say on social media based on societal norms or standards of what is fair or appropriate (35%)."
So most believe that people should be free to say what they want. Though even within that, there are limits.
Almost all respondents indicated that child pornography should never be allowed on social media, while 85% said that misleading health information also should be prohibited.
So while the majority believe in freedom of speech as a principle, in practice, most also understand its dangers and harms, and agree that there need to be parameters around what's allowed.
But who decides on that? Who do people believe should be making the call on what's acceptable and what crosses the line?
This is the key question at the core of the current Section 230 debate - and on 230 specifically, respondents were split.
As you can see here, 54% of respondents say that Section 230 laws have done more harm than good, because they have not made social platforms accountable for illegal content on their sites and apps.
Though as recently noted by legal expert Jeff Kosseff, there is still a level of confusion as to how Section 230 laws operate, and what they do and do not cover in terms of social platform liability:
"There is a huge misconception that Section 230 protections disappear if a platform moderates content. Congress passed 230 to prevent platforms from increasing their liability due to editing user content. Yet this misconception has persisted for years, and has shaped some websites' hands-off moderation practices. If they start to "edit" user content, they fear, they will lose Section 230 protections. Again, this is absolutely false."
That noted, the principle that most people are responding to in this survey is whether social platforms should or should not be protected by law in regards to the content they host, even if it's posted by users. A slim majority, as indicated, think that platforms are too protected, which lessens the impetus on them to properly police dangerous content.
But it's quite a conflict, isn't it? As noted in the top response, the majority of people believe that social media users should be free to say what they like, yet the vast majority also agree that some content is off-limits, even within that consideration.
The responses underline the ongoing challenge faced by social platforms, which has led to some adding warning labels and other measures, while others take a more hands-off approach.
Which is the right one? Based on these responses, the public doesn't seem able to come to any clear consensus.
But they do know that they don't trust the platforms themselves to make rulings:
So where does that leave us?
Interestingly, the researchers also asked respondents how they feel about Facebook's new approach, which will see the implementation of an independent Content Oversight Board to rule on difficult content decisions. The Content Oversight Board will include experts from a range of fields and backgrounds, ensuring that various perspectives are taken into account.
And while respondents, initially, didn't seem overly convinced by this approach, after learning more about how the board is intended to function, the majority were in support.
As you can see here, the initial response, without learning about how the system will work, was negative, but having been given more information, that changed significantly.
"More than 8 in 10 Americans say they think a content oversight board is a “good idea” (54%) or “very good idea” (27%), while 12% say it is a “bad idea,” and 7% say it’s a “very bad idea.”"
Maybe, then, that is the key, and Facebook is leading the way with its Content Oversight Board approach - which, unfortunately, won't be in a position to implement any significant change ahead of the 2020 Election.
But it could be the way forward. Amid confusion around Section 230, and attempts to reform such laws, maybe the key is to take the decision-making out of the hands of the platforms themselves, and ensure that trusted, independent groups are consulted on any policy changes.
We won't know how this works, of course, until Facebook's Content Oversight Board begins, but of all the various scenarios represented in this dataset, it's the only one that seems to have any real support.
You can read the full "Free Expression, Harmful Speech and Censorship in a Digital World" report here.