Twitter Has a Big Problem – But Can It Be Fixed?
Twitter has a problem. Well, Twitter has a few problems - but one of the biggest is the platform's role in facilitating the spread of fake news, with a new study showing that false reports spread much further and faster on the platform than the truth.
The report, conducted by researchers from MIT, analyzed a huge range of contested news stories across the span of Twitter’s existence - over 126,000 stories, tweeted by 3 million users, over more than 10 years. The research showed that people are far more inclined to re-tweet and share false reports than true ones - as explained in The Atlantic:
“By every common metric, falsehood consistently dominates the truth on Twitter, the study finds: Fake news and false rumors reach more people, penetrate deeper into the social network, and spread much faster than accurate stories.”
That’s obviously a significant concern, particularly in light of the ongoing revelations about how foreign groups have sought to influence the outcome of elections in different regions. The data shows that Twitter could not only play a part in this, but also that people actively seek to do so, with fake reports seemingly proving more alluring than facts for active tweeters.
And the blame can’t be put on bots either:
“From 2006 to 2016, Twitter bots amplified true stories as much as they amplified false ones, the study found. Fake news prospers, the authors write, ‘because humans, not robots, are more likely to spread it.’”
The study does, as some experts have noted, overlook the more strategic use of bots in recent times (which fell outside the researchers' frame of reference), but even so, the data suggests that people are to blame - that real users are more inclined to tweet rumors and fake reports than facts.
So what does that mean for Twitter?
For its part, Twitter was actively involved in the study, and this week, Twitter CEO Jack Dorsey and other company leaders held a Periscope live-stream to discuss the findings, the state of the platform, and what they're doing to address such issues, among other concerns.
Among the possible solutions? Opening up verification to all users.
Such a move has been suggested before – back in 2016, Twitter opened up the verification application process to all users, enabling anyone to apply for the blue checkmark. Twitter says that the idea behind this was to use verification as a means of identification, not endorsement, but there's been confusion – both internally and externally – around the exact parameters and what verification means.
Twitter acknowledged this last November, when they also announced that they were “pausing” verification after controversy surrounding their approval of certain users.
“Verification was meant to authenticate identity & voice but it is interpreted as an endorsement or an indicator of importance. We recognize that we have created this confusion and need to resolve it. We have paused all general verifications while we work and will report back soon” - Twitter Support (@TwitterSupport), November 9, 2017
In the live-stream, Dorsey noted that:
“The intention is to open verification to everyone, and to do it in a way that’s scalable, where [we’re] not in the way, and people can verify more facts about themselves - and we don’t have to be the judge or imply any bias on our part.”
Verifying users based on identity could help reduce the distribution of false news by tying accounts to real-world identities. As TechCrunch's Josh Constine noted in a post on the challenges of free speech, if platforms required some form of validation connecting an account to a real person (by, for example, requiring a phone number), that could change how people use them and reduce misuse, because perpetrators could be more easily identified rather than remaining anonymous.
Of course, there are challenges to that too – Dorsey also acknowledged that anonymity is, in some cases, important, noting that Twitter doesn't currently enforce a real-name policy because he wants the platform to remain a safe space for people to speak their minds without sharing identifying details.
And even if real-world identities were required, the MIT study suggests that rumor and false narratives may simply be more appealing – people may share such stories because they're more interesting than the truth, and because they inspire more interaction.
You'd expect that implementing some form of real-world identification would reduce this - but then again, maybe it wouldn't. Maybe this is just human nature, and giving everyone a platform to share their thoughts and opinions simply amplifies an undercurrent that's long existed, one that's thus far been kept in check because our media inputs were controlled by 'truth keepers' of sorts - actual journalists and news organizations.
Meanwhile, over time, the quality of journalism and media representation has also bent to the whims of social networks and online ad dollars, with more publications printing more controversial and divisive headlines, leading to the rise of click-bait and partisan coverage.
You’d like to hope that this isn’t the case, that fake news is being pushed by less reputable sources, and that by identifying them we could restore some level of balance. But maybe not.
As noted by one of the report’s authors, Deb Roy:
“Polarization has turned out to be a great business model.”
That statement, for better or worse, may best sum up the current state of online media.
NOTE: One concession of the MIT report worth noting is that its frame of reference may limit the findings - the analysis covered only contested stories, so a truthful story that was never disputed wasn't measured, even if it was widely shared. That may weaken the comparative conclusion about our inclination to share fake reports, though the data still highlights a significant concern within disputed or false information.
Follow Andrew Hutchinson on Twitter