Since the 2016 US Presidential Election, the social media landscape has changed significantly - though maybe not in ways many have noticed.
The election, and the subsequent revelations of voter manipulation via social platforms, cast social media in a new light - no longer could it be dismissed as a frivolous, pop culture distributor, mostly populated by kids. As many in business had already worked out, social media is a hugely influential force, and one that now has the power to credibly impact the outcome of elections.
So what could the platforms do to stop it?
They have done a lot - Facebook, for example, has introduced third-party fact-checking, new labels and requirements for 'issue' ads and political candidates, user ratings to better sort false news reports, and - one of the most helpful tools - Page info and ad insights, which let users know things like where a Page's managers are located, and what other names the Page might have had.

Twitter, too, has ramped up its efforts, implementing new API restrictions to limit mass actions (like following and interacting with tweets) and introducing its own badges and tools to provide more transparency around political content. Twitter has also been removing bots and fake profiles at a higher rate than ever.

But are those efforts working? Are social platforms still facilitating the spread of fake news as we head towards the US Midterms, or have these measures lessened the problem?
Apparently not.
According to a new report from the Knight Foundation, a huge amount of fake news activity is still present on Twitter.
The group analyzed more than 10 million tweets from 700,000 Twitter accounts which had linked to more than 600 fake and conspiracy news outlets. They found that in the lead-up to the 2016 US Presidential Election, more than 6.6 million tweets linked to fake news and conspiracy news publishers, a problem which continued after the vote, with 4 million tweets linking to such publishers found from mid-March to mid-April 2017.
And now:
"More than 80% of accounts that repeatedly spread misinformation during the 2016 election campaign are still active, and they continue to publish more than a million tweets on a typical day."
A million tweets a day is a lot - last year, UK researchers uncovered huge bot networks operating on Twitter, with clusters of up to 500,000 accounts working in coordination to share and engage with content. Have they been removed? While Twitter has been removing more bots and fake profiles than ever, could it be that existing bot networks like this are still active?
This is obviously a major concern - social platforms, as noted, now have huge influence, with Pew Research reporting just last month that some 68% of American adults now get news content via social platforms.

Though you will note, as per the bottom chart, that skepticism is growing - and that's a good thing, because users have the tools to check questionable reports at their disposal, on the same devices they're posting and re-sharing such content from. All it takes is a moment of second-guessing and a quick web search, and many fake news reports could be debunked before they're shared - which is what Facebook is trying to promote with tools like 'Related Articles', which highlights other coverage of the same story when users go to share.
But digital literacy does remain a problem, as does the psychological impulse to share content which supports your established viewpoint.
Here's an example - this post has been floating around Facebook recently, sparking anger and fueling religious hate and division.

As you can see, the video has amassed more than 1.1 million views in a week, mostly accompanied by comments like the above.
But that anger is misplaced - this is not a video of a Muslim refugee defacing a religious statue in Italy, it's an incident that happened in Algeria late last year. The man attacked the statue atop the Ain El Fouara fountain because it depicts a naked woman, which he believes is indecent - the same statue has been vandalized several times for the same reason, as Algeria is a majority Muslim nation, and many there see the depiction as distasteful.
Has that stopped the video from being shared en masse? Have Facebook's efforts to clear the platform of fake news like this helped?
Again, part of the issue is digital literacy - it's not difficult to work out that the video caption is incorrect if you look through the comments and/or search for the original video online. But many won't do that - many will simply be angered by the headline and what the video appears to represent. Some might go looking for 'statue attacked in Italy' and get no relevant results (because it didn't happen in Italy), which would confirm, in their minds, that this is part of a media conspiracy, a cover-up - 'why aren't the mainstream outlets reporting this?'
In their minds, this is because the media is censoring certain elements - but in reality, it's because it didn't happen, at least not in the way they think.
You can see, then, how fake news is still able to run rampant, because people lack the capability or motivation to debunk such content. That's why advanced efforts, like Facebook's new process to fact-check visual content, are important - you can't rely on people to question everything, or to clarify things that might seem out of place. Because they won't - you might look at this example and immediately think it seems a bit off, but clearly many users don't have that same impulse.
The new Twitter report, and examples like this, show that we're still a long way from eliminating fake news as an influential factor - it's still flowing through social platforms, shared by people to push certain agendas or beliefs. Maybe, by educating more users on how to use newer fact-checking tools and options, we can move towards more informed digital media consumption, but for now, the problem has apparently not lessened.
If you thought the Midterms would be free - or at least, freer - of manipulation, you may be in for disappointment.