Amid the various social media data scandals and reports of politically motivated manipulation, a new study shows that users are becoming more wary of content shared online, which could help slow the spread of misinformation.
According to a new Pew Research report, which surveyed more than 4,500 US adults, around two-thirds of Americans are now aware of social media bot activity, with a large majority expressing concern that bot accounts are being used maliciously.
That's a good thing, because a big part of the social media misinformation loop is that many users have, thus far, been unaware of how such manipulation occurs.
For example, in another Pew Research report (released earlier this year), researchers found that the vast majority of older social media users don’t understand why posts appear in their Facebook News Feeds.
That's important, because if you didn't know that Facebook shows you more of the information you're likely to agree with - and less of what you won't - your perspective would be, understandably, skewed. This new report suggests that digital media literacy may be improving in this regard, fueled by reports of Russian interference in foreign elections and other related controversies.
Pew researchers also found that most people now believe that the news content they do see on social media is influenced by bot activity, and that such actions have a negative impact on information flow.
Again, this is a positive, in that it may lead to more users taking a moment to consider the potential motivations behind a post - particularly a politically motivated one - before hitting 'Share'.
Second-guessing a post is often all it takes - a moment of extra research, checking Google, using Facebook's own fact-checking tools. The more users hesitate and consider the logic of such content, the more we may be able to slow the spread of fake news reports, and limit their influence online.
But that said, identifying bots remains a concern.
As noted, Facebook has added new tools to assist in this regard, but identifying bots is not easy. Still, the impetus is there: people are looking at social content with increased skepticism, which, hopefully, forms the basis for a more informed information flow online.
This is particularly relevant right now, with Facebook providing even more insight into its most recent data breach, in which 30 million user accounts were specifically targeted, and removing another network of profiles, Pages and apps linked to a Russian firm - this one developing facial recognition software for the Russian government. Twitter's also under investigation in Europe over the information it tracks via link shorteners.
As more data breaches and their potential impacts are uncovered, it's increasingly important that the public considers its own data security, and becomes more aware of how digital platforms can be - and are being - used to target political vulnerabilities.
Pew's latest report shows that this is happening - that more users are considering what's being shared, and why - which may help improve the accuracy of information online, and reduce divisive opportunism by politically motivated groups.
You can read the full Pew Research social bots report here.