Twitter continues to advance its efforts to ‘clean up’ its platform, this time by focusing on misuse of the platform’s APIs, which provide broad-scale tweet access.
As explained by Twitter:
“…we’re committed to providing access to our platform to developers whose products and services make Twitter a better place. However, recognizing the challenges facing Twitter and the public - from spam and malicious automation to surveillance and invasions of privacy - we’re taking additional steps to ensure that our developer platform works in service of the overall health of conversation on Twitter.”
To this end, Twitter has announced new requirements for developers looking to access its APIs in order to “increase accountability for apps creating and engaging with content and accounts on Twitter at high volumes”.
Accessing Twitter’s APIs enables developers to conduct wide-scale actions on the platform, including everything from mass-tweeting and analysis of profiles to ingesting feeds of all tweets over a given time, helping to detect trends and fuel studies.
First off, from this week, anyone seeking access to Twitter’s APIs will need to apply for a developer account using Twitter’s developer portal.
Previously, apps could also be managed via Twitter’s separate app management platform, but the new process routes all requests through the developer portal, enabling Twitter to better vet access.
Twitter’s also capping the default number of apps that can be registered by a single developer account at 10, and it’s adding app-level rate limits that apply to all requests to Tweet, Retweet, like, follow, or send Direct Messages via the API.
The new default limits, which will come into effect from September 10th, are:
- Tweets & Retweets (combined): 300 per 3 hours
- Likes: 1000 per 24 hours
- Follows: 1000 per 24 hours
- Direct Messages: 15,000 per 24 hours
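To make the new defaults concrete, here is a minimal sketch of how a developer might enforce these windows client-side before calling the API. The `WindowRateLimiter` class and `LIMITS` mapping are illustrative only – they are not part of any Twitter SDK, and the actual enforcement happens server-side on Twitter’s end.

```python
import time
from collections import deque

class WindowRateLimiter:
    """Client-side sliding-window limiter for one action type,
    e.g. 300 combined Tweets & Retweets per 3-hour window.
    Hypothetical helper, not a Twitter library class."""

    def __init__(self, max_calls, window_seconds):
        self.max_calls = max_calls
        self.window = window_seconds
        self.timestamps = deque()  # times of calls still inside the window

    def allow(self, now=None):
        """Return True (and record the call) if another call fits
        in the current window; False if the limit is exhausted."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_calls:
            self.timestamps.append(now)
            return True
        return False

# Twitter's new default app-level limits, as quoted above:
LIMITS = {
    "tweets_and_retweets": WindowRateLimiter(300, 3 * 3600),
    "likes": WindowRateLimiter(1000, 24 * 3600),
    "follows": WindowRateLimiter(1000, 24 * 3600),
    "direct_messages": WindowRateLimiter(15000, 24 * 3600),
}
```

An app would check `LIMITS["tweets_and_retweets"].allow()` before each posting call and back off when it returns `False`, rather than burning requests into a server-side rejection.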
The new restrictions, while only applicable to larger-scale Twitter research and developer users, are another step toward limiting spam and misuse of the platform.
As noted by Twitter:
“We do not tolerate the use of our APIs to produce spam, manipulate conversations, or invade the privacy of people using Twitter. Between April and June 2018, we removed more than 143,000 apps which violated our policies, and we’re continuing to invest in building out improved tools and processes to help us stop malicious apps faster and more efficiently.”
In combination, the efforts underline Twitter’s renewed focus on improving the accuracy of its platform data, and ensuring that information, like the follower count figure, is meaningful – and not merely loaded with fakes.
Twitter still has a long way to go on this front, but this latest update shows not only that platforms are taking the data they provide more seriously, but also that social platforms have become a more legitimate proxy for real-world activity.
With people using social data to inform their thinking and decisions across a range of processes, it’s important that the numbers they’re basing those decisions on are real, and represent real people – not bots or spammers looking to cheat the system.
For example, rather than fighting misinformation by seeking to disprove it, Twitter could reduce the spread of such content on its platform by removing the bots which mass-share it and inflate retweet counts – counts which add an element of ‘social proof’ and help to legitimize such claims.
Indeed, in the wash-up from the 2016 US Presidential Election, researchers uncovered what they claimed were huge, inter-connected Twitter bot networks, with the largest incorporating some 500,000 fake accounts. Such networks became a focus after reports suggested that Donald Trump was benefiting from bots which were retweeting pro-Trump messages, thereby increasing his share of voice and boosting his messages over Hillary Clinton.
As reported by Recode:
"During the third presidential debate, Twitter bots sharing pro-Trump-related content outnumbered pro-Clinton bots by 7 to 1. And in the span between the first and second debates, more than a third of pro-Trump tweets were generated by bots, compared with a fifth for pro-Clinton tweets."
Given the real-time nature of Twitter, adding flags to suspect stories and providing links for further context – as Facebook has done – is likely not an option, but eliminating bots and misuse could be just as effective in stopping the spread of such material, while also making Twitter’s numbers more accurate and accountable overall.
The changes to Twitter’s API limits won’t affect the everyday user, but they're part of a broader shift which could have a significant impact.