EU officials certainly seem keen to enforce the obligations of their new Digital Services Act, with new reports that the EU has launched an official investigation into X over how it’s facilitated the distribution of “graphic illegal content and disinformation” linked to Hamas' attack on Israel over the weekend.
Various reports have indicated that X’s new, more streamlined, more tolerant approach to content moderation is failing to stop the spread of harmful content, and now, the EU is taking further action, which could eventually result in significant fines and other penalties for the app.
The EU’s Internal Market Commissioner Thierry Breton issued a warning to X owner Elon Musk earlier in the week, calling on Musk to personally ensure that the platform’s systems are effective in dealing with misinformation and hate speech in the app.
Musk responded by asking Breton to provide specific examples of violations, though X CEO Linda Yaccarino then followed up with a more detailed overview of the actions that X has taken to manage the rise in related discussion.
Though that may not be enough.
According to data published by The Wall Street Journal:
"X reported an average of about 8,900 moderation decisions a day in the three days before and after the attack, compared with 415,000 a day for Facebook."
At first blush, that gap seems to make some sense, given the difference in audience size (Facebook has 2.06 billion daily active users, versus X’s 253 million). But adjusted for users, the numbers show that Facebook is actioning almost six times more reports per user, on average, than X. So even with the audience variation in mind, Meta is taking a lot more moderation action, which includes addressing misinformation around the Israel-Hamas war.
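To make the comparison concrete, here’s a back-of-the-envelope check of the per-user gap, using only the figures reported above (the per-day moderation counts and daily active user numbers):

```python
# Rough per-user comparison of moderation activity, based on the
# reported figures above. Illustrative arithmetic only.
facebook_daily_actions = 415_000       # avg moderation decisions per day
x_daily_actions = 8_900                # avg moderation decisions per day

facebook_dau = 2_060_000_000           # daily active users
x_dau = 253_000_000                    # daily active users

fb_rate = facebook_daily_actions / facebook_dau   # actions per user per day
x_rate = x_daily_actions / x_dau

print(f"Facebook: {fb_rate:.2e} actions per user per day")
print(f"X:        {x_rate:.2e} actions per user per day")
print(f"Ratio:    {fb_rate / x_rate:.1f}x")       # ~5.7x, i.e. 'almost six times'
```

The ratio works out to roughly 5.7, which is where the “almost six times” figure comes from.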
So why such a big difference?
In part, this is likely due to X putting more reliance on its crowd-sourced fact-checking feature, Community Notes, which enables the people who actually use the app to moderate, for themselves, the content that’s shown.
Yaccarino noted this in her letter to Breton, explaining that:
“More than 700 unique notes related to the attacks and unfolding events are showing on X. As a result of our new ‘notes on media’ feature, these notes display on an additional 5000+ posts that contain matching images or videos.”
Yaccarino also said that Community Notes related to the attack have already been viewed “tens of millions of times”. Clearly, X is hoping that Community Notes will make up for any shortfall in moderation resources resulting from its recent cost-cutting efforts.
But as many have explained, the Community Notes process is flawed, with the majority of notes that are submitted never actually being displayed to users, especially around divisive topics.
Because Community Notes require consensus from people of opposing political viewpoints in order to be approved, many contextual pointers are left in review, never to see the light of day. That means Community Notes work well for claims where there’s broad agreement, like identifying AI-generated images, but for topics that spark dispute, they’re not overly effective.
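As a simplified, hypothetical illustration of why this cross-viewpoint requirement stalls notes on divisive topics (the actual Community Notes system scores notes with a more sophisticated algorithm over rater history; the function, groups, and thresholds below are invented for the sketch):

```python
# Simplified, hypothetical sketch of cross-viewpoint gating -- NOT the real
# Community Notes scoring algorithm. The idea: a note only surfaces when
# raters from *both* viewpoint clusters find it helpful.

def note_is_shown(ratings, threshold=0.6, min_raters=2):
    """ratings: list of (viewpoint_group, is_helpful) tuples."""
    groups = {"left": [], "right": []}
    for group, helpful in ratings:
        groups[group].append(helpful)
    for votes in groups.values():
        if len(votes) < min_raters:
            return False      # not enough input from one side
        if sum(votes) / len(votes) < threshold:
            return False      # one side disagrees -> note stays in review
    return True

# Uncontested claim (e.g. flagging an AI-generated image): both sides agree
print(note_is_shown([("left", True), ("left", True),
                     ("right", True), ("right", True)]))    # True

# Divisive claim: one side rates it unhelpful, so the note never displays
print(note_is_shown([("left", True), ("left", True),
                     ("right", False), ("right", False)]))  # False
```

The second case is the failure mode critics point to: on polarizing topics, one viewpoint cluster reliably withholds approval, so accurate notes can sit in review indefinitely.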
In the case of the Israel-Hamas war, that requirement could be a real impediment, and the numbers suggest that X is putting too much reliance on volunteer moderators for key concerns like terrorism-related content and organized manipulation.
Indeed, third-party analysis has indicated that coordinated groups are already looking to seed partisan information about the war, while X’s new “freedom of speech, not reach” approach has led to more offensive, disturbing content being left active in the app, despite it essentially promoting terrorist activity.
X’s view is that users can choose not to see such content by updating their personal settings. But if posters fail to tag this material in their uploads, the system still falls short.
Given all of these considerations, it’ll be interesting to see how EU regulators proceed with this action, and whether they find that X’s new systems are adequately addressing these risks through its moderation and mitigation processes.
Essentially, we don’t yet know how significant this issue is, but external analysis, based on user reports and the data accessible from X, will provide more insight, and could put X under further pressure to police rule-breaking content in the app.