So what have we learned from the latest disclosure of internal Facebook documents and research?
Well, not a lot, really. Former Facebook product manager Frances Haugen released an initial set of internal reports from The Social Network last month, which outlined various concerns, including its struggles in handling anti-vaccine content, the harmful impacts of its algorithm changes, and the negative mental health effects of Instagram on teens.
Haugen released another cluster of reports this week, via a coordinated effort with various major publications, which expand on her initial claims and add more detail on various aspects. All of it is interesting, no doubt, and all of it shines light on what Facebook knows about its systems, how they can sow division and angst, and their broader societal impacts. But the revelations also largely underline what we already knew or suspected: that Facebook's lack of local language support has led to increased harm in some regions, that its network is used for criminal activity, including human trafficking, and that Facebook may have prioritized growth over safety in some decision making.
All of this was largely known, but the fact that Facebook also knows, and that its own research confirms as much, is significant, and will lead to a whole new range of actions being taken against The Social Network, in varying forms.
But there are also some valuable notes that we weren't previously aware of hidden among the thousands of pages of internal research insights.
One key element, highlighted by journalist Alex Kantrowitz, relates to the controversial News Feed algorithm specifically, and how Facebook has worked to balance concerns with content amplification through various experiments.
The main solution pushed by Haugen in her initial testimony to Congress about the Facebook Files leak is that social networks should be forced to stop using engagement-based algorithms altogether, via reforms to Section 230, which, in Haugen's view, would change the incentives for social platform engagement and reduce the harms caused by these systems.
As explained by Haugen:
“If we had appropriate oversight, or if we reformed [Section] 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking.”
But would that work?
As reported by Kantrowitz, Facebook actually conducted an experiment to find out:
“In February 2018, a Facebook researcher all but shut off the News Feed ranking algorithm for .05% of Facebook users. “What happens if we delete ranked News Feed?” they asked in an internal report summing up the experiment. Their findings: Without a News Feed algorithm, engagement on Facebook drops significantly, people hide 50% more posts, content from Facebook Groups rises to the top, and - surprisingly - Facebook makes even more money from users scrolling through the News Feed.”
The experiment showed that without the algorithm ranking content based on various factors, users spent more time scrolling to find relevant posts, exposing them to more ads, while they ended up hiding a lot more content - which, in a chronological feed, doesn't have the ongoing benefit of reducing the likelihood of seeing more of the same in future. Groups content rose because users are more engaged in groups (i.e. every time someone posts an update in a group that you're a member of, you could be shown that in your feed), while far more of your friends' comments and likes led to Page posts appearing in user feeds.
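To make the distinction concrete, here's a toy sketch of the two feed-ordering approaches being compared. This is purely illustrative - the field names, scores, and logic are hypothetical simplifications, not Facebook's actual ranking system, which weighs hundreds of signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int                # seconds since some epoch (simplified)
    predicted_engagement: float   # hypothetical model score, 0.0 to 1.0

def chronological_feed(posts):
    """Reverse-chronological ordering: newest first, no ranking model.
    High-volume sources (e.g. active Groups) dominate by sheer post count."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts):
    """Engagement-based ordering: posts the model predicts will drive
    the most interaction float to the top, regardless of recency."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("friend", timestamp=100, predicted_engagement=0.2),
    Post("group",  timestamp=50,  predicted_engagement=0.9),
    Post("page",   timestamp=150, predicted_engagement=0.6),
]

# The same three posts surface in a completely different order
# depending on which scheme is used.
print([p.author for p in chronological_feed(posts)])  # ['page', 'friend', 'group']
print([p.author for p in ranked_feed(posts)])         # ['group', 'page', 'friend']
```

The point of the sketch: removing the ranking model doesn't make the feed neutral, it just swaps one ordering rule (predicted engagement) for another (recency), with its own biases toward whoever posts most often.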
So a negative overall, and not the solution that some have touted. Of course, part of this is also based on habitual behavior, in that, eventually, users would likely stop following certain Pages and people who post a lot, they’d leave certain groups that they’re not so interested in, and they’d learn new ways to control their feed. But that’s a lot of manual effort on the part of Facebook users, and Facebook engagement would suffer because of it.
You can see why Facebook would be hesitant to take up this option, while the evidence here doesn’t necessarily point to the feed being less divisive as a result. And this is before you take into account that scammers and Pages would learn how to game this system too.
It’s an interesting insight into a key element of the broader debate around Facebook’s impact, with the algorithm often being identified as the thing that has the most negative impact, by focusing on content that sparks engagement (i.e. argument) in order to keep people on platform for longer.
Is that true? There's clearly a case to be made that Facebook's systems do optimize for content that's likely to get users posting, and the best way to trigger a response is through emotional reaction, with anger and joy being the strongest motivators. It seems likely, then, that Facebook's algorithms, whether intentionally or not, do amplify argumentative posts, which can boost division. But the alternative may not be much better.
So what’s the best way forward?
That’s the key element that we need to focus on now. While these internal insights shine more light on what Facebook knows, and its broader impacts, it’s important to also consider what the next steps may be, and how we can implement better safeguards and processes to improve social media engagement.
Which Facebook is trying to do - as CEO Mark Zuckerberg noted in his response to the initial Facebook Files leak.
“If we wanted to ignore research, why would we create an industry-leading research program to understand these important issues in the first place? If we didn't care about fighting harmful content, then why would we employ so many more people dedicated to this than any other company in our space - even ones larger than us?”
Facebook clearly is looking into these elements. The concern then comes down to where its motivations truly lie, but also, as this experiment shows, what can actually be done to fix things. Because removing Facebook entirely isn't going to happen - so how can we use these insights to build a safer, more open, less divisive public forum?
That’s a far more difficult question to answer, and a more deeply reflective concern than a lot of the hyperbolic reporting around Facebook being the bad guy.