Facebook CEO Mark Zuckerberg has posted a new conversation with author and historian Yuval Noah Harari, in which the two discuss ethics in the digital age, and the challenges specific to Facebook in ensuring that its platform is used for societal good, and not to increase divides.
The discussion looks at some of Facebook's key challenges from a high level, and within a historical context. There are some fascinating points of consideration in here, but one note, in particular, stands out as key to the fundamental challenge confronting The Social Network.
On the spread of news and information via the News Feed, Harari argues that the algorithm sorting process may not necessarily be beneficial for broader society, because while it does show people more of what they like and want to see, that's not always beneficial for broader awareness and understanding.
Harari notes that Facebook's aim is to show people more of what makes them feel good, in order to keep them coming back to the platform - but that's not necessarily a good thing in itself.
"People that feel good about themselves have done some of the most terrible things in human history. I mean, we shouldn’t confuse people feeling good about themselves and about their lives with people being benevolent and kind and so forth."
This is a key point - while reading more of what reinforces your established point of view may make you happier, and more aligned with the platform, it can also further entrench isolated perspectives, and solidify movements without providing a counterpoint - which may actually cause more societal division in the process.
Zuckerberg does note that Facebook isn't entirely reliant on machine learning of this kind to build its systems, that there is an inherent level of humanity built in, and that adds more balance to the calculation:
"We bring in real people to tell us what their real experience is in words, right? Not just kind of filling out scores, but also telling us what were the most meaningful experiences you had today, what content was the most important, what interaction did you have with a friend that mattered to you the most and was that connected to something that we did? And, if not, then we go and try to do the work to try to figure out how we can facilitate that."
Zuckerberg also argues that Facebook isn't only driven by engagement - and profit - in this regard, using an example of a recent decision to reduce the spread of viral videos.
"Last year on one of our earnings calls, I told investors that we’d actually reduced the amount of video watching that quarter by 50 million hours a day, because we wanted to take down the amount of viral videos that people were seeing, because we thought that that was displacing more meaningful interactions that people were having with other people, which, in the near-term, might have a short-term impact on the business for that quarter, but, over the long term, would be more positive both for how people feel about the product and for the business."
Theoretically, such actions, according to Zuckerberg, should better angle the algorithms towards distributing 'meaningful', beneficial content. But that still doesn't address the core concern of polarization based on user bias.
That's where Harari makes a key, resonant point on Facebook's distribution systems and relative influence.
"Ultimately, what I’m hearing from you and from many other people when I have these discussions, is ultimately the customer is always right, the voter knows best, people know deep down, people know what is good for them. People make a choice: If they choose to do it, then it’s good. And that has been the bedrock of, at least, Western democracies for centuries, for generations. And this is now where the big question mark is: Is it still true in a world where we have the technology to hack human beings and manipulate them like never before that the customer is always right, that the voter knows best? Or have we gone past this point? And we can know – and the simple, ultimate answer that “Well, this is what people want,” and “they know what’s good for them,” maybe it’s no longer the case."
This is a critical consideration for Facebook, and for any platform utilizing an algorithm-defined system to guide engagement. Is showing people more of what they like and agree with actually beneficial, or does it inherently work to reinforce niche bias and solidify division through tribalism?
We're seeing more and more examples of once-fringe movements gaining momentum in the modern age - consider the rise of anti-vaxxers, flat-earthers and the like, and how prominent they now are in the public consciousness. That wasn't always the case - could it be that these groups are being boosted by digital systems which show users more of what they'll agree with, and less of what they won't, essentially reinforcing such skewed beliefs?
And if that is the case, how do you counter it - and as Harari argues, how do you ensure the same is not used by groups with ill-intent to manipulate our wider consciousness?
"To what extent you can really trust that the thought that just popped up in your mind is the result of some free will and not the result of an extremely powerful algorithm that understands what’s happening inside you and knows how to push the buttons and press the levers and is serving some external entity and it has planted this thought or this desire that we now express?"
Zuckerberg counters this by noting that, in his view, people are inherently distrustful of being told what to believe.
"I think people really don’t like and are very distrustful when they feel like they’re being told what to do or just have a single option. One of the big questions that we’ve studied is how to address when there’s a hoax or clear misinformation. And the most obvious thing that it would seem like you’d do intuitively is tell people, “Hey, this seems like it’s wrong. Here is the other point of view that is right,” or, at least, if it’s a polarized thing, even if it’s not clear what’s wrong and what’s right, “here’s the other point of view,” on any given issue. And that really doesn’t work, right? So, what ends up happening is if you tell people that something is false, but they believe it, then they just end up not trusting you."
What's most interesting in the interview is that Zuckerberg's optimism remains a key point of contention. Through all of the various Facebook scandals in recent times, the image that has repeatedly come to light is that Facebook, as a company, is not necessarily malicious, nor necessarily seeking to maximize revenue at all costs - but that Facebook's team leans too far toward optimism, often ignoring the potential dangers, or simply failing to recognize them at all as a result.
In Zuckerberg's comments here, he explains how Facebook, and social media more broadly, has been great for bringing people together, for facilitating connection between individuals with niche interests who would not have been able to find each other in their physical world, within their geographic limitations. Facebook, Zuckerberg argues, is beneficial because of this - but again, that overlooks the fact that the same also applies to people with niche interests which are not beneficial for society, and which lead to further division.
Does the good outweigh the bad? In some ways, the argument is irrelevant - this is the reality of the world we live in now, and digital connection is a part of that. But the evidence would suggest that while the connective capacity of social platforms can be hugely beneficial, the opposite is also true. Zuckerberg appears to be slowly coming to the realization that not all people are looking to use such tools for 'good' purposes. But he also still seems far from convinced of such impacts, which may mean that Facebook, more broadly, is still leaning too far to the optimistic side of the discussion.