One of the more prescient points in Naomi Klein's book "No Logo" was that, as more and more public space becomes privatized, the number of places where we have access to our full rights of free speech and expression decreases. The two are inversely correlated. Klein was mostly talking about physical spaces: when shopping malls become the hangouts where we spend much of our time, we lose rights, because shopping malls, being privately owned, can kick us out any time they want, especially if we disrupt the flow of commerce.
Now, over the last fifteen years or so, we've moved our "public" discussions almost entirely into private spaces. Forgive the generalization, but the places where we talk about things are now all online, and privately owned. It may seem like you can say almost anything you want on Facebook or Twitter without being censored, but that's only because of the restraint of their owners. If they wanted to, they could censor anything at all. And it would be their right to do so.
So it is with ominous background music that Eric E. Schmidt, Executive Chairman of Google, discusses, in a New York Times editorial, what to do about the growing use of social media and the internet by extremist and terrorist groups to disseminate their message and propaganda. Schmidt is even-handed about the impact of the internet as a whole, saying that "it has created friendships, strengthened connections and fulfilled dreams for billions of people around the world," while noting how it has been misused by those with evil intent.
The phrase in the editorial that everyone has fixated on, however, is this one: "We should build tools to help de-escalate tensions on social media - sort of like spell-checkers, but for hate and harassment." Many have taken this to mean that programs and algorithms should be employed to detect and delete extremist propaganda and accounts automatically, before their message has a chance to spread. This is a very nice sentiment, but, when it comes to censorship, automation is the tip of the sword.
To use Schmidt's own analogy: Ever notice how spell-check sometimes marks real names as misspelled? Or place names? It often tells you to fix things that don't need fixing. The problem is that algorithms are indifferent to meaning. They recognize certain phrases, a certain vocabulary, identify the text as having certain content, and act accordingly, either to, say, promote a post in a Facebook News Feed or, in the case of Schmidt's suggestion, block extremism.
But what if you're simply debating some aspect of that extremism online? Algorithms have a difficult time teasing out the nuances of context. Will they be able to tell the difference between actual propaganda and a joke about propaganda? Maybe algorithms and programs will be developed to the point that they can do that, but right now, I have deep doubts that such a thing is possible.
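The failure mode described above can be made concrete with a toy sketch. Everything here is hypothetical (the blocklist phrases, the `flag` function); real moderation systems are far more sophisticated, but a simple phrase-matching filter shows why context is the hard part:

```python
# Hypothetical sketch of naive phrase-based moderation, for illustration only.
# A real system would use far richer signals, but the core problem remains.
BLOCKLIST = {"join our glorious cause"}  # made-up propaganda phrase

def flag(post: str) -> bool:
    """Flag a post if it contains any blocklisted phrase (case-insensitive)."""
    text = post.lower()
    return any(phrase in text for phrase in BLOCKLIST)

# Actual propaganda gets flagged...
print(flag("Join our glorious cause today!"))  # True
# ...but so does a post mocking or debating it, because the filter
# sees the phrase, not the speaker's intent:
print(flag("Can you believe anyone falls for 'join our glorious cause'?"))  # True
```

Both posts trip the same filter, even though one is recruitment and the other is criticism of it.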
Should ISIS be able to recruit or disseminate propaganda through social media? No, of course not. But that's the obvious example. Nobody likes the Westboro Baptist Church. Duh. But what about speech that, while vile, is less extreme than that? Where is the line drawn? The usefulness of constitutional free speech protections is that they are very broad. You can say that Jerry Falwell copulated with his mother in an outhouse, and because even that is protected, more reasonable opinions are protected too.
But what do we do when Google or Facebook or Twitter are the entities making the determinations, not courts or the law? What if governments try to take advantage of the situation, as is happening in some places right now?
We live online now, and we have to deal with the consequences of that. But the first priority of the new world we live in should be the protection of our fundamental rights. Without them, all the security in the world won't mean a thing.