If you’re anything like me, you’ve probably had the following ‘debate’ multiple times on the internet:
Me: “I’m happy X got banned from Twitter. All they did was incite hatred.”
Him/her: “I believe in free speech. Censorship is bad.”
Me: “But they were infringing on the rights of others to feel safe and spreading hateful rhetoric.”
Him/her: “Free speech is the cornerstone of any healthy society. I may disagree with what someone is saying, but I will defend to the death their right to say it.”
Before you ask, no, I’m 90% certain none of them have actually read Voltaire other than that one quote. Neither have I, but then, I don’t use quotes from people I don’t know. Unless that was one just now. Aw, sh-.
Free speech absolutists argue that there should be minimal to no limitations on free speech, regardless of context. In other words, this does not only apply to government interference, as most legal definitions of free speech imply, but also extends to private companies and the like. It is a stance that is popular with the alt-right, as they like to accuse their opponents of lacking emotional resilience, and of too quickly turning to what the alt-right believe to be censorship to silence these undesirable views. Thus, they strongly oppose the banning of people like Alex Jones and Milo Yiannopoulos by private companies, claiming that simply being ‘mean’ or hurtful should never be grounds for ‘censorship’.
Now, how do you counter this? After all, to the untrained eye it is a somewhat reasonable stance, which certainly is not exclusive to the alt-right: free speech is extremely important, and stifling it, even by private companies, should only be done with extreme caution. Your opponent can easily accuse you of supporting censorship, and we have all learned how buzzwords and sentences like those can sway the public opinion.
However, the concept of free speech does not, cannot, and should not ever exist in a vacuum.
First off, let’s talk about the actual act of banning someone from a private platform. Since this isn’t so much a debate about legality as it is about morality, it’s no use trying to delve into whether or not these private parties are allowed to ban people from their platforms (short answer: they are). Free speech absolutists treat free speech not as a legal right, but as a moral one. Thus, we need to find a good moral reason to support the act of banning, to counter the supposed infringement on free speech that the alt-right likes to argue it is. I’m not talking about examples of hate speech, as those vary from individual to individual, but overarching motivations that prove that banning people like the Alex Joneses and Milos of this world can have a positive impact on society.
To do this, one tool that can prove useful is the theory of ‘social contagion’.
Social contagion is a sociological concept holding that observing or interacting with certain behavioral phenomena can lead people, intentionally or unintentionally, to copy those behaviors, often in the face of logic or reason. One can think of social phenomena like mass hysteria, which can even lead to deaths, as the ‘Dancing Plague’ of 1518 showed. Indeed, as the article notes:
The results of contagion research suggest that just as we do not choose to be infected with, and pass on, biological contagions, we often behave as if we have little control over the culture we become infected with and consequently spread.
Perhaps the most impactful recent example of the illogical nature of social contagion has been Donald Trump’s rise to the US presidency. Trump’s primary appeal was that he appeared to be ‘one of the people’, despite the bizarre nature of such a statement applying to a billionaire who lived in a golden tower. But the theory of social contagion can explain this seemingly illogical stance as part of an ‘infection’ (not to be understood in a negative sense, but in a purely biological one) which afflicted people who found themselves repeatedly exposed to such rhetoric.
Now, an easy rebuttal to this suggestion is that plenty of people were exposed daily to the slogans and rallying cries of Trump and his followers, yet he is disliked by a large part of the population. Why haven’t those people become ‘infected’? Well, there are of course other factors at play than simple exposure to a certain phenomenon. Just as not everyone gets the flu during the winter, certain features can increase the odds of a social contagion spreading: the frequency of exposure, prior beliefs and motivations, and, most obvious of all, conscious choice. While very few people would willingly accept being infected by the flu, a person who already holds negative views on a certain concept can willingly accept and spread a social contagion which further proliferates those views.
As social contagion can apply to every aspect of society (including, for instance, finance and consumerism), social media is of course not exempt. In fact, it might be even more vulnerable than other fields due to its highly interactive nature. Such an approach to social media, and the messages posted there, directly counters the notion that people who espouse violent or inflammatory rhetoric exist in a guiltless vacuum, and that it is entirely up to their audience to decide how to absorb their words.
Continued exposure to a group which, say, repeatedly claims that the media and the left wing are enemies of the people, can influence a person to make some very unreasonable decisions. Social contagion can, for example, also explain the increase in harassment actress Leslie Jones endured after Milo Yiannopoulos decided to directly target her for harassment. Witnessing a prominent figure like Yiannopoulos openly harass someone, and seeing that, at the time, Twitter did not immediately react to put a stop to it, made these people believe that such behavior was acceptable and even warranted. These people were looking for an excuse to espouse their bigoted views, and Milo and Twitter provided them one.
So does that mean that when a guy walks into a pizza place with a rifle, Alex Jones is to blame? No, not directly. But he can very much be held accountable for proliferating such views and lending them credibility, thereby increasing the likelihood that someone, who perhaps might already not be entirely sound of mind, will take personal action.
This is why it is important to ban these individuals and groups wherever they sprout up. Why should their ‘right’ to free speech have precedence over other people’s rights to feel safe? Why should this dogged loyalty to such a nebulous concept have precedence over the concrete, real life consequences the spread of these types of rhetoric can have? Does this mean that I am arguing for a dismissal of free speech altogether? No, of course not. The goal here is to show that free speech is in no way an absolute: it has limits and boundaries, and these can be reevaluated whenever a new context presents itself.
And before anyone proudly proclaims that free speech cannot by nature have boundaries: that would mean that no nation anywhere has free speech, because speech without boundaries is completely unrealistic. By all means, go out and shout ‘fire’ in a crowded theater, and see how far your free speech arguments get you.
Next, we need to answer the most obvious question of all: does it work? Does banning a hateful individual or group better these private platforms as a whole? As it turns out, yes. Yes, it does.
A few years back, the extremely popular user-driven site Reddit banned a number of hateful subreddits (sub-forums that specialize in certain subjects), such as the openly racist Coontown. At the time, the act was derided as a blatant act of censorship, or even as counterproductive: it was feared that removing their private gathering grounds would cause these individuals to spread across the site, infecting other subreddits.
A lengthy study showed otherwise. The banning of these hateful subreddits led to a direct decrease in hateful speech on Reddit as a whole. Many accounts that had previously engaged in virulent hate either greatly reduced the amount of hate they espoused, or discontinued their accounts altogether. Likewise, there was no evidence that other subreddits had been negatively impacted by the supposed ‘migration’ of the denizens of these hateful communities. To quote the findings of the article:
For the definition of “work” framed by our research questions, the ban worked for Reddit. It succeeded at both a user level and a community level. Through the banning of subreddits which engaged in racism and fat-shaming, Reddit was able to reduce the prevalence of such behavior on the site. The amount of hate speech generated across Reddit by treatment users went down drastically following the ban. By shutting down these echo chambers of hate, Reddit caused the people participating to either leave the site or dramatically change their linguistic behavior (31:17)
When they find their ideas openly denounced, these people either reduce the amount of hate speech they spout, or just leave the platform altogether.
So, what does this tell us? That banning hateful communities is healthier for a platform as a whole? Certainly. But a ‘counter’ to that (which you will inevitably encounter when arguing with free speech absolutists) is the question of who gets to decide what hateful speech is. It is a preferred tactic of the alt-right and other reactionary groups to act as if the people being banned operated on some nebulous border that wasn’t overtly hateful or inflammatory. From this, they then usually pull out the slippery slope argument: if these people got banned, what’s stopping these platforms from going even further?
It is a logical fallacy, of course: it assumes, without evidence, that one step will inevitably lead to the next. You cannot truly argue against it (because it’s not a logical argument), which is why it is so liked in reactionary circles. These groups always ignore that, thus far, the people who got banned from platforms such as Twitter or Reddit were very obviously hateful. Do we really need to double-check whether a community called BeatingWomen was violently misogynistic (and yes, that community really existed, and was allowed to exist for a long time on Reddit)? Likewise, Milo Yiannopoulos repeatedly harassed actress Leslie Jones and encouraged his followers to do the same, to the point where Jones herself quit the platform. As far as I can tell, this supposed slippery slope seems pretty damn solid.
If you as an individual or group feel threatened by these kinds of crackdowns by social media on hate speech and the like, I can only advise you to do some serious soul-searching to truly see where your own convictions lie.