From hate speech to good speech: moderating online content in a free society

Good Society Forum
Jul 28, 2020

By the early 2010s, social media platforms had connected the world and given a voice to those who had previously been muted. There was tangible excitement about the potential of platforms like Twitter and Facebook to fulfill the democratic values of free speech, equality and participation. Movements such as the Arab Spring seemed to vindicate this techno-optimism.

However, by 2020, we have learned that social media platforms can also be used for brutal oppression and against democratic values: clear examples are the incitement to genocide against Rohingya Muslims on Facebook in Myanmar, Russian interference in the 2016 US elections through social media platforms, and the cases in India where WhatsApp was used to organize lynchings. Just as social media can be used by activists to organize protests, it can be used by repressive governments to track them down. Just as it can be used to denounce human rights abuses, it can be used to incite violence. Just as it can be used to spread speech that will help build the good society, it can be used to spread hate speech that can tear it apart. This raises the question of how, and by whom, online content should be moderated in a society that values free speech but recognizes that speech is sometimes limited by other human rights.

On June 22, 2020, we brought together experts and practitioners from different backgrounds to help us analyze this question. We were joined by Jason Stanley, Professor of Philosophy at Yale University; Priscilla Ruiz, Legal Coordinator for Digital Rights for Article 19’s office for Mexico and Central America; Imran Ahmed, CEO and founder of the Centre for Countering Digital Hate in the UK; and Murtaza Shaikh, Co-Director of think tank Averroes.

“Social media manufactures dissent”, argued Jason Stanley, underscoring the design problems of social media platforms with a nod to Noam Chomsky and Edward S. Herman’s famous book in propaganda studies, Manufacturing Consent. Unlike mass media, which is used to “manufacture consent”, social media “creates a mechanism that splits us apart, cleaves society and creates masses of outrage”. Stanley explained that outrage is prevalent on these platforms because outrage keeps people hooked, so the platforms themselves incentivize content that elicits it. Outrage itself is neutral, he warned: it can be used by democratic movements or authoritarian leaders, but lately it has fed antidemocratic movements. “In a democracy, your opponent is not someone you are supposed to be outraged with, it is someone that you are supposed to sit down and compromise with.”

Stanley also noted how social media platforms have shifted the power relations of speech. “Traditionally, we think of the speech directed by the powerful against the powerless as particularly dangerous and incendiary and threatening.” But the internet has allowed masses of otherwise powerless people to band together and direct their outrage at an individual, and this can be as damaging as being denounced by those at the top.

Priscilla Ruiz, of Article 19’s office for Mexico and Central America, highlighted that, although freedom of speech is not absolute, it is essential for fostering a tolerant, pluralistic and diverse democratic society. Addressing the problem of hate speech carries many challenges, the first being that there is no uniform definition of hate speech under international human rights law. “It is a broad concept that captures a wide range of expressions, and everything can be hate speech from an individual’s perspective and interpretation.” Hate speech can thus be weaponized by those in power as an excuse to silence those who oppose them, including journalists. Yet on the flip side, hate speech is real and can be used to silence individuals and, furthermore, to put their integrity and livelihood at risk.

Article 19 provides a toolkit for identifying hate speech.

To address such a complex issue, Ruiz suggested we turn to the international standards that have dealt with this question in the past and that identify which speech must be excluded from the legal protection of the right to freedom of expression (including war propaganda, incitement to genocide, advocacy of hatred that constitutes incitement to violence, and child pornography), and which speech should be protected despite being offensive, shocking or distasteful. As a defender of free speech, Ruiz warned of poorly drafted initiatives in some countries, such as Mexico, that intend to criminalize hate speech but set the ground for governmental abuse.

The Centre for Countering Digital Hate (CCDH), Imran Ahmed explained, addresses the problem through an alternative approach that does not depend on regulation. Skeptical of government actors, he suggested instead that society should negotiate the rules for what content is acceptable online. Civil society has a fundamental role in this task: pointing the finger at misinformation and hate speech and exposing the actors spreading them. This is what CCDH does, showing advertisers when their brands appear alongside hateful content and using this to press companies to address the issue on their platforms. Ahmed also criticized platforms because they are designed not to foster public debate but to keep people engaged, which results in the prioritization of controversial content. This, he argued, is a fundamental problem in the design of platforms.

“Don’t Feed the Trolls” is a practical guide by CCDH for dealing with hate speech on social media.

Murtaza Shaikh, like Ruiz, noted that social media companies have struggled to define hate speech in their community standards (the rules set by the platforms to regulate user behavior) and have not followed the approach of international human rights standards. The biggest problem with community standards, he argued, is that they lack the quality of legality: anyone who reads them should be able to understand what counts as hate speech in order to comply with the rules, but the information provided does not make that possible. For example, Facebook can decide to leave content up if it is considered “newsworthy”, even when it falls under the category of hate speech. The newsworthiness exception is so broad that it gives Facebook too much discretion in applying the rule.

Stanley and Ahmed highlighted the difference between the way people behave in offline and online forums. It seems surprising that people who are civil in real life can be so hostile online, but this is because social norms differ online and offline. Ahmed proposed that society should treat hateful content online the same way we treat it offline, with real-life consequences. “The reason why social media is such a cesspit is because we have allowed it to become a cesspit.” But he is optimistic that we can re-socialize social media if we choose to do so. Stanley backed Ahmed’s reasoning by noting that John Stuart Mill, writing in 1859, argued that free speech in England worked only because of the social norms in place.

When discussing where platforms should draw the line on hate speech, Ruiz and Shaikh underscored the importance of human rights standards, interpreted in light of the linguistic and cultural context of each country. Taking that context into account is a challenge, particularly because most internet platforms are based in Silicon Valley and have not been quick or effective enough at understanding the cultural differences of the many countries their products reach. In many Latin American countries with authoritarian tendencies, for example, governments have tried to co-opt platforms into introducing rules in their community standards that would be favorable to them. That is why, Ruiz explained, Article 19 has pushed against regulating the internet. “Internet is not the cause of all problems… what you see on the Internet is a reflection of a society that has always been producing hate speech.” Thus, we have to be very careful in what we do to prevent hate speech.

Shaikh noted that everyone (governments, social media companies, and even the UN) is trying to regulate the internet, but only a multilateral international effort will be effective, because this is a multilateral international issue. Stanley, while agreeing that international frameworks are essential, warned that Big Tech and proto-fascist governments will push back.

Finally, Ahmed argued that this is a question of willingness to act: internet platforms have the technical capacity to remove content, and they have done so in the past when content was made illegal. He also underlined that censorship, while dangerous in the hands of dictators, is necessary at times, and we should not shy away from debating what must be censored. “When it came to Myanmar, would it not have been right for Facebook to censor the government’s own propaganda of dehumanizing Muslims that led to a genocide?”

There are no easy solutions to this complex issue: it will require the ongoing participation of civil society, journalists, social media platforms, and governments to reach an adequate balance among the multiple rights at stake. At the Good Society Forum, we will continue to provide a space for this important conversation.

The full webinar is available on YouTube

by Juan Carlos Salamanca, Director of Information Technology and Policy at the Good Society Forum



The Good Society Forum is a community of change-makers around the world with a common quest to build the good society.