Introduction

Countries around the world define harmful online content differently and follow different models for regulating those harms. The United States (US) and China occupy opposite ends of the spectrum of online content regulation, with most other countries and entities, such as India, the European Union (EU), the United Kingdom (UK), and Germany, falling between the poles represented by the US and Chinese models.

First, this piece will briefly describe the spectrum of regulation for harmful online content, bracketed by the United States and China, before describing the models pursued by the aforementioned entities: India, the UK, the EU, and Germany. It will lay out how each country or entity defines illegal or unlawful content, the legal and nonlegal mechanisms in place for removing such content, and the penalty regimes for noncompliance. Finally, it will conclude by comparing and contrasting the various models before charting the future of harmful online content regulation from a broader, global perspective.

The Two Poles: The United States and China

China and the United States form the two poles of the spectrum of online content regulation worldwide. China’s online content regime is among the strictest in the world, while the United States’ is the most permissive, with the US government doing very little to regulate the internet domestically. China follows a strict liability framework, in which online platforms must “proactively monitor, filter and remove content in order to comply with the state’s law. Failing to do so places an intermediary at risk of fines, criminal liability, and revocation of business or media licenses.” The definition of what constitutes a harm in China is expansive: it covers anything that may threaten national security or the Chinese Communist Party’s control over the country, and it extends even to minute matters, such as insulting national heroes.

The United States, on the other hand, has a long tradition of permitting speech that is prohibited in other countries, including other liberal democracies. Under US First Amendment law, most speech, including hateful and violent speech, is protected unless the speaker is using “fighting words” or inciting “imminent lawless action” likely to lead directly to violence, the standard set by the US Supreme Court in Brandenburg v. Ohio (1969). The American tradition of free speech has influenced some other jurisdictions, such as India, where the courts have traditionally held that in order to ban hateful and violent speech, the government must show a proximate and direct connection between the speech and imminent violence. For the most part, however, the American approach to speech goes against the grain of the rest of the world’s. Even in other liberal democracies, the dignity of the individual, or the security and interests of the state, is held to be more important than absolute freedom of speech.

The American approach to speech has strongly shaped its domestic regulatory regime for online harms, which can be classified as a broad immunity approach. Under Section 230 of the Communications Decency Act (1996), often referred to as CDA 230, and subsequent case law, almost all online speech is legal in the United States. There are specific legislative carve-outs for sex trafficking, child pornography, and copyright violations, but otherwise, platforms where such speech may be posted, including Facebook, Twitter, and other sites, cannot be held legally liable for hateful content, defamatory speech, or breaches of privacy posted by third parties.

This has been the subject of much domestic debate, with a bipartisan consensus emerging in recent years that the United States Congress needs to amend or modify CDA 230 and the online harms regime. Members of both the Republican and Democratic parties want to narrow the scope of CDA 230 with respect to “child sexual exploitation, content moderation operations…[and] content that courts determine to be illegal.”[1] The two parties diverge, however, on how else to change CDA 230. Many on the right want to combat political censorship, while many on the left want to push platforms to stop hosting content that “‘incite[s] or engage[s] in violence, intimidation, harassment, threats, or defamation’ against others based on various demographics, including race, religion and gender identity.”[2]

At present, however, individual companies may enforce their own community standards, which has ignited much domestic controversy over what can and cannot be posted on platforms used by hundreds of millions of Americans. The standards Facebook and Twitter use to remove hate speech hew closer to global norms and are more restrictive than US law and First Amendment jurisprudence require. Online social media platforms are therefore increasingly the site of a battle between two norms: freedom of speech and freedom from hate.

The American approach to the regulation of harmful online content raises an interesting question: to what extent is government regulation of media content desirable, and would such regulation strengthen or weaken freedom of expression? The answers vary by jurisdiction.

The Rise of Conditional Immunity in Europe and Asia

The approach most countries or entities take toward the regulation of harmful online content falls between the Chinese and American models. Many, such as the UK and the EU, have traditionally followed a conditional liability approach, in which intermediaries (online platforms such as Facebook, Twitter, and Google) are “expected to remove or disable content upon receipt of notice that the content includes infringing material.”[3] Companies, not individuals, would be held liable, and only if they failed to remove this material. However, these platforms could not be “required to actively monitor and filter content…central to this approach is the idea of ‘notice and take down procedures’, which can be content or issue specific.”[4]

Lately, however, many countries are moving toward a conditional immunity approach, like the one that exists in Germany and may soon be implemented in India, the EU, and the UK. Under this approach, an online platform is granted immunity from liability only upon fulfilling specific statutory conditions, such as following local laws. Individuals who violate local laws on online platforms may also be liable for their online activities, as is now the case in India. The following sections describe how several jurisdictions regulate online content under these models.

India

India’s free speech tradition was inspired by both British and American norms. While the Indian courts have accepted some elements of American First Amendment jurisprudence, they have also retained colonial-era British laws and censorship designed to maintain public order in a diverse country with dozens of ethnic, religious, and linguistic groups. India also retains laws penalizing sedition and defamation. The Indian courts have ruled that although freedom of speech may be limited in the name of public order, security, and morality, the state must establish a “proximate” connection between speech and violence in order to ban that speech.

Currently, the regulation of online content in India is governed by the recently framed Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which interpret the Information Technology Act, 2000. Under Section 3(1)(b) of the 2021 rules, online service providers may not host various categories of content, such as content that threatens India’s national security or leads to public disorder, as defined by previous laws. An online platform that hosts such content loses its liability shield and can face criminal and civil charges for the offending content. While the 2021 rules technically allow companies to regulate themselves, their self-regulatory bodies must register with the Ministry of Information and Broadcasting, which is itself overseen by other government ministries, all of which provide “guidance” to online platforms.[5] The Indian government therefore works with service providers to implement the rules, and in extreme cases, it can block material on its own without waiting for the service providers to act. The new Indian approach to online content regulation is thus a conditional immunity approach: online platforms are immune only if they conform to the government’s rules. However, India’s regulatory regime is not as onerous as Germany’s, or the one proposed in Britain, because it does not impose a centralized statutory regime on service providers, nor does it regulate them directly.

The UK

Britain has traditionally been a bastion of free expression; in the 19th century, revolutionaries from throughout Europe fled to Britain in order to continue to speak freely. On the other hand, British law also protects the dignity of the individual, which makes it easier to sue for defamation in that jurisdiction. In short, the British tradition is favorable toward free expression, but not to the extent of the American tradition. The UK recently left the European Union and is now debating a new online harms bill in Parliament, the Online Safety Bill, 2021. If this bill were passed, OFCOM, Britain’s communications and broadcast regulator, would be granted new powers to regulate online speech.

While the goal of the bill is to enhance online safety, its definition of what constitutes a harm is broad. For example, Section 46 of the bill amorphously states that content is harmful to adults if the online platform believes that the “nature of the content is such that there is a material risk of the content having, or indirectly having, a significant adverse physical or psychological impact on an adult of ordinary sensibilities.”

The bill also proposes to grant OFCOM extensive powers, which have alarmed civil society and digital rights organizations. Under the bill, OFCOM could issue hefty fines and warning notices against companies that fail to remove illegal or terrorist content, and it could force services to use filters to identify and take down illegal content, with potential consequences for the privacy and free speech of users. This proposed legislation is therefore stricter, more proactive, and more intrusive than the statutory and regulatory regimes found in most liberal democracies. It is a clear example of a conditional immunity model, and it approaches China’s strict liability model. While the government of Britain is democratically accountable, the proposed legislation could easily be misused to violate the human rights of British citizens and suppress free expression.

The European Union

The EU sits near the center of the online regulation spectrum, though closer to the United States than to Germany. A law passed in April 2021 requires social media platforms to remove terrorist content; most other unlawful content, however, is removed by these platforms on a voluntary basis. In 2016, the European Commission and IT companies including Facebook, Microsoft, Twitter, and YouTube established a voluntary Code of Conduct on countering the spread of illegal hate speech, joined by Instagram and Snapchat in 2018 and TikTok in 2020. Currently, Germany and Austria are the only EU member states with laws against hate and crime on social media platforms. France had passed its own such legislation, the so-called Avia law, but the Conseil Constitutionnel struck it down as incompatible with the freedom of opinion.

New legislation, the Digital Services Act, is likely to become law within the next year and would create broader enforcement mechanisms and place greater liability on social media platforms to remove unlawful content. The act would replace the local rules of each member state, establish enforcement through “clear responsibilities and accountability for providers of intermediary services, and in particular online platforms, such as social media and marketplaces,” and create a systematic and elaborate sanction regime against violators.

Germany

Germany’s Network Enforcement Act (Netzwerkdurchsetzungsgesetz, or NetzDG) entered into full force on January 1, 2018. Commonly known as the “hate speech law,” NetzDG is arguably the “most ambitious attempt by a Western state to hold social media platforms responsible for combating online speech deemed illegal under the domestic law.” On the spectrum of content regulation models, NetzDG places Germany closer to China than to the rest of Europe or to India. The law requires large social media platforms to swiftly remove “illegal content,” as defined in 22 provisions of the criminal code, ranging from insults of public office to threats of violence to terrorist activities. Platforms must remove “manifestly unlawful” content within twenty-four hours of receiving a complaint, and all other unlawful content within seven days. NetzDG also has a comprehensive and relatively steep sanction regime, with fines ranging from five hundred thousand euros, for failing to name a person authorized to respond to requests for information, to fifty million euros, for flagrant, habitual, or systematic noncompliance.

Numerous human rights advocates, including Human Rights Watch, Amnesty International, and the Global Network Initiative, have deemed two other aspects of this law draconian: it provides neither judicial oversight nor a right to appeal the removal of content. Without judicial oversight, the burden falls on the social media platforms to make difficult decisions about when content is unlawful and when it should be protected. Faced with steep fines and short review periods, “companies have little incentive to err on the side of free expression.”[6] Furthermore, without judicial oversight there is no judicial remedy, including the possibility of appealing a removal decision. Platforms can thus remove lawful content, violating a person’s freedom of expression without giving that person a chance to challenge the decision. As Human Rights Watch explained, this will inexorably result in “the largest platforms for online expression becom[ing] ‘no accountability’ zones, where government pressure to censor evades judicial scrutiny.”[7]

Since NetzDG came into effect, there have been a number of controversial deletions, many involving posts that were likely lawful but appeared unlawful when taken out of context. One example was a post by the leftist politician Jörg Rupp, who wrote a satirical song in support of asylum-seekers. When a phrase of the song was taken out of context, it was deemed hate speech, the tweet was swiftly removed, and Rupp’s account was blocked. Twitter provided only a generic explanation for the removal and did not give Rupp an opportunity to respond or appeal the decision.

Comparative Analysis

The above cases demonstrate a broad range of practices and norms across the world regarding the regulation of harmful content online. While the cultures and politics of different countries may yield varying policies, it is still possible to extract some best regulatory practices from these cases. In considering best practices, we should keep in mind how best to balance the freedoms of speech, expression, and the press, on one hand, against the freedom to be safe from hatred, bigotry, and the violence stemming from them, on the other. Another consideration is how countries should divide the regulation of online speech between government regulation and private, corporate regulation. In some countries, such as Pakistan, freedom of expression may be better served by less government regulation, because corporations following global standards are less likely to censor content than the government. In other jurisdictions, however, governments may provide a safeguard against corporate overreach and arbitrary censorship. For example, civil rights groups in Germany are concerned that a recent government initiative to outsource the enforcement of its hate speech laws to corporate platforms will result in companies calling the shots, without appropriate review by the government.

The impact a regulatory model has on a country’s freedom of expression is a vital factor in considering which models are best. At one end of the spectrum is the United States, which zealously protects freedom of speech. This prioritization of free expression has produced a laissez-faire approach to online content regulation, though not a complete absence of regulation, because online platforms have established their own community standards to police content on their sites. The result is a high degree of free expression on these platforms, leading to robust public dialogue and a more informed citizenry. The American approach also protects citizens from government overreach.

Prioritizing freedom of expression has drawn its share of criticism, however. First, many companies can do as they please, without accountability. Sometimes they may restrict speech in an arbitrary fashion, contrary to the spirit of a free people. At other times, they may use algorithms to promote speech that distorts public life. A lack of regulation also allows for the spread of misinformation and hate speech, which in some cases, as in Myanmar and Ethiopia, has led to violence. It is for these reasons that most other countries are not as permissive as the United States regarding freedom of speech on the internet.

Yet the Chinese model of online regulation, at the other end of the spectrum, is not attractive to most countries. The Chinese government places a high burden of compliance on all online platforms and maintains an extensive censorship regime, sometimes nicknamed the “Great Firewall of China,” to control the internet down to its minute details. When tennis player Peng Shuai recently accused a prominent Communist Party official of sexual harassment, her posts were scrubbed from the Chinese internet within minutes, and online searches for her name yielded no results for some time. Other countries, such as Thailand, have draconian internet laws similar to China’s. Under Thailand’s 2016 Computer-Related Crime Act (CCA), courts can ban illegal and immoral content based on “requests” from a government-appointed ministry. Thai authorities have been particularly strict in enforcing the country’s lèse-majesté laws and have handed out extreme sentences for insulting the monarchy. Thus, while the Chinese approach certainly prevents harmful and hateful content from being posted, it is also antithetical to democracy, and it is unlikely to be replicated by democratic countries.

Most democratic countries, including those in Europe and Asia, are therefore implementing regulatory models that lie between the American and Chinese poles. This has led to a general push toward online content regulations that hold social media platforms and intermediaries responsible for at least some kinds of online content. While unique cultural and socio-political factors make a blanket model unrealistic, a successful model would strike the right balance between the prompt removal of harmful and dangerous content and the preservation of open speech and dialogue. Such a model would focus primarily on regulating the most harmful content, such as content that promotes or assists terrorist activity, while leaving other content, such as hate speech, for the social media platforms to self-regulate through community standards. At the very least, these regulations should not place onerous requirements on social media platforms, such as inflexible deadlines that force platforms to erroneously remove content that may be lawful in order to remain in compliance.

The EU may currently provide a good example of this balance: strict regulations govern the removal of terrorist-related content, while other content is regulated through a voluntary Code of Conduct and the community standards established by social media platforms. However, the Digital Services Act, if passed, would establish a more rigid and restrictive regulatory system.

Conclusion

Ultimately, each country must decide who is best equipped to effectively regulate online content. The United States has unambiguously decided that it is not appropriate for the government to have this control. John Marshall Harlan II, a justice of the US Supreme Court, put it eloquently in the landmark case Cohen v. California (1971):

The constitutional right of free expression is powerful medicine in a society as diverse and populous as ours. It is designed and intended to remove governmental restraints from the arena of public discussion, putting the decision as to what views shall be voiced largely into the hands of each of us, in the hope that use of such freedom will ultimately produce a more capable citizenry and more perfect polity and in the belief that no other approach would comport with the premise of individual dignity and choice upon which our political system rests.[8]

But a one-size-fits-all model is naïve and unrealistic. Where on the spectrum a country should lie depends on its idiosyncrasies, including its unique socio-political, religious, and cultural contexts. Nevertheless, as a country moves toward the Chinese pole, values such as freedom of expression diminish, while a move toward the American pole means a move away from regulation, with all that entails. A successful model would strike the right balance between the prompt removal of dangerous content, using narrowly tailored definitions of harm, and the protection of free speech and open dialogue. Democratic countries should thus resist hastily created regulations that are “vague, overbroad, and [turn] private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal.”[9]

This paper has been prepared by Akhilesh (Akhi) Pillalamarri and Cody Stanley under the supervision of Professor Arturo Carrillo, on behalf of Civil and Human Rights Law (CHRL) Clinic at the George Washington University Law School.

Notes:

[1] Anand, Meghan et al. “All the Ways Congress Wants to Change Section 230.” Slate. 23 March 2021.

[2] Bedell, Zoe and John Major. “What’s Next for Section 230? A Roundup of Proposals.” Lawfare. 29 July 2020.

[3] “Trends in Censorship by Private Actors.” Media Defence. Retrieved 1 December 2021.

[4] Ibid.

[5] Ahooja, Raghav and Torsha Sarkar. “How (Not) to Regulate the Internet: Lessons From the Indian Subcontinent.” Lawfare. 23 September 2021.

[6] “Germany: Flawed Social Media Law.” Human Rights Watch. 14 February 2018.

[7] Ibid.

[8] Cohen v. California, 403 U.S. 15, 15, 91 S. Ct. 1780, 1783 (1971).

[9] “Germany: Flawed Social Media Law.” Human Rights Watch. 14 February 2018.

Author Biographies:

Akhilesh (Akhi) Pillalamarri is a student-attorney at the George Washington University Law School’s Civil and Human Rights Law (CHRL) Clinic and the Moderator-in-Chief of the International Law and Policy Brief (ILPB). His interests include international & comparative, national security, and technology law. Follow him on Twitter @AkhiPill.

Cody Stanley is a student-attorney at the George Washington University Law School’s Civil and Human Rights Law (CHRL) Clinic and is a legal volunteer with the International Refugee Assistance Program (IRAP). His interests include public interest, corporate accountability, and refugee and asylum law.