Who decides whether speech counts as hate speech?
Executive summary
Who decides whether speech counts as “hate” depends on the setting: governments, courts, international bodies, tech platforms and community actors all play roles, and their rules differ sharply. There is no single global legal definition; U.S. law, for example, recognizes no legal category of “hate speech” and protects most such expression under the First Amendment [1] [2], while UN and European instruments use broader policy definitions and ask platforms and states to act [3] [4].
1. The legal referee: courts and governments — different rules in different countries
Nation-states and their courts decide what speech is illegal within their jurisdictions. Some countries criminalize public incitement to hatred or violence, while others (notably the United States) decline to treat “hate speech” as a separate legal category and instead restrict only narrow unprotected classes such as incitement, true threats or harassment [5] [2] [4]. Because international human rights law offers no single binding definition, national criminal laws vary widely and determine enforcement in practice [6] [7].
2. The U.S. exception: constitutional protection alters who can “decide”
In the United States, the government’s power to prohibit speech is constrained by the First Amendment; courts have repeatedly made clear that most offensive or hateful expression remains protected, so judges, not legislatures, ultimately determine whether particular statements fall into unprotected categories [1] [2]. As a result, private actors and institutions (see below) frequently become the de facto enforcers of norms that the state cannot constitutionally police [8].
3. International policy makers and human-rights bodies set norms, not uniform law
The United Nations and regional bodies publish working definitions and strategies: the UN Strategy and Plan of Action and Council of Europe guidance, for example, describe hate speech as communication that attacks or discriminates on identity grounds, while explicitly acknowledging that there is no universal legal definition and that many forms of expression remain protected under freedom-of-expression standards [3] [6] [4]. Those instruments shape state policy, funding and cooperative frameworks without replacing domestic law.
4. Platforms and private moderators: the everyday deciders online
Social media companies craft content policies that identify protected characteristics and operational definitions of hate speech; platforms then enforce those rules at scale, using combinations of automated detection and human review. Facebook/Meta’s public discussion highlights how platforms choose which terms or contexts to remove and why they sometimes adapt decisions to local conflicts [9]. Governments are increasingly pushing platforms to disclose moderation practices or meet removal deadlines, shifting the practical locus of decision-making toward private companies under public pressure [10] [11].
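To make the “automated detection plus human review” pattern concrete, here is a minimal, purely illustrative sketch of how such a triage step could be structured. Every name in it, including the Post type, the classifier score, and the thresholds, is a hypothetical assumption for illustration; actual platform pipelines are proprietary, far larger, and policy-specific.

```python
# Toy sketch of tiered content triage combining automated detection with
# human review. All names and thresholds are hypothetical assumptions,
# not any platform's real system.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    classifier_score: float  # assumed model output in [0, 1]

REMOVE_THRESHOLD = 0.95  # assumed: high confidence -> automatic removal
REVIEW_THRESHOLD = 0.60  # assumed: uncertain band -> human review queue

def triage(post: Post) -> str:
    """Route a post based on an automated hate-speech score."""
    if post.classifier_score >= REMOVE_THRESHOLD:
        return "remove"        # automated action, typically appealable
    if post.classifier_score >= REVIEW_THRESHOLD:
        return "human_review"  # escalate to a human moderator
    return "keep"              # no action taken

if __name__ == "__main__":
    samples = [Post("a1", "...", 0.97), Post("a2", "...", 0.70), Post("a3", "...", 0.10)]
    for p in samples:
        print(p.post_id, triage(p))
```

The tiered thresholds are one common way to combine the two mechanisms: content the model is confident about is handled automatically, while the uncertain middle band is routed to human reviewers rather than removed outright. This is only one possible design, offered to illustrate why policy choices about definitions and thresholds, not just technology, determine what gets removed.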
5. Investigators, prosecutors and police: when speech becomes crime
When speech is alleged to meet a country’s criminal threshold, such as incitement to violence or a bias-motivated criminal act, law enforcement agencies and prosecutors evaluate evidence and intent and bring charges under hate-crime or public-order statutes. Federal hate-crime frameworks (e.g., U.S. DOJ guidance) and national statutes define when speech-linked conduct becomes prosecutable, including sentencing enhancements for bias motivation [12] [13]. Who decides at that point is shaped by statutory definitions and prosecutorial discretion.
6. Universities, employers and community standards: institutional adjudicators
Colleges, workplaces and civic bodies maintain codes that may ban “hate speech” even where the state does not; those institutions create their own processes for complaints and discipline, sometimes constrained by local law (in the United States, for example, campus speech codes must comply with state and constitutional limits) [8] [14]. These actors often balance inclusion and free-expression goals differently from courts or platforms.
7. Civil society, journalists and the public: norm-makers and arbiters of harm
Advocacy groups, journalists and social movements influence definitions and pressure decision-makers; they frame certain rhetoric as hate, document harms and lobby for legal or platform changes. International NGOs and research bodies warn that online hate can precede violence and press for preventive measures—an argument that underpins many policy efforts even where legal bans are limited [15] [16].
8. Why there is persistent disagreement — and where to look for clarity
Disagreement stems from three facts reported across the sources: definitional vagueness, since no single accepted legal definition exists globally [6] [3]; divergent legal traditions, since some countries criminalize broad categories while others protect speech robustly [5] [17]; and competing values, weighing social peace and the protection of vulnerable groups against free expression and the risk of government overreach [18] [2]. To know who decides in a particular case, consult the applicable domestic law, platform policy or institutional code cited by the reporting or the convening body [4] [9].
Limitations: the available sources do not discuss detailed case law beyond the examples noted above, nor do they cover every national legal regime; readers should consult jurisdiction-specific statutes, platform terms and recent court rulings for concrete determinations [1] [10].