
What are the main criticisms of Bill C-63 in Canada?

Checked on November 11, 2025
Disclaimer: Factually can make mistakes. Please verify important info or breaking news.

Executive Summary

Bill C-63 draws sustained criticism for broad speech restrictions, severe criminal penalties, and novel “pre‑crime” powers that civil‑liberties groups, opposition parties, and legal scholars warn will chill lawful expression and risk wrongful punishment [1] [2]. Critics also argue the bill’s regulatory approach to platforms is either too intrusive—forcing over‑moderation and mass monitoring—or too narrow and self‑regulatory, failing to address algorithmic design and adult harms, leaving enforcement and proportionality unresolved [3] [4] [5].

1. Why free‑speech alarms are central: the chilling effect everyone fears

A central contention is that Bill C‑63’s new offences and regulatory duties give governments and regulators tools that are vague enough to suppress controversial but lawful speech, prompting platforms to over‑remove content to avoid penalties. Civil‑liberties organizations and legal experts repeatedly point to the bill’s undefined categories such as “legal but harmful,” the expanded hate‑speech constructs, and the ability to stack a new hate offence on existing crimes as creating uncertainty that incentivizes blanket takedowns rather than nuanced adjudication [1] [6] [2]. Opponents warn this chilling effect will not only deter extremists but will also silence political dissent, satire, and reporting on sensitive subjects, especially when administrative bodies rather than courts make removal or penalty decisions [1] [7].

2. Criminal penalties and “pre‑crime” orders: punishment before guilt?

Critics single out the bill’s criminal provisions for producing disproportionate outcomes, including a maximum of life imprisonment for hate‑motivated offences and the introduction of recognizance‑style orders that restrict liberty before any conviction. The “hate‑crime peace bond” or section 810.012 model allows judges, with provincial consent, to impose curfews, electronic monitoring, or other conditions on people judged likely to commit an online hate offence—effectively punishing prospective risk rather than proven conduct, which civil libertarians say undermines fundamental criminal‑law principles [1] [2]. Legal commentators and opposition politicians argue these features create perverse incentives to accept plea bargains and raise real risks of wrongful confinement or conditional restrictions imposed without the procedural safeguards of a criminal trial [2] [8].

3. Human rights complaints and administrative sanctions: a tribunal with teeth

Reintroducing a Human Rights Tribunal pathway for online hate complaints is another flashpoint: the tribunal could levy fines of up to $70,000, and critics warn that the complaint process will flood the system and invite anonymous or politically motivated cases. The bill revives a contested enforcement channel in which administrative remedies, not criminal courts, carry heavy financial and reputational penalties—creating a scenario where non‑judicial actors decide complex speech disputes without the evidentiary and procedural protections of the criminal process [1] [7]. Observers warn this could divert commission resources, encourage strategic complaints, and substitute tribunal judgments for public debate and judicial determinations [7].

4. Platforms, algorithms, and the argument the bill misses the point

A distinct criticism from technology and policy experts is that Bill C‑63 focuses too heavily on content categories and enforcement while ignoring platform design drivers—notably algorithmic feeds that amplify harm and addict users. Critics propose that without rules requiring opt‑outs of recommendation algorithms or limits on data‑driven amplification, the bill will leave the root causes of online harm intact and compel platforms into blunt moderation practices that do not reduce systemic risk [4] [5]. Industry critics and some civil‑society actors thus argue the legislation is simultaneously overbroad on expression and underbroad on technological fixes, creating a policy mismatch that cannot be solved by takedowns alone [4] [5].

5. Enforcement practicality, political framing, and competing agendas

Beyond legal theory, there are pragmatic critiques about enforceability against global platforms, unclear standards for “harm,” and the potential politicization of regulatory powers. Proponents frame C‑63 as protecting children and marginalized groups; opponents highlight how those same powers could be used to suppress dissent or become enforcement theatre that fails to curb real harms. Various stakeholders—civil‑liberties NGOs, conservative lawmakers, tech policy centers—bring differing priorities: some stress liberty and due process, others demand stronger protections for victims and say the bill does not go far enough [3] [5] [2]. These competing agendas explain why critics simultaneously call for parts of the bill to be severed, narrowed, or redesigned to focus on algorithmic reform and proportional, judicially‑anchored remedies [9] [7].

Want to dive deeper?
What is the main purpose of Bill C-63 in Canada?
Who are the key supporters of the Online Harms Act?
How might Bill C-63 impact online platforms in Canada?
What specific provisions in Bill C-63 raise privacy concerns?
Has Bill C-63 faced amendments or delays in 2024?