Fact check: What are the legal implications of creating and sharing fake videos of deceased individuals?
Executive Summary
Creating and sharing fake videos of deceased individuals raises a mix of civil, criminal, and regulatory risks tied to posthumous image rights, consent, defamation, and emerging AI-specific rules. Current debates emphasize gaps in existing law, the varied national responses seen in 2025, and pressing enforcement and ethical challenges for platforms, families, and creators.
1. Key claims drawn from the reporting — tensions between grief tech and legal protection
Across the sources, a central claim is that AI-driven “grief tech” and deepfakes complicate traditional legal frameworks because they recreate deceased persons’ likenesses without clear consent mechanisms, posing emotional and reputational harms to families [1] [2]. Analyses argue that national proposals—such as Brazil’s PL 3.608/23—seek to address posthumous image rights but may be insufficient or incomplete, leaving unanswered questions about standing, remedies, and scope [2] [3]. Another claim stresses that platforms and existing laws are lagging, producing a patchwork response rather than a comprehensive solution [4] [5].
2. A global patchwork: how countries are reacting and why the timing matters
Recent reporting shows countries adopting varied approaches in 2025: the EU's AI Act and Denmark's new protections treat likeness and synthetic media as legally consequential, while US states such as Washington and Pennsylvania have enacted targeted deepfake statutes focused on fraudulent or harmful intent [5] [6]. India's legal framework remains fragmented, relying on scattered IT Act and penal code provisions that leave gaps in definition and enforcement [4]. This divergence creates legal uncertainty for cross-border sharing of synthetic media, complicating civil claims and criminal prosecutions that depend on where content is created, hosted, or viewed [5] [4].
3. Consent, dignity, and the evolving notion of posthumous image rights
Analysts emphasize that consent is the legal and ethical fulcrum: absent express permission given during the person's lifetime or clear statutory posthumous rights, families may struggle to prevent uses that they find offensive or exploitative [3] [1]. The concept of "likeness" is under pressure from new media that can convincingly reproduce voice and appearance, prompting calls for legislation that recognizes posthumous personality interests and protects the deceased's dignity and the family's privacy [3]. Observers caution that piecemeal bills risk leaving vulnerable groups unprotected and enabling commercial exploitation by grief-tech providers [2] [1].
4. Civil liability and criminal exposure: what creators and distributors face
Sources outline likely legal pathways: civil claims based on unauthorized use of image, invasion of privacy, or emotional distress; defamation in the rarer cases where a fabricated video injures a living person's reputation, since most jurisdictions do not recognize defamation claims on behalf of the dead; and criminal liability where deepfakes facilitate fraud or impersonation [7] [8] [6]. Differences in statutory language mean that intent and harm often determine liability, with several jurisdictions penalizing fraudulent or malicious uses more heavily than purely expressive or artistic ones [6] [4]. Plaintiffs face evidentiary and jurisdictional hurdles when content spreads across borders and platforms [4] [8].
5. Platforms, enforcement bottlenecks, and the limits of takedown remedies
Analysts highlight that platforms act as crucial intermediaries but face inconsistent obligations: some new laws impose takedown duties or liability, while others leave responsibility vague, relying on notice-and-takedown mechanisms that are slow and uneven [6] [4]. Measures such as the US TAKE IT DOWN Act bolster protections against non-consensual intimate imagery, but enforcement remains challenging: automated detection struggles with false positives and negatives, and victims often lack the resources or legal standing to demand swift removal [6] [4]. The consequence is persistent circulation despite legal norms.
6. Conflicting viewpoints and potential agendas among stakeholders
The landscape shows competing priorities: consumer-protection advocates and grieving families press for robust posthumous dignity laws, while tech firms and creators warn against overly broad restrictions that could chill innovation or artistic expression [1] [2]. National authorities pushing rapid regulation—such as Denmark and EU regulators—frame strict rules as rights-protecting, while some industry analyses emphasize operational burden and transnational enforcement costs [5] [4]. These positions reflect institutional agendas: legislators seek ready-to-enforce rules; platforms seek predictable standards; families seek emotional remedy and control [3] [5].
7. Practical takeaways: what families, creators, and policymakers should watch next
Given the fragmented 2025 picture, the immediate practical point is that risk turns on jurisdiction, consent, and intent. Creators should secure clear permission, platforms should adopt transparent policies, and families should document objections and pursue statutory or takedown routes where available [2] [6]. Policymakers must refine definitions of likeness, clarify posthumous standing, and harmonize cross-border enforcement to prevent forum-shopping by bad actors. Observers will watch upcoming legislative refinements and court decisions that test how existing defamation, privacy, and IP laws apply to synthetic recreations [3] [4].
8. What to expect next: legal developments likely to shape outcomes
Analyses predict continued legislative activity and litigation through late 2025 and beyond as states and the EU refine AI and image-rights rules, and courts begin applying traditional torts to synthetic media cases [5] [6]. Expect a surge in strategic lawsuits and test cases that will clarify liability thresholds—particularly around intentional deception, commercial exploitation, and intimate deepfakes—and pressure platforms to develop faster remediation systems. The direction of these cases will determine whether the law favors broad protection of posthumous dignity or prioritizes expressive and commercial uses of AI-generated likenesses [2] [4].