California's Transparency in Frontier AI Act

California's law requires large frontier AI developers to publish documentation of their safety practices and to report safety incidents

Fact-Checks

12 results
Jan 26, 2026

How do major AI model providers (OpenAI, Anthropic, Meta) respond to subpoenas and preservation requests in practice?

Major new and pending laws are building a legal framework that can compel disclosures about AI training data and create subpoena pathways for copyright owners—most explicitly the bipartisan proposed i...

Jan 20, 2026

How have regulators and platforms responded to AI-generated medical scams like Neurocept?

Regulators and platforms have responded to AI-enabled medical scams with a mix of sharper enforcement, state-level disclosure mandates, updated agency guidance, and nascent platform accountability mea...

Jan 23, 2026

How do regional laws influence content moderation policies for AI video generators like Grok?

Regional laws are already forcing AI video generators to bake local legal constraints into their moderation systems, producing stricter takedown flows, forbidden-content filters, and provenance-labeling requireme...

Jan 13, 2026

Have law enforcement agencies used AI-generated tips to obtain warrants for CSAM investigations?

There is clear, widespread reporting that AI is being used to create and proliferate child sexual abuse material (CSAM) and that law enforcement is struggling to adapt, but the sources provided do not...

Jan 26, 2026

Are there published transparency reports or legal challenges that show how Duck.ai or its providers handled government demands for chat data?

Published reporting and company pages in the provided corpus show DuckDuckGo has released privacy-focused marketing, a privacy/terms page, and help documentation stating Duck.ai chats can be anonymous, but there...

Jan 15, 2026

What actions have regulators taken against fake‑endorsement health ads using AI‑generated videos?

Regulators at multiple levels have begun a coordinated push to stop AI‑generated “fake doctor” or celebrity endorsement ads for health products, using enforcement operations, new disclosure laws, guid...

Jan 13, 2026

How do California’s SB 53 and SB 942 create reporting or transparency obligations that affect moderation of sexual content involving minors?

California’s SB 53 and SB 942 insert new transparency and user-facing provenance obligations into the state’s fast-evolving AI and platform rulebook that intersect with how platforms detect, label, an...

Jan 6, 2026

What legal or platform remedies exist to stop paid ads that impersonate public figures in the U.S.?

A patchwork of state laws, pending federal proposals, platform transparency rules and takedown mechanisms now offer the principal remedies in the U.S. to stop paid ads that impersonate public figures,...

Feb 4, 2026

What technical and policy measures are platforms using to detect and block deepfake ads for supplements, and how effective are they?

Platforms combine automated forensics, AI classifiers and ad-review policies with human moderation and disclosure rules to try to detect and block deepfake ads, including those pushing supplements, but their tools lag behind ...

Jan 18, 2026

How have regulators and consumer‑protection laws adapted to AI‑enabled fake endorsements?

Regulators have moved from warning to action: U.S. federal agencies—most visibly the Federal Trade Commission—have updated rules and launched enforcement against AI-enabled fake endorsements and revie...

Jan 13, 2026

What legal protections or liabilities exist for AI companies that monitor or 'report' suspected CSAM in user prompts or chat logs?

AI companies that scan user prompts or chat logs for suspected child sexual abuse material (CSAM) sit between clear statutory reporting duties and growing state-level AI and transparency requirements:...

Jan 7, 2026

Have any platforms disclosed automated reporting pipelines that send user prompts or chatlogs to law enforcement?

Public disclosures show major tech companies describe how they respond to legal demands and in limited cases automatically report specific illegal content (for example, suspected child sexual abuse ma...