California's Transparency in Frontier AI Act

California's law requires leading AI developers to publish documentation of their safety practices and to report safety incidents

Fact-Checks

8 results
Jan 15, 2026
Most Viewed

What actions have regulators taken against fake‑endorsement health ads using AI‑generated videos?

Regulators at multiple levels have begun a coordinated push to stop AI‑generated “fake doctor” or celebrity endorsement ads for health products, using enforcement operations, new disclosure laws, guid...

Jan 13, 2026
Most Viewed

Have law enforcement agencies used AI-generated tips to obtain warrants for CSAM investigations?

There is clear, widespread reporting that AI is being used to create and proliferate child sexual abuse material (CSAM) and that law enforcement is struggling to adapt, but the sources provided do not...

Jan 13, 2026
Most Viewed

How do California’s SB 53 and SB 942 create reporting or transparency obligations that affect moderation of sexual content involving minors?

California’s SB 53 and SB 942 insert new transparency and user-facing provenance obligations into the state’s fast-evolving AI and platform rulebook that intersect with how platforms detect, label, an...

Jan 6, 2026

What legal or platform remedies exist to stop paid ads that impersonate public figures in the U.S.?

A patchwork of state laws, pending federal proposals, platform transparency rules and takedown mechanisms now offer the principal remedies in the U.S. to stop paid ads that impersonate public figures,...

Jan 20, 2026

How have regulators and platforms responded to AI-generated medical scams like Neurocept?

Regulators and platforms have responded to AI-enabled medical scams with a mix of sharper enforcement, state-level disclosure mandates, updated agency guidance, and nascent platform accountability mea...

Jan 18, 2026

How have regulators and consumer‑protection laws adapted to AI‑enabled fake endorsements?

Regulators have moved from warning to action: U.S. federal agencies—most visibly the Federal Trade Commission—have updated rules and launched enforcement against AI-enabled fake endorsements and revie...

Jan 13, 2026

What legal protections or liabilities exist for AI companies that monitor or 'report' suspected CSAM in user prompts or chat logs?

AI companies that scan user prompts or chat logs for suspected child sexual abuse material (CSAM) sit between clear statutory reporting duties and growing state-level AI and transparency requirements:...

Jan 7, 2026

Have any platforms disclosed automated reporting pipelines that send user prompts or chatlogs to law enforcement?

Public disclosures show major tech companies describe how they respond to legal demands and in limited cases automatically report specific illegal content (for example, suspected child sexual abuse ma...