
Unilever Foods Innovation Centre

Research facility in Wageningen, the Netherlands

Fact-Checks

8 results
Dec 1, 2025
Most Viewed

How do automated tools identify and flag CSAM content on the internet?

Automated CSAM detection on internet platforms uses two main technical pillars: hash‑matching to find previously verified content (e.g., PhotoDNA, perceptual/fuzzy hashes) and machine‑learning classifiers...
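The hash‑matching pillar mentioned in this summary can be illustrated with a toy perceptual hash. PhotoDNA itself is proprietary, so the sketch below uses a simple average‑hash on a small grayscale image (all values illustrative) to show why near‑duplicates remain within a small Hamming distance even though a cryptographic hash would change completely.

```python
# Illustrative sketch of perceptual ("fuzzy") hash matching.
# Not PhotoDNA: a minimal average-hash over a 2D grayscale image.

def average_hash(pixels):
    """Return a bitstring: 1 where a pixel is above the image mean, else 0."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

# Two near-identical 4x4 images (one pixel slightly brightened).
img_a = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [200, 200, 10, 10],
         [200, 200, 10, 10]]
img_b = [[12, 10, 200, 200],   # slightly altered copy of img_a
         [10, 10, 200, 200],
         [200, 200, 10, 10],
         [200, 200, 10, 10]]

h_a, h_b = average_hash(img_a), average_hash(img_b)
print(hamming(h_a, h_b))  # small distance -> likely the same image
```

Real systems compare such hashes against large verified lists and treat a distance below a tuned threshold as a match.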

Jan 29, 2026
Most Viewed

How do commercial CSAM-detection tools (Thorn, Hive) work and how effective are they on AI-generated images?

Commercial CSAM‑detection products from Thorn and Hive combine traditional hash‑matching against known illicit files with machine‑learning classifiers that operate on image embeddings and text classifiers to surface...

Jan 29, 2026
Most Viewed

What measures is Grok / xAI taking to detect CSAM generated by AI?

xAI says it has added multiple technical and policy layers to detect and block AI-generated child sexual abuse material (CSAM), including semantic intent analysis, a visual classifier for biometric markers, e...

Jan 20, 2026

How does X/xAI describe its process for reporting CSAM to NCMEC and what logs exist of those reports?

There are no provided sources that describe X or xAI’s CSAM reporting workflow specifically; available reporting instead documents how electronic service providers (ESPs) typically detect, report and ...

Feb 4, 2026

Has X implemented AI classifiers to detect novel or AI-generated CSAM beyond hash matching?

There is clear, repeated reporting that industry vendors and non‑profits have developed AI classifiers able to flag novel and AI‑generated CSAM beyond traditional hash matching. However, none of the provided documents say ...

Feb 2, 2026

What mechanisms and quality controls do platforms use to reduce false positives before reporting CSAM to NCMEC?

Platforms rely on layered technical and human checks—hash-based matching to known CSAM, machine‑learning classifiers for novel material, and human moderation/triage—plus engineered reporting workflows and...
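The layered checks this summary describes can be sketched as a simple triage function: a hash match escalates directly, a classifier score alone only queues the item for human review, and nothing is reported without passing the final gate. All names, hash values, and thresholds below are illustrative assumptions, not any platform's actual API.

```python
# Hypothetical sketch of a layered moderation pipeline that reduces
# false positives before a report is filed. Values are placeholders.

KNOWN_HASHES = {"a1b2c3"}      # verified hash list (illustrative)
CLASSIFIER_THRESHOLD = 0.9     # assumed ML triage threshold

def triage(item_hash, classifier_score, human_confirmed=None):
    """Return one of: 'report', 'human_review', 'no_action'."""
    if item_hash in KNOWN_HASHES:
        # Match against verified material: high confidence, report.
        return "report"
    if classifier_score >= CLASSIFIER_THRESHOLD:
        # Novel material flagged by ML: require a human decision
        # before filing, keeping classifier false positives out.
        if human_confirmed is None:
            return "human_review"
        return "report" if human_confirmed else "no_action"
    return "no_action"

print(triage("zzz999", 0.95))        # queued for a moderator
print(triage("zzz999", 0.95, False)) # moderator rejected: no report
```

The design point is that only the highest-confidence signal (a verified hash match) bypasses human review; probabilistic signals always pass through a second check.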

Jan 2, 2026

How does the National Center for Missing & Exploited Children handle reports of AI‑generated CSAM and which companies filed reports in 2024–2025?

The National Center for Missing & Exploited Children (NCMEC) treats AI‑generated child sexual abuse material (AIG‑CSAM) as CSAM, routing reports through its CyberTipline, attempting to triage, identify...

Dec 15, 2025

How are major platforms detecting and moderating AI-generated CSAM in 2025?

Platforms in 2025 combine legacy hash‑matching with new AI classifiers, third‑party moderation vendors, industry signal‑sharing and legal reporting obligations to detect and remove AI‑generated CSAM; ...