How have app stores and messaging services responded to reports of ‘nudify’ apps and channels distributing AI-generated sexual content?
Executive summary
App stores have taken a patchwork approach: Apple and Google have both removed some “nudify” apps in past waves of enforcement, yet recent watchdog reporting found dozens of offending apps still hosted on their stores, prompting fresh demands from lawmakers to pull problematic offerings such as X/Grok [1] [2]. Messaging and social platforms have moved more defensively, restricting or disabling some AI image features for many users, but critics and regulators say those steps are inconsistent and too slow given the volume of non‑consensual sexual imagery being generated and shared [3] [4] [5].
1. App stores: selective takedowns, new rules, and a credibility gap
Apple and Google have policies forbidding overtly sexual or non‑consensual content, and in previous years both removed a number of “nudify” and AI image‑editing apps after media investigations. Yet watchdogs recently found dozens of such apps still available across both stores, a finding that has fueled accusations of uneven enforcement and prompted senators to ask Apple and Google to remove X and Grok from their stores [1] [6] [2] [5].
2. Platform policy shifts: clearer developer obligations but enforcement lagging
Both companies have updated guidelines and review processes that, on paper, tighten age controls and require transparency about sharing data with third‑party AI systems; Apple formalized these changes in revised App Review Guidelines in late 2025, alongside related policy updates requiring disclosure and age‑gating for AI features [7] [8] [9]. Those rule changes give regulators and critics a lever for action, but reporting shows that apps which skirt the rules or relist under new names continue to surface, undercutting their practical effect [1] [6].
3. Messaging services and AI chatbots: immediate throttles, partial feature removals
When Grok and other AI tools were shown generating large volumes of sexualized images, including apparent images of minors, platforms and services cut back capabilities: xAI/Grok turned off or limited image generation for most users and moved to restrict editing of user‑uploaded images after global outcry [3] [4]. That reactive clampdown illustrates a common pattern: platforms remove functionality once it becomes toxic in public rather than preemptively restricting risky model behaviors [3] [10].
4. Lawmakers and regulators: pressure, requests, and looming liability
Democratic senators demanded that Apple and Google delist X and Grok over the spread of non‑consensual sexual images, and regulators in the UK, EU and other jurisdictions have publicly condemned AI‑generated sexualized imagery; the EU has already ordered the retention of internal documents related to Grok under the Digital Services Act [5] [1] [10]. Meanwhile, new U.S. federal and state laws, including the TAKE IT DOWN Act and state bans that extend to AI‑manipulated imagery, set deadlines for notice‑and‑removal processes and create potential platform liability beginning in mid‑2026 [11] [12].
5. Tensions and incentives: app store revenue, moderation costs, and geopolitical flags
The economics complicate enforcement: the apps identified in watchdog reporting collectively account for large download and revenue figures, from which app stores profit through commissions on in‑app purchases, a fact critics cite when arguing for more aggressive removals [2]. Reporting that some offending apps originated in particular countries has also raised national‑security and provenance concerns, which some outlets use to frame the problem as a security issue as well as a safety one [13]. Platform defenders point to technical constraints, such as distinguishing malicious use from legitimate creativity and policing freshly generated imagery at scale, as partial explanations for imperfect responses [14] [15].
6. Bottom line: partial progress, major gaps, and what to watch next
App stores and messaging services have moved from denial to selective enforcement: guidelines have been tightened, some apps removed, and high‑profile AI features curtailed. Yet reporting and regulators say the measures remain uneven and reactive, even as legal regimes and mandatory notice‑and‑removal obligations are about to ratchet up pressure on platforms to act faster and more comprehensively [1] [3] [11]. The coming months will test whether policy changes, enforcement resources, and new legal liabilities close the gap between rules on paper and safety in practice, or whether bad actors will keep exploiting model outputs and distribution loopholes faster than platforms can stamp them out [6] [10].