Is the EU investigation of X focused on users or the platform?
Executive summary
The EU’s inquiries into X (formerly Twitter) are aimed primarily at the platform — its design choices, transparency, ad practices, verification systems and AI tools — under obligations set by the Digital Services Act, not at criminally prosecuting individual users [1] [2]. That said, the probes do engage with how user content and behavior create risks (illegal content, targeted ads, child safety), so the regulatory question is whether X’s systems create or fail to mitigate those risks, not whether individual posters should be policed [3] [4].
1. Platform obligations under the Digital Services Act: enforcement against X, not users
The EU launched its first DSA probe into X in December 2023 and has framed its investigations around platform-level duties — transparency of ads, access for researchers, verification practices and systemic risk management — leading to formal findings and fines aimed at the company itself rather than at individual account-holders [1] [2] [5]. The Commission’s non-compliance decision and subsequent €120m fine explicitly cite deceptive blue ticks, opaque ad repositories and blocked researcher access as failures by the service provider to meet DSA requirements — obligations imposed on platforms as actors, not on ordinary users [2] [5] [6].
2. Content and user-harm issues are framed as systemic platform failures
When the EU raises alarms about illegal or harmful content — from posts after Hamas’ attacks to sexualized deepfakes created with X’s Grok AI — its concern is whether X’s systems, policies and remediation processes adequately prevent or respond to those harms, a regulatory lens that treats user-generated content as evidence of platform-level compliance gaps [1] [4] [7]. The Commission’s inquiries have requested internal documents, inspections and fixes for tools such as Grok because the concern is that the platform’s features enable, or fail to stop, widespread illegal content; the aim is not to sanction individual posters through EU action [3] [4].
3. Data-driven advertising complaints straddle user targeting and platform practice
Civil society complaints lodged in 2025 allege that X enabled targeted advertising using sensitive categories inferred from users’ behaviour — a claim directed at the platform’s ad delivery systems and its auditing and reporting failures rather than at individual advertisers or users’ private choices [8] [9]. EU regulators were urged to investigate whether X’s ad systems violate the DSA and GDPR by enabling exclusionary or sensitive-data-based targeting, and the evidence cited by groups like AI Forensics centers on what the platform allowed advertisers to do and how it disclosed (or hid) that data, underscoring a platform-centric enforcement approach [8] [9].
4. Political pushback highlights perceptions that the probe targets speech, but evidence points to technical and transparency rules
High-profile critics in the U.S. and some Italian politicians have framed EU action as an attack on free speech or on U.S. companies, but EU officials and multiple analyses stress that the enforcement relates to technical compliance, transparency and consumer protection obligations under the DSA — for example, the ‘deceptive’ verification system and the ad transparency failures cited in rulings [1] [10] [6]. Observers such as TechPolicy.Press argue the EU’s fines are about enforcing “boring” operational rules rather than dictating what users may say, though politically charged narratives claiming censorship remain prominent [10] [11].
5. Bottom line: the EU targets the platform’s systems and governance; user conduct is implicated only insofar as it reveals systemic failures
Across Commission actions, industry reporting and civil society complaints, the pattern is consistent: the EU investigates X for how its policies, algorithms, verification and ad systems create risks or fall short of transparency requirements; remedies and fines are directed at the company and its compliance measures, while content- or user-level harms are the triggers that reveal platform-wide shortcomings [5] [2] [3]. Where national prosecutors or communications regulators step in over criminal content or child safety, those are parallel actions addressing illegal user posts, but the Commission’s DSA enforcement remains squarely focused on the platform’s duties and governance model [7] [4].