Keep Factually independent

Whether you agree or disagree with our analysis, these conversations matter for democracy. We don't take money from political groups - even a $5 donation helps us keep it that way.

Loading...Time left: ...
Loading...Goal: $500

Fact check: Does ChatGPT share user data with third-party companies?

Checked on October 3, 2025

Executive Summary

OpenAI's stated policy is that ChatGPT does not proactively share user data with third-party companies, and users can control data retention settings; however, several legal orders, regulatory findings, and disclosed vulnerabilities have created credible pathways through which user chats could be retained or exposed to outsiders. Recent developments include a U.S. court order in New York Times v. OpenAI that may force retention of chat logs, security research showing plugin and mode-level exploits that can exfiltrate data, and regulatory action in Italy finding inadequate legal basis and transparency — collectively indicating that risk of third-party access is non-zero and context-dependent [1] [2] [3] [4] [5] [6] [7].

1. Court Orders and the Real Risk of Forced Retention — When Law Compels Data Sharing

A June 2025 court order in litigation brought by The New York Times requires OpenAI to retain certain user chat logs, including previously deleted chats, pending legal proceedings; OpenAI is actively challenging the order but acknowledged the possibility of court-mandated retention that could expose logs to legal processes or discovery [1] [8]" target="blank" rel="noopener noreferrer">[8]. This creates a pathway where data that users assumed deletable might be preserved and become accessible to other parties via legal mechanisms. The immediate significance is procedural: retention does not equate to commercial sharing, but it **materially increases the chance** that third parties could gain access through subpoenas, discovery, or court-authorized disclosure if legal processes demand it [1] [2].

2. Security Flaws That Could Let Data Leak to External Services — Real-world Exploits

Multiple security research teams have reported vulnerabilities enabling third-party access by compromising account-level controls or embedding hidden commands; Salt Labs detailed a flaw allowing malicious plugins to be installed without user consent and exfiltrate data to external services, while earlier plugin research and Radware's "ShadowLeak" disclosure showed exploits that can siphon sensitive content to outside servers via GitHub integrations or hidden HTML in email workflows [3] [4] [5]. These technical vectors are distinct from intentional corporate sharing: they represent adversarial paths where attackers or malicious third-party plugins can capture user content, meaning users relying on platform assurances face operational risk unless mitigations and audits are robustly enforced.

3. Regulatory Findings That Question Legal Basis and Transparency — Italy's Enforcement Example

Italy's data protection authority concluded that ChatGPT lacked a proper legal basis for processing certain user data and fined OpenAI €15 million for transparency and data-processing shortcomings, saying users were not adequately informed about how their data might be used or shared [6] [7]. That administrative finding indicates regulators view aspects of OpenAI's practices as insufficiently transparent and potentially enabling undisclosed data flows. The decision demonstrates that even absent explicit third-party sharing, shortcomings in lawful basis and notice create regulatory pressure and legal uncertainty about what data practices are permissible and what disclosures are required.

4. Company Statements Versus Operational Reality — Opt-outs, Deletions, and Limits

OpenAI publicly states it prioritizes user privacy, offers controls for opting out and permanently removing chats, and contends it does not sell user data to third parties for profiling, but it also acknowledges possible exceptions and legal constraints such as court orders that may compel retention [1]. The tension between policy promises and exceptions for legal compulsion or security incidents means users should treat the platform's default assurances as conditional: opt-outs and deletions are meaningful under normal operations, yet their effectiveness can be limited by litigation demands or by breaches exploiting plugin vulnerabilities [1] [2] [3].

5. Practical Impact for Users — When Sensitive Information Is at Risk

For individuals and organizations handling sensitive material, the converging risks — legal retention orders, exploitable plugins, and regulatory findings — translate into a practical need for mitigations: avoid sharing confidential data in prompts, use enterprise plans with contractual safeguards if available, monitor security advisories, and segregate sensitive workflows from public or plugin-enabled accounts. Shared chats surfacing in search results and retained deleted chats raise real confidentiality concerns, particularly for lawyers and corporate staff who were flagged in recent warnings as facing heightened exposure risks [2] [4] [5].

6. Where Accountability and Fixes Matter Most — Audits, Patch Management, and Legal Strategy

Addressing the combined legal and technical exposures requires multi-pronged action: platforms must accelerate patching and plugin vetting to reduce exploitability, regulators must clarify lawful bases and transparency standards, and litigants should consider narrow discovery orders to limit broad retention. OpenAI's litigation strategies and public commitments to challenge broad orders are meaningful steps, but systemic risk persists until technical patches, stronger plugin governance, and clearer regulatory-compliance mechanisms are in place [1] [3] [7].

7. Bottom Line: Not a Simple Yes/No — Conditional Exposure Depends on Law, Bugs, and Behavior

The definitive claim that "ChatGPT shares user data with third-party companies" is not uniformly true in the sense of routine commercial sharing, but multiple independent pathways — court orders, exploitable plugins/modes, and regulatory findings of inadequate transparency — create realistic scenarios where user data can be retained, accessed, or leaked to third parties. Users should treat platform privacy guarantees as contingent and adopt operational precautions while monitoring ongoing legal and security developments [1] [2] [3] [4] [5] [6] [7].

Want to dive deeper?
What type of user data does ChatGPT collect?
How does ChatGPT ensure user data privacy and security?
Does ChatGPT comply with GDPR data protection regulations?
Can users opt-out of data sharing with third-party companies on ChatGPT?
What are the consequences of ChatGPT sharing user data with third-party companies?