Does OpenAI retain conversation logs for model training and can users opt out?
Executive summary
OpenAI retains ChatGPT conversation logs, and under a 2025 U.S. court preservation order it was required to keep deleted and temporary ChatGPT logs dating back to mid‑May 2025; later court actions narrowed that scope, and OpenAI says it returned to standard retention practices after September 26, 2025 [1] [2] [3]. Separately, users can opt out of having their new conversations used to train OpenAI’s models via account Data Controls or a privacy portal; that opt‑out prevents training use but does not erase chat history or eliminate the other forms of retention described in OpenAI’s policies and help center [4] [5] [6].
1. A court order forced OpenAI to preserve chats — temporarily and broadly
In May 2025 a federal magistrate ordered OpenAI to “preserve and segregate all output log data that would otherwise be deleted,” which led OpenAI to preserve deleted and “temporary” chat logs starting in mid‑May as part of discovery in The New York Times‑led litigation [1] [7]. Reporting and legal commentary described the order as sweeping and unprecedented because it required retention of conversations users believed they had deleted [8] [9].
2. The preservation mandate was later narrowed and OpenAI says standard rules resumed
Subsequent filings and rulings changed the scope: commentators note that the broad hold stopped applying after a cutoff of September 26, 2025, and later orders narrowed what must be preserved, allowing OpenAI to return to its normal deletion flows for new data outside specific preserved sets [3] [2]. OpenAI’s public response and help pages reflect a return to the company’s standard retention statements, while acknowledging that it must store a limited set of historical April–September 2025 data for the litigation [2].
3. Opting out of training is available — but it’s not the same as deletion
OpenAI provides a control that stops a user’s new conversations from being used to train its models: users can switch off “Improve the model for everyone” under Profile → Settings → Data Controls, or use the privacy portal’s “do not train on my content” option; OpenAI states that new conversations will not be used for training once either is set [4] [5] [10]. Help pages stress that opting out affects only training use: chat history remains visible in the user’s account [11] [10].
4. Retention windows, backups and exceptions mean data can still persist
OpenAI’s help documentation warns that deleted conversations and files can remain in internal backups for up to 30 additional days after they become unavailable via the Compliance API, and legal obligations can require longer retention; the preservation order itself was an explicit example of a legal exception overriding normal deletion [6] [2]. Independent observers and legal advisories highlighted that preservation for litigation could conflict with industry norms and user expectations [8] [12].
5. Business and API customers have different defaults and options
OpenAI says data from business customers (ChatGPT Business/Enterprise, API) is not used for training by default, and those customers have different retention controls; API users can request “Zero Data Retention” or configure retention periods per endpoint, and OpenAI states API inputs and outputs are removed from logs after 30 days unless it is legally required to retain them [2] [10] [11]. That contrast matters for organizations deciding where to run sensitive workloads [10] [11].
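At the request level, OpenAI’s Chat Completions API exposes a `store` flag controlling whether a completion is persisted server‑side for later retrieval. A minimal sketch of a payload that sets it, assuming Zero Data Retention itself remains a separate contractual arrangement rather than a per‑request switch (the helper name and model string are illustrative):

```python
def build_chat_request(prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Assemble a Chat Completions payload that asks OpenAI not to
    store the completion server-side (illustrative sketch only)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "store": False,  # opt out of server-side storage of this completion
    }

payload = build_chat_request("Summarize our retention policy.")
# payload["store"] is False; send with any HTTP client or the openai SDK
```

This only shapes the request body; it does not replace the account‑level retention agreements discussed above.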
6. Tension between user controls, legal orders and practical limits
Sources present two competing realities: OpenAI’s user controls let individuals opt out of training and manage chat history, but the court preservation order showed that legal process can compel preservation despite user deletion choices, a situation OpenAI’s own filings and press coverage described as a “privacy nightmare” [1] [13]. Commentators warned that preservation orders can force companies to retain data in ways that undermine public promises and create compliance conflicts [12] [9].
7. What reporting does not say (limits of current sources)
Available sources do not mention a comprehensive audit showing exactly which individual accounts or timeframes remain preserved today beyond the April–September 2025 window referenced by OpenAI, nor do they provide a public, itemized list of preserved records accessible to outside observers (not found in current reporting). Sources likewise do not document any technical proof that opting out guarantees removal from all internal, non‑training logs in every jurisdiction (not found in current reporting).
Takeaway: you can opt out of OpenAI using your new chats to train its models via documented settings and a privacy portal [4] [5], but legal orders and backup/retention mechanics mean deleted chats were, for at least one period in 2025, preserved against user deletion requests; the court’s later, narrower approach reduced but did not entirely eliminate those preservation obligations [1] [3] [2].