Every time you ask ChatGPT a question — about your health, your finances, a sensitive work situation, a personal relationship — you're sharing that information with OpenAI's servers. Most people have a vague sense that AI assistants "use their data somehow," but the specifics are murkier than they should be. This guide covers exactly what ChatGPT collects, how OpenAI uses it, who else might see it, and what you can do to limit your exposure.
Everything here is based on OpenAI's current privacy policy, help documentation, and regulatory filings as of 2026. Where the rules differ by account type or geography, we've noted it explicitly.
What Data ChatGPT Collects
ChatGPT collects significantly more data than most users realize. OpenAI's privacy policy breaks this into several categories.
Conversation Content
The most obvious category: every message you send to ChatGPT, and every response it generates, is stored on OpenAI's servers. This includes the full text of your prompts, any documents or images you upload, and the complete conversation thread. If you've ever pasted in a contract, a medical report, a personal email, or a work document for ChatGPT to analyze — that content was transmitted to and stored by OpenAI.
This is not a hidden practice. OpenAI explicitly states in its privacy policy that it collects "the messages you send and receive when using our services." The question isn't whether this happens — it's what OpenAI does with it afterward, and for how long they keep it.
Account Information
If you have a ChatGPT account, OpenAI holds your name, email address, and payment information (for paid subscribers). Login information, account creation date, and subscription history are also retained. If you sign in via Google or Microsoft, OpenAI receives basic profile information from those providers.
Device and Usage Data
OpenAI automatically collects technical data about how you use ChatGPT: your IP address, browser type and version, operating system, device identifiers, session timestamps, the features you use, how often you use them, and performance diagnostics. This is standard for most web services, but worth noting because IP addresses can function as a proxy for your physical location even if you've never explicitly provided it.
Memory Entries (If Enabled)
ChatGPT's Memory feature, available on Plus, Pro, Team, and Enterprise plans, allows ChatGPT to remember facts about you across conversations. If you've enabled Memory, OpenAI stores structured facts about you that persist until you delete them. These can include your name, occupation, family situation, preferences, health conditions you've mentioned, or anything else ChatGPT has learned about you during past conversations.
Memory entries are stored separately from conversation history. Deleting a conversation does not delete memories extracted from it. You have to manage them independently at Settings → Personalization → Manage Memory.
Third-Party Integrations
When you use ChatGPT features that connect to external services — web browsing, plugins, custom GPTs with Actions — the data required to complete those requests flows through OpenAI's servers and out to third-party APIs. More on this in the Custom GPTs section below.
How OpenAI Uses Your Data
OpenAI uses conversation data for four primary purposes: service delivery, safety monitoring, product improvement, and model training. The first two are unavoidable. The last two depend on which account type you have and what you've opted in or out of.
Service Delivery
This is table stakes: OpenAI needs your data to actually respond to your prompts. Nothing controversial here — without processing your input, there's no output.
Safety and Abuse Monitoring
OpenAI retains conversation logs for abuse detection and safety purposes: detecting and investigating violations of their usage policies, responding to law enforcement requests, and maintaining the safety of their systems. OpenAI's policy states that even users who have disabled chat history and opted out of training will have their conversations retained for 30 days for abuse monitoring. This 30-day window is not optional.
Model Training (Default On for Free and Plus Users)
This is where it gets contentious. By default, OpenAI uses conversation data from free, Plus, and Pro personal accounts to train and improve its AI models. "Training" means your conversations — including the questions you ask, the documents you share, and the context you provide — may be reviewed by OpenAI employees and contractors, used to fine-tune model behavior, and incorporated into future versions of GPT.
OpenAI's public position is that they "de-identify and aggregate" this data before using it for training. Critics and regulators have questioned whether effective de-identification is achievable at scale, particularly for highly specific or contextually rich conversations. The Italian data protection authority concluded in 2024 that OpenAI's data processing for training lacked "an adequate legal basis" and violated GDPR transparency requirements — resulting in a €15 million fine.
Third-Party Sharing
OpenAI shares data with service providers who help operate their infrastructure (cloud hosting, analytics, customer support tools). They state that these providers are "contractually obligated to use it only for the specified service." OpenAI also shares data when required by law, court order, or to respond to valid legal process. In 2025, a U.S. federal court ordered OpenAI to preserve and retain substantial volumes of conversation logs as part of litigation — a stark illustration that "we'll delete your data" commitments have limits when courts get involved.
OpenAI does not sell your personal data to third parties for advertising purposes. This distinguishes it from social media companies and many ad-tech platforms, though the distinction matters less than it sounds if your conversations are reaching OpenAI employees and contractors during safety reviews and training data annotation.
Data Retention Policies
OpenAI's retention policies differ by account type and feature. Here's the current picture as of 2026:
| Account Type | Chat History Retention | After Deletion | Training Default |
|---|---|---|---|
| Free / Plus / Pro | Indefinite (until you delete) | Purged within 30 days | On (opt-out available) |
| Temporary Chat (any plan) | Auto-deleted within 30 days | N/A | Never used for training |
| Team / Enterprise / Edu | Admin-configurable | Purged within 30 days | Off (opt-in available) |
| API (Zero Data Retention) | Not stored after processing | N/A | Never used for training |
A few important notes on the table above:
The table's 30-day figure is the post-deletion purge window, not the abuse-monitoring window. Even if you've opted out of training and disabled chat history, OpenAI retains each conversation for 30 days after the session for abuse monitoring. That clock is separate from the 30-day purge window that starts when you manually delete a conversation.
Court orders can extend retention. In 2025, a U.S. federal court required OpenAI to preserve certain conversation logs beyond their normal deletion schedule. If your data falls within the scope of litigation-related preservation orders, your conversations may be retained longer than OpenAI's standard policy allows. This has since been resolved, but it illustrates that privacy policy commitments are subject to legal override.
Memory persists separately. If you've enabled the Memory feature, stored memories don't follow the conversation deletion schedule. A conversation you delete today may have already generated memory entries that persist until you manually clear them.
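For API users, the zero-retention row deserves unpacking. Zero Data Retention is a contractual, account-level arrangement with OpenAI, not a per-request flag; by default, API requests are still held for up to 30 days for abuse monitoring. What you can control per request is the Chat Completions `store` parameter, which governs whether the completion is additionally persisted for later retrieval. A minimal sketch using OpenAI's Python SDK, assuming an API key in the `OPENAI_API_KEY` environment variable and an illustrative model name:

```python
# Sketch: controlling per-request persistence via the Chat Completions API.
# Assumes OPENAI_API_KEY is set in the environment; the model name is
# illustrative. Note that Zero Data Retention is arranged contractually
# at the account level; there is no per-request flag that enables it.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize GDPR's right to erasure."}],
    # store=False (the default) means the completion is not persisted for
    # later retrieval or evals. Standard abuse-monitoring retention of up
    # to 30 days still applies unless your organization has ZDR.
    store=False,
)

print(response.choices[0].message.content)
```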
How to Opt Out of Model Training
OpenAI provides two account-level mechanisms for opting out of model training on free and paid personal accounts, plus a per-conversation option (Temporary Chat, covered below). The account-level opt-outs take effect for new conversations only; they don't retroactively remove data that was already used in training.
Method 1: In-App Data Controls Toggle
- Log in to ChatGPT at chatgpt.com
- Click your profile icon in the top right corner
- Select Settings
- Go to the Data controls tab
- Find the toggle labeled "Improve the model for everyone"
- Switch the toggle off
- Confirm when prompted
Once disabled, new conversations will not be used to train OpenAI's models. The current toggle governs training only: conversations are still saved to your history as usual. (The older combined "Chat history & training" control, which disabled both together, has been retired.)
Method 2: Privacy Portal Request
- Go to privacy.openai.com
- Select "Don't train on my content"
- Log in with your OpenAI credentials
- Complete the opt-out form
- Receive confirmation via email
Method 3: Use Temporary Chat
For individual sensitive conversations, Temporary Chat mode provides per-session opt-out without changing your account-level settings. To use it:
- Click the ChatGPT dropdown at the top of the screen (next to the model name)
- Select "Temporary Chat"
- Any conversation started this way will not be saved to your history and will not be used for training
- Temporary chats are automatically purged from OpenAI's systems within 30 days
Temporary Chat is the most practical option for sensitive conversations — medical questions, legal matters, confidential work documents — where you want the model's capability but don't want that specific conversation persisted.
How to Delete Your ChatGPT Data
Delete Individual Conversations
- In the ChatGPT sidebar, hover over the conversation you want to remove
- Click the three dots (⋯) that appear to the right of the conversation title
- Select Delete
- Confirm deletion when prompted
Delete All Conversations at Once
- Click your profile icon in the top right corner
- Select Settings
- Under the General tab, find "Delete all chats"
- Click Delete all and confirm
After deletion, conversations are purged from OpenAI's systems within 30 days. Deleted conversations cannot be restored.
Delete Stored Memories
- Go to Settings → Personalization → Manage Memory
- Review each memory entry ChatGPT has stored about you
- Delete individual entries or click "Delete all" to clear everything at once
Export Your Data Before Deleting
Before deleting your account or all chats, you can download a copy of your data:
- Click your profile icon → Settings
- Go to Data controls
- Click Export data → Confirm export
- OpenAI will email you a download link, typically within minutes. The link expires after 24 hours
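The export arrives as a ZIP archive. If you want to audit what you're about to delete, a short script can inventory it. A minimal sketch, assuming the archive contains a `conversations.json` file shaped as a list of conversation objects with `title` and `create_time` fields (the layout OpenAI's exports have historically used; verify against your own file):

```python
# Inventory a ChatGPT data export before deleting your history.
# Assumes the export ZIP contains conversations.json as a list of objects
# with "title" and epoch-seconds "create_time" fields; the filename and
# field names are based on past export layouts and may change.
import json
import zipfile
from datetime import datetime, timezone

EXPORT_PATH = "chatgpt-export.zip"  # hypothetical filename

with zipfile.ZipFile(EXPORT_PATH) as archive:
    with archive.open("conversations.json") as f:
        conversations = json.load(f)

print(f"{len(conversations)} conversations in export")
for convo in conversations:
    created = datetime.fromtimestamp(convo["create_time"], tz=timezone.utc)
    print(f"{created:%Y-%m-%d}  {convo.get('title') or '(untitled)'}")
```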
Delete Your OpenAI Account Entirely
- Go to OpenAI's account deletion page
- Log in, then follow the account deletion flow
- Account data is deleted within 30 days of the request
- Note: Account deletion is irreversible. If you have an active subscription, cancel it first
Submit a Data Deletion Request (GDPR / CCPA)
For a formal legal deletion request — covering data that may exist beyond your account (e.g., data used in training, safety logs, third-party processors):
- Visit privacy.openai.com
- Select "Remove my personal data from ChatGPT responses" or "Delete my account and data"
- Complete the identity verification process
- OpenAI is required to respond within 30 days (GDPR) or 45 days (CCPA)
ChatGPT Enterprise and Team vs. Free Tier: Privacy Differences
The privacy gap between free ChatGPT and the paid business tiers is substantial. If you're using ChatGPT for work, the tier you're on matters significantly.
Training Data Policy
Free / Plus / Pro accounts: Your conversations are used for training by default. You can opt out, but it's not the default state — and most users never do.
ChatGPT Team, Enterprise, Edu, Healthcare: OpenAI explicitly states it does not train on inputs or outputs from these plans by default. Period. This is a contractual commitment, not a setting you have to find and toggle. The only way OpenAI trains on business plan data is if the account administrator explicitly opts in.
Admin Controls
Enterprise plans include a workspace admin console with capabilities that don't exist on personal accounts:
- Centrally manage data sharing settings across all users in the workspace
- Configure data retention policies
- Enable or disable specific integrations and custom GPTs
- Export compliance and usage logs
- Set domain-level SSO and identity management rules
- Apply IP allowlists to restrict which networks can access the workspace
Encryption and Security
ChatGPT Enterprise includes SOC 2 Type II compliance, encryption of data in transit and at rest, and optional Enterprise Key Management (EKM), where customers control their own encryption keys. This provides a meaningful technical control layer over data access that free accounts don't have.
Data Residency
Enterprise customers can select data residency options to ensure conversation data is processed in specific geographic regions (relevant for GDPR compliance for EU-based organizations). This option isn't available to free or personal paid users.
| Feature | Free / Plus / Pro | Team | Enterprise |
|---|---|---|---|
| Training on your data | On by default (opt-out available) | Off by default | Off by default |
| Admin controls | None | Basic workspace controls | Full admin console |
| SOC 2 compliance | No | Yes | Yes |
| Encryption key management | No | No | Optional (EKM) |
| Data residency options | No | No | Yes |
| Custom retention policies | No | No | Yes |
What Happens to Data Shared With Custom GPTs and Actions
Custom GPTs and GPT Actions introduce a data flow that many users don't fully understand — and it's one of the higher-risk areas in the ChatGPT ecosystem.
Custom GPTs With No External Connections
If you're using a custom GPT that doesn't call any external APIs — just a system prompt with different behavior — your conversation stays within OpenAI's normal data handling. The same privacy policies that apply to standard ChatGPT apply here. The data goes to OpenAI, and only OpenAI.
Custom GPTs With Actions (External API Calls)
This is where it gets more complex. When a custom GPT uses "Actions" — calls to external APIs or third-party services — your input data leaves OpenAI's systems and reaches those third-party servers. The critical limitation: OpenAI explicitly states it does not audit or control how third-party Action providers handle, store, or use the data they receive.
What this means in practice:
- When a GPT Action triggers, the relevant portions of your conversation are sent to the third-party API
- That third party has its own privacy policy, retention practices, and data security standards — which may or may not be as rigorous as OpenAI's
- OpenAI's privacy guarantees stop at their API boundary
- A 2024 analysis of common GPT Action providers found that default log retention at third-party services commonly ranged from 7 to 30 days — but this varied widely and wasn't always disclosed
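To make that boundary concrete, here is a hypothetical sketch of the receiving side: a minimal Flask endpoint standing in for any third-party API a custom GPT Action might call. The route and payload shape are invented for illustration; the point is that everything in the request body was assembled from your conversation and is now subject to the provider's own logging and retention practices:

```python
# Hypothetical third-party Action endpoint (pip install flask).
# Illustrates the trust boundary: once a custom GPT calls an external API,
# the payload, built from your conversation, is governed by that provider's
# logging and retention practices, not OpenAI's.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/lookup")
def lookup():
    payload = request.get_json(force=True)
    # Nothing prevents the provider from persisting this verbatim. Whatever
    # the GPT extracted from your chat (names, account details, document
    # excerpts) is now in this server's logs.
    app.logger.info("Received from a GPT Action: %s", payload)
    return jsonify({"result": f"processed query: {payload.get('query', '')}"})

if __name__ == "__main__":
    app.run(port=8080)
```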
The Uploaded Knowledge Files Risk
When a custom GPT creator uploads files to give their GPT "knowledge" — a company FAQ, a product catalog, proprietary documentation — those files are stored in OpenAI's systems. Users who interact with that GPT may be able to extract portions of those files through carefully crafted prompts. Palo Alto Networks' research found that 95% of Custom GPTs had inadequate protections against prompt injection attacks, including data extraction. This is primarily a risk for GPT creators, but it illustrates that the security model for custom GPTs has significant gaps.
Best Practice for Custom GPT Privacy
Before using a custom GPT, especially one created by an unknown third party:
- Check whether the GPT uses Actions by looking for the "Actions" indicator in the GPT's info panel
- Read the GPT creator's privacy policy if one is provided (required for GPTs that collect personal data, though enforcement is limited)
- Avoid sharing sensitive personal information with any custom GPT that connects to external services
- For enterprise deployments, use admin controls to allowlist approved GPTs and block unknown ones
OpenAI's Privacy Controversies and Incidents
OpenAI has accumulated a significant regulatory and legal track record on privacy issues. Here are the most consequential incidents:
Italy's 2023 Temporary Ban
Italy became the first country to temporarily ban ChatGPT in March 2023. Italy's data protection authority (Garante) cited the absence of legal basis for collecting and processing personal data for training purposes, lack of age verification to prevent minors from accessing the service, and OpenAI's failure to notify the Garante of a March 2023 security breach that exposed user email addresses, payment information (last four digits of credit cards), and conversation titles. ChatGPT was reinstated in late April 2023 after OpenAI implemented changes, including a privacy notice for Italian users and an opt-out mechanism.
The €15 Million GDPR Fine (2024)
In December 2024, Italy's Garante issued OpenAI a €15 million fine following a formal investigation into ChatGPT's data practices. The violations cited: processing personal data for training without adequate legal basis, failing to meet GDPR transparency requirements, inadequate age verification, and the unnotified 2023 security breach. OpenAI called the fine "disproportionate" — noting it was nearly 20 times OpenAI's Italian revenue during the relevant period — and appealed. In March 2026, the Court of Rome annulled the fine, a significant victory for OpenAI in European courts, though the underlying regulatory tension remains unresolved.
March 2023 Security Breach
In March 2023, a bug in an open-source library (redis-py) caused a data exposure incident where some users could see another user's conversation titles, first and last messages, payment information (last four digits of card number and expiration date), and email addresses. OpenAI took ChatGPT offline for several hours to fix the issue. This was the breach Italy's Garante determined OpenAI failed to report promptly.
The New York Times Litigation Data Demands (2025)
In 2025, as part of copyright litigation with The New York Times, a federal court ordered OpenAI to preserve substantial volumes of output log data — including conversations that would otherwise have been subject to user deletion or automatic purging. OpenAI pushed back publicly, arguing that preserving this data conflicted with its privacy commitments to users. The preservation order was ultimately lifted in late September 2025, but the episode demonstrated that court orders can override privacy policy commitments and create data retention that users have no control over or visibility into.
Samsung's Confidential Data Leak (2023)
In a widely cited 2023 incident, Samsung engineers pasted proprietary source code and internal meeting notes into ChatGPT for assistance with debugging and documentation. The content included sensitive business information. Samsung subsequently banned ChatGPT company-wide after learning the data had been transmitted to OpenAI's servers. This incident triggered the broader enterprise conversation about what should and shouldn't be shared with AI assistants — it's not a breach by OpenAI, but an illustration of how data sharing works: if you type it, OpenAI has it.
GDPR and CCPA Rights as Applied to ChatGPT
EU Residents: GDPR Rights
EU residents have the most robust legal rights with respect to ChatGPT data:
- Right of access: You can request a copy of all personal data OpenAI holds about you. Submit via privacy.openai.com. OpenAI must respond within 30 days.
- Right to erasure: You can request deletion of your personal data. OpenAI must comply unless they have a legal obligation to retain it (e.g., active litigation). Training data presents a practical challenge here — once data is incorporated into model weights, technical erasure isn't possible in the conventional sense.
- Right to restrict processing: You can ask OpenAI to stop processing your data for specific purposes (such as model training) while retaining the data.
- Right to data portability: You can export your data in a machine-readable format using the Export Data feature in Settings → Data controls.
- Right to object: You can object to processing based on legitimate interests, including profiling.
OpenAI has faced significant regulatory scrutiny for how it handles these rights, particularly around training data. The Italian fine centered in part on whether OpenAI had an "adequate legal basis" for using personal data for training at all. Other European data protection authorities have also opened investigations. This remains an evolving area of EU regulatory enforcement.
California Residents: CCPA / CPRA Rights
California residents have enforceable rights under the CCPA and CPRA:
- Right to know: You can request disclosure of the categories of personal information collected, the purposes for collection, and whether it's shared with third parties.
- Right to delete: You can request deletion of your personal information. OpenAI must comply within 45 days, subject to legal and operational exceptions.
- Right to opt out of sale or sharing: OpenAI states it does not sell personal information. "Sharing" for targeted advertising purposes also does not apply to OpenAI's consumer products.
- Right to correct: You can request correction of inaccurate personal information.
- Right to limit use of sensitive personal information: For sensitive data categories (health, financial, precise location), you have the right to restrict how OpenAI uses that data.
Other US States
As of 2026, Virginia, Colorado, Connecticut, Texas, Nevada, and several other states have comprehensive privacy laws that provide similar rights. The mechanism and timelines vary. No federal comprehensive privacy law covering AI data practices has passed as of mid-2026, though the American Privacy Rights Act has advanced in committee.
How to Submit a Formal Privacy Request to OpenAI
- Go to privacy.openai.com — this is OpenAI's official privacy center
- Select your request type: access, deletion, opt-out of training, or correction
- Complete identity verification (required for data access and deletion requests)
- OpenAI will respond within 30 days (GDPR) or 45 days (CCPA); complex requests may be extended by an additional 30 days with notice
- If you have questions or your request is unresolved, email privacy@openai.com directly
ChatGPT vs. Gemini, Claude, and Copilot on Privacy
No AI assistant is privacy-neutral — they all collect and process data. But the policies and defaults differ meaningfully. Here's how the major players compare as of 2026.
| Feature | ChatGPT (OpenAI) | Gemini (Google) | Claude (Anthropic) | Copilot (Microsoft) |
|---|---|---|---|---|
| Free tier: trains on your data? | Yes (opt-out available) | Yes (opt-out available) | No (not used for training by default) | Varies by account type; consumer data may be used for product improvement |
| Business tier: trains on your data? | No (by default) | No (Workspace plans) | No | No (M365 Copilot) |
| Data residency options | Enterprise only | Google Workspace (regional) | Limited | Yes (M365 compliant) |
| Regulatory incidents | Italy fine (annulled 2026), breach 2023 | Irish DPC investigations ongoing | None major as of 2026 | EU DPA concerns re: Microsoft data practices |
| Privacy ranking (Incogni 2026) | 2nd of major AI assistants | 3rd | 1st (strongest privacy posture) | 3rd |
Where Claude Stands Out
Anthropic's Claude is generally considered the strongest on privacy for consumer use. Free-tier Claude conversations are not used for model training by default — a policy distinction that Anthropic markets explicitly. Claude's business and API tiers provide additional contractual guarantees. However, Claude's functionality (particularly for third-party integrations) is more limited than ChatGPT's, which is relevant if you need the GPT plugin ecosystem.
Where Copilot's Integration Matters
Microsoft's Copilot, when accessed through a Microsoft 365 subscription, benefits from Microsoft's existing enterprise security and compliance infrastructure. Your data stays within Microsoft's Azure boundary, is governed by your existing Microsoft licensing terms, and benefits from Microsoft's extensive compliance certifications (ISO 27001, SOC 2, HIPAA, FedRAMP). For organizations already in the Microsoft ecosystem, Copilot may offer the cleanest privacy story through existing contractual coverage.
Where Gemini Presents Risks
Google's approach to data is tied to its broader advertising business, which creates structural privacy concerns that don't apply to OpenAI, Anthropic, or Microsoft. Gemini conversations on personal Google accounts may be reviewed by Google for safety and product improvement, and Google has broad rights to use data across its product ecosystem. Gemini for Google Workspace (business accounts) provides stronger protections, but free-tier Gemini is the weakest option among the major players on privacy.
Frequently Asked Questions
Does ChatGPT share your conversations with other users?
No — your conversations are not shared with other ChatGPT users or shown in other people's chat interfaces. The concern is about OpenAI employees and contractors potentially reviewing conversations as part of safety monitoring and training data annotation, not about conversations being publicly accessible or shared between users. The March 2023 bug that briefly exposed conversation titles to other users was exactly that — a bug, not a feature — and was patched within hours.
Is ChatGPT safe to use for confidential work?
That depends on what you mean by "confidential" and which tier you're on. For free and Plus users, anything you share with ChatGPT can be reviewed by OpenAI personnel and may be used in training. The right mental model: don't paste anything you wouldn't be comfortable showing a stranger, because it may reach a human reviewer or end up in training data. For Enterprise users with a signed Data Processing Addendum, the privacy guarantees are substantially stronger and more comparable to enterprise SaaS tools. The Samsung incident (where engineers pasted proprietary code into ChatGPT) is the canonical cautionary tale for work use.
What happens to my ChatGPT data if OpenAI is acquired or goes bankrupt?
OpenAI's privacy policy states that in the event of a merger, acquisition, or sale of assets, your personal data may be transferred to the acquiring company, subject to the same privacy policy commitments. This is standard for most technology companies. There are no special protections that prevent your data from becoming an asset in a corporate transaction. If this is a concern, deleting your account and its associated data before any such event is the only concrete protection available.
Can OpenAI train on data from conversations I've already deleted?
Potentially yes, if those conversations were used in a training run before you deleted them. Deletion removes the raw conversation data from OpenAI's systems within 30 days, but doesn't undo any model training that already occurred. Once data is incorporated into model weights — which happens on a regular basis through OpenAI's training pipelines — it can't be removed retroactively. This is one of the core technical limitations of "right to be forgotten" as applied to AI training data, and it's the subject of ongoing regulatory debate in the EU.
Is ChatGPT HIPAA compliant?
Standard ChatGPT (free, Plus, Pro) is not HIPAA compliant and should not be used for any protected health information (PHI). OpenAI offers a ChatGPT for Healthcare tier that includes a Business Associate Agreement (BAA) — the contract required for HIPAA compliance — and additional data handling controls. Without a signed BAA, using any AI service for PHI creates significant HIPAA liability. If you're in healthcare, the tier matters enormously: standard ChatGPT is a compliance risk for any clinical or patient data use case.
Does ChatGPT remember information between sessions?
Only if you've enabled the Memory feature. Without Memory enabled, each conversation starts fresh and ChatGPT has no recall of previous sessions. With Memory enabled, ChatGPT extracts facts from conversations and stores them as persistent entries that carry across sessions. These memory entries accumulate over time and can contain surprisingly detailed personal information. Review and manage them at Settings → Personalization → Manage Memory. If you'd rather ChatGPT not accumulate a profile of you, disable Memory there and clear any existing entries.
What's the safest way to use ChatGPT for sensitive topics?
Three practices combined provide the strongest protection without giving up ChatGPT's capabilities: (1) Use Temporary Chat mode for any conversation you wouldn't want stored — medical questions, financial planning, legal matters, confidential work discussions. (2) Opt out of model training in Settings → Data controls, even for conversations that aren't especially sensitive. (3) Never paste documents or files containing genuinely confidential information — account numbers, SSNs, patient data, trade secrets — into any AI assistant. The 30-day abuse monitoring window means even Temporary Chats aren't zero-retention, but they're substantially better than standard history mode.