ChatGPT is not private by default. The Free, Go, Plus, and Pro tiers train on your conversations unless you opt out. Business and Enterprise do not. None of them gives you the contractual confidentiality the legal profession means by "privileged." If you are evaluating whether ChatGPT is private enough for in-house legal work, that distinction is the whole conversation.
The question that matters is what you can trust ChatGPT with, and what you cannot.
The Short Answer: Is ChatGPT Private?
ChatGPT is not private in the way in-house counsel or any lawyer uses the word. On Free, Go, Plus, and Pro, OpenAI uses your conversations to improve the model unless you toggle the setting off. On Business and Enterprise, OpenAI excludes your workspace data from training by default. Across every tier, OpenAI retains conversations on its servers for at least 30 days under its abuse-monitoring policy, including chats you have deleted and chats you ran in Temporary Chat mode.
Privacy in the consumer-software sense and confidentiality in the legal sense are different bars. Toggle training off and ChatGPT stops feeding your prompts into the next model. The contractual confidentiality and counsel-directed framework a privileged legal workflow requires still live one tier up, in Business or Enterprise, and even there the privilege analysis is not settled.
What ChatGPT Collects, Stores, and Shares
OpenAI's ChatGPT collects account information, the prompts and files you submit, IP address and device data, and usage telemetry; OpenAI's privacy policy spells out the full list. OpenAI stores conversations on its servers, and OpenAI personnel and contracted vendors can review them for safety, abuse, and quality assurance. Automated systems scan messages for policy violations before any human review.
Your data can also leave OpenAI through three exits:
Sub-processors. OpenAI uses vendors for hosting, analytics, and customer support, who process your data under contract.
Legal process. OpenAI may produce data in response to a subpoena, court order, or other valid legal demand.
Training pipelines. On consumer plans, your prompts can become training data for the next model generation unless you have toggled the setting off. The full policy is published at the OpenAI privacy policy page.
On top of those three exits sits a fourth layer: advertising. On February 9, 2026, OpenAI began testing ads in the Free and Go tiers in the United States, quietly adding a second data-use layer on top of the training default. The ad system runs as its own pipeline, separate from training, with its own retention and its own ad-tech vendors reading signals from your conversations. For in-house teams, that means Free and Go now carry two privacy concerns for legal work: the training default, and the new ad layer running on top of it.
Two retention windows matter:
Standard chats. Live in your account history until you delete them. After deletion, they sit on OpenAI's servers for up to 30 days for abuse monitoring before removal. OpenAI documents this retention behavior in its chat retention policies.
Temporary Chats and deleted chats. Subject to the same 30-day server-side window. The interface presents these chats as off the record; OpenAI's servers retain them for up to 30 days regardless.
One footnote on the 30-day window. OpenAI's stated retention policy is 30 days, but a court preservation order in NYT v. OpenAI, entered May 13, 2025, required OpenAI to retain Free, Plus, Pro, Team, and non-ZDR API conversations (including deleted chats and Temporary Chats) until the order was narrowed on September 26, 2025; the obligation formally ended October 22, 2025. Data preserved during that window remains held. ChatGPT Enterprise, Business, and Edu workspaces were not subject to the order. For matters that depend on a clean retention story, ask the platform what it kept and when.
Privacy by Plan: Free, Go, Plus, Pro, Business, Enterprise, and API
ChatGPT runs seven plans in 2026, and the line that matters for in-house teams runs between consumer and business tiers. The four consumer tiers default to training on your data; the business and enterprise tiers do not.
| Plan | Price (US) | Trains on Your Data by Default | Zero Data Retention Available | SOC 2 / SOC 3 / ISO | Default Retention | Best For |
|---|---|---|---|---|---|---|
| Free | $0 | Yes (toggle off) | No | SOC 2 Type II (org-level) | Until deleted, then 30 days | General consumer use |
| Go | $8/month | Yes (toggle off) | No | SOC 2 Type II (org-level) | Until deleted, then 30 days | General consumer use |
| Plus | $20/month | Yes (toggle off) | No | SOC 2 Type II (org-level) | Until deleted, then 30 days | General consumer use |
| Pro | $100 or $200/month | Yes (toggle off) | No | SOC 2 Type II (org-level) | Until deleted, then 30 days | Heavy individual use |
| Business | $20/seat/month annual ($25 monthly) | No | Available on qualifying plans | SOC 2 Type II | Configurable by admin | Small and mid-size teams |
| Enterprise | Custom | No | Available | SOC 2 Type II | Configurable by admin | Large workforces |
| API | Pay-as-you-go | No | Available for qualifying organizations | SOC 2 Type II | 30 days unless ZDR negotiated | Developers and platform builders |
| GC AI (legal AI) | $500/month | No | Yes, with OpenAI and Anthropic | SOC 2 Type II, SOC 3 | Configurable; logically isolated database per customer | In-house legal teams |
Is ChatGPT Plus Private?
ChatGPT Plus is a consumer tier and shares the same privacy defaults as Free, Go, and Pro. OpenAI uses Plus conversations to improve the model unless you toggle off "Improve the model for everyone" in Data Controls. Even with training off, conversations sit on OpenAI's servers under the 30-day abuse-monitoring window. Paying $20 a month does not change the underlying privacy story versus Free.
Is ChatGPT Pro Private?
ChatGPT Pro at $100 or $200 per month is also a consumer tier. Same Plus defaults: training on, no Zero Data Retention, 30-day server retention. Pro buys more Codex usage and GPT-5.4 Pro access. Privacy defaults stay identical to Plus.
Are ChatGPT Business and Enterprise Private?
ChatGPT Business at $20 per seat per month annual ($25 monthly, after OpenAI's April 2, 2026 price reduction) and ChatGPT Enterprise on a custom contract exclude workspace data from training by default. Both offer Zero Data Retention on qualifying terms. They give admins centralized control over retention, sharing, and exports. Neither tier is built around the legal-specific workflows or counsel-directed deployment in-house teams need.
Is the ChatGPT API Private?
The OpenAI API does not train on your inputs by default and offers Zero Data Retention for qualifying organizations. Inputs and outputs are retained for 30 days for abuse monitoring unless ZDR is negotiated. The API is the path most legal AI platforms (including GC AI) use to wire OpenAI models into a privacy-controlled stack.
Is ChatGPT HIPAA, GDPR, or CCPA Compliant?
ChatGPT Free, Plus, Pro, and Business are not HIPAA-eligible. OpenAI does not offer a Business Associate Agreement on those tiers. Sales-managed ChatGPT Enterprise and Edu accounts can secure a BAA, and OpenAI launched ChatGPT for Healthcare in January 2026 specifically for organizations needing HIPAA-aligned use. The OpenAI API supports BAAs through baa@openai.com for qualifying organizations. ChatGPT is GDPR-applicable for EU users on every tier, with stronger Data Processing Addendum terms on Business and Enterprise. ChatGPT is CCPA-applicable across every tier.
Temporary Chat: How "Private" Mode Works
Temporary Chat hides conversations from your history and from training, and OpenAI still retains them on its servers for up to 30 days. When you start a Temporary Chat, ChatGPT does not save the conversation in your history, does not use it to update memory, and (per OpenAI's stated policy) does not use it to train the model.
What Temporary Chat does not do is delete the conversation from OpenAI's servers in real time. Per OpenAI's published retention policy, the content sits on OpenAI's servers for up to 30 days under the abuse-monitoring framework before deletion, and remains producible under valid legal process during that window. The same policy applies to chats you have deleted from your history.
The gap that matters for in-house lawyers is between user expectation and platform reality. The user sees a session that disappears when they close the tab. The servers still hold the conversation for a month. Temporary Chat works for quick, low-stakes prompts. It does not create a confidential channel for legal work.
How to Make ChatGPT More Private (Settings Walkthrough)
Five settings reduce ChatGPT's privacy surface area: turn off model training, use Temporary Chat, delete old conversations, revoke shared links, and move team work into a Business or Enterprise workspace. None of them changes the underlying privilege analysis for legal work.
Toggle off "Improve the model for everyone." In ChatGPT settings under Data Controls, this single switch stops OpenAI from using your future conversations to train the model. It applies to your account only. It does not retroactively pull prior conversations out of training data.
Use Temporary Chat for one-off prompts. Temporary Chat will not appear in your history or update memory, with the 30-day server-side retention caveat from the section above.
Delete conversations you do not need. Deleted chats leave your history immediately; OpenAI's servers clear them within the same 30-day abuse-monitoring window.
Revoke shared links you no longer want public. In Settings, navigate to Shared Links and click the trash icon on any link you want to revoke. This removes the public link going forward but does not retroactively remove cached copies on Google or third-party scrapers.
Move sensitive work to a Business or Enterprise workspace. A team-administered workspace excludes your data from training by default and gives an admin centralized control over retention, sharing, and exports. OpenAI encrypts workspace data in transit and at rest.
These five settings reduce ChatGPT's data footprint for general work. They do not change the privilege analysis. If your work involves contracts, board materials, regulatory filings, or anything else that should be privileged, settings tweaks fall short. A counsel-directed legal AI platform is what holds up.
Where ChatGPT Privacy Breaks Down for Legal Work
Three failure modes show up when in-house teams try to push ChatGPT into legal-grade workflows.
Default training on consumer tiers means client data flows into model improvement. Because a user has to actively toggle training off, the default on Free, Go, Plus, and Pro is that prompts and uploads can be used to improve the next model generation. Samsung learned this in 2023, after engineers pasted internal source code and meeting notes into ChatGPT in March and April; Samsung banned generative AI on company devices on May 1. The legal-team analog is a paralegal pasting a draft settlement memo into Plus to clean up the prose.
The terms of service create disclosure to a third party. Even when training is disabled, the conversation lives on OpenAI's servers, processed by automated systems and accessible to OpenAI personnel and contracted reviewers under the abuse-monitoring framework. That is disclosure outside the attorney-client relationship in the privilege sense, which is the second thing courts scrutinize when AI conversations get subpoenaed.
There is no counsel-directed framework. Privilege law treats AI use the way it treats any agent of the lawyer. When the lawyer directs the agent, picks the platform with confidentiality terms, and uses the agent for legal advice, privilege can extend. When a non-lawyer self-directs into a consumer chat platform, the framework breaks. A consumer ChatGPT account is not a counsel-directed agent.
What You Should Never Paste Into ChatGPT
What you can trust consumer ChatGPT with: a public FAQ rewrite, a CLE outline, a deposition-prep checklist with no names. What you cannot trust it with: anything that names a client, a counterparty, a deal, or a witness. The line is concrete, and it should be policy.
Six categories of content do not belong in a consumer ChatGPT account, regardless of tier:
Personally identifiable information about employees, customers, witnesses, or counterparties
Deal terms before public announcement, including pricing, indemnification, and earn-out structures
Settlement amounts and litigation strategy memos
Employee compensation data and internal HR matters
Witness statements, deposition prep notes, anything subject to a protective order
Anything covered by an NDA where the counterparty has not consented to AI processing
If you would not paste it into a public Google Doc, do not paste it into ChatGPT. The Google Doc test does more work than most security trainings.
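Teams that want to operationalize the Google Doc test can put a pre-send screen in front of any consumer chat platform. The sketch below is illustrative only: the `screen_prompt` function and its regex patterns are hypothetical examples we made up for this article, cover only a sliver of the six categories above, and are no substitute for a real DLP tool or a vetted PII library.

```python
import re

# Illustrative patterns only; a real deployment would rely on a proper
# DLP tool, not ad-hoc regexes. All names here are hypothetical.
PATTERNS = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    # Dollar figures often signal deal terms or settlement amounts.
    "dollar_amount": re.compile(r"\$\s?\d[\d,]*(?:\.\d{2})?"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the categories of sensitive content found in a draft prompt.

    An empty list means the prompt passed this (very rough) screen;
    any hit means a human should review before the text leaves the firewall.
    """
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(screen_prompt("Summarize the attached CLE outline."))      # → []
print(screen_prompt("Settle with jdoe@acme.com for $250,000."))  # → ['email_address', 'dollar_amount']
```

The point of a screen like this is not accuracy, it is friction: forcing a pause before anything named in the six categories reaches a consumer chat window.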
ABA Opinion 512 and Model Rules 1.1 and 1.6
The ethics framework for lawyer use of generative AI rests on two ABA Model Rules and one Formal Opinion. ABA Formal Opinion 512, issued July 29, 2024, requires informed client consent before confidential client information is fed into a self-learning generative AI tool, unless no information relating to the representation is being input. Model Rule 1.1 Comment 8 imposes a duty of technological competence, requiring lawyers to keep abreast of the benefits and risks of relevant technology. Model Rule 1.6 governs disclosure to third parties, including OpenAI sub-processors and abuse-monitoring reviewers.
In combination, the three sources establish a working test. A lawyer using a generative AI platform for legal work has to be competent in how the platform handles data, has to keep client information confidential as Rule 1.6 defines it, and has to obtain informed consent if confidential information will flow into a tool that learns from inputs. ChatGPT consumer tiers do not pass that test cleanly. Business and Enterprise tiers can, with the right contract and the right deployment. A purpose-built legal AI platform is engineered to pass it from day one.
Privilege, Confidentiality, and the Heppner Question
Conversations with consumer-grade generative AI did not survive the attorney-client privilege test in United States v. Heppner (SDNY, Feb 17, 2026), the first federal ruling on the question and a decision of first impression. Judge Jed Rakoff held that exchanges between a defendant and a generative AI platform were neither privileged nor work product.
The court built its reasoning on three points: the AI platform is not an attorney, the platform's terms permitted training and disclosure, and the defendant used the platform without counsel directing the use. As a district court decision, Heppner is not binding precedent on any other court, including other judges in SDNY. It is highly persuasive authority, especially in SDNY and the Second Circuit, and courts elsewhere are likely to look to Rakoff's reasoning.
Heppner involved Claude. The same three-part framework applies to any ChatGPT exchange a court reviews. A consumer ChatGPT account would face the same questions Rakoff asked: is the platform an attorney, do the terms permit training and disclosure, and was use directed by counsel. A Business or Enterprise workspace, deployed inside a counsel-directed framework, has a stronger case but not a settled one.
Tricia Kinney, General Counsel at BlueLinx, frames the working principle on the CZ and Friends podcast:
"I am a huge fan of using legal specific AI tools as opposed to consumer specific AI tools. You want them training in the same context that we're operating in."
What survives Heppner is a narrow lane: a Business or Enterprise workspace with Zero Data Retention, deployed under counsel direction, used only for work where privilege is not the live question. Everything outside that lane is exposure.
Counsel-directed, SOC 2 Type II, with ZDR on the model layer. That is the deployment shape Rakoff signaled would matter. Start a 14-day free trial of GC AI or book a 15-minute demo.
For the full Rakoff three-part test, the Kovel doctrine carve-out, and what counsel-directed deployment requires, see the Heppner ruling and what it means for legal AI. For the wider privilege landscape across Mata, Park, and the post-Heppner adoption guidance, see ChatGPT for Lawyers.
What In-House Teams Need from a Legal AI Platform
In-house counsel need four privacy guarantees from any legal AI platform: contractual confidentiality with the underlying model providers, no training on customer data, SOC 2 Type II certification renewed annually, and a deployment that lets counsel direct the use. That maps to four practical questions for any platform on your shortlist.
Does the vendor have a written zero data retention agreement with the LLM providers it uses?
Is the vendor SOC 2 Type II certified, with the certification renewed annually?
Is customer data segregated, encrypted at rest and in transit, and isolated from other tenants?
Is the platform deployed in a way that lets counsel exercise the directing role privilege law expects?
GC AI is SOC 2 Type II and SOC 3 certified, GDPR compliant, with zero data retention agreements with OpenAI and Anthropic, and AES-256 encryption. Each customer's data sits in a logically isolated database. Your security and IT teams can verify the certifications at trust.gc.ai.
This posture is by design. Cecilia Ziniti, GC AI's CEO and co-founder, was a general counsel three times (Anki, Bloomtech, and Replit), and an in-house counsel at Amazon and Cruise. She built GC AI to solve the privacy and workflow problems she lived with as an in-house lawyer, problems that consumer chat platforms were never designed to handle.
The platform is built for legal teams to deploy under counsel direction, which is the operational shape Judge Rakoff signaled would matter when AI use comes up in litigation. GC AI also delivers 21% greater accuracy than generic AI tools like ChatGPT, according to GC AI's December 2025 ROI study of more than 100 active customers. Privacy and accuracy live inside the same platform.
The privacy posture pairs with features we built specifically for in-house legal work. We built Playbooks to encode your contract review standards into repeatable workflows, so a team of five lawyers reviewing the same NDA returns the same redlines. We built Exact Quote to verify citations at the character level against your own uploaded documents, so every claim traces back to where it came from.
Trisha Mauer, VP of Legal at Tonal, ran the comparison directly:
"I go straight to GC AI for everything from research requests to litigation responses. I've compared against ChatGPT, GC AI gives more comprehensive responses appropriate for a lawyer to use. After six months of use, I'm sure I've saved hundreds of hours."
More than 6,000 lawyers have taken GC AI's free CLE-certified prompting classes, taught by former general counsels. Use the classes as your team's structured ramp before rolling the platform out.
See how GC AI handles the same prompts under SOC 2 Type II and zero data retention with OpenAI and Anthropic. Try GC AI free for 14 days, no credit card.
ChatGPT vs GC AI: Privacy at a Glance
Strip away the certifications and pricing details in the plan-by-plan table above, and four structural commitments separate GC AI from any ChatGPT tier: SOC 3 certification on top of SOC 2 Type II, a logically isolated database for every customer, a counsel-directed deployment posture by design, and the legal-specific workflow layer (Playbooks, Exact Quote, the legal system prompt) that consumer and business AI platforms do not deliver.
ChatGPT Business and Enterprise close most of the privacy gap versus the consumer tiers. Neither tier raises the bar to the legal-specific workflows or privilege-aware deployment in-house teams need. GC AI does. For the full feature-by-feature view, see the GC AI vs ChatGPT comparison.
"Out of all the AI tools I've used, GC AI delivers the most impact for attorneys: confidential, reliable, and efficient." —Joys Choi, Senior Director, Legal at Tipalti
Choi has logged 609 hours saved, the equivalent of 76 full working days, since adopting GC AI.
Estimate your team's potential time savings with the ROI calculator.
Start With One Privacy-Sensitive Workflow
Pick one workflow your team has been holding back from AI because of privacy concerns. A first-pass review of an MSA. A gut-check on a regulatory question. A draft response to a vendor's redlined NDA. Run it on GC AI, where the privacy commitments are contractual, the deployment is counsel-directed, and the output is calibrated for legal voice. The 14-day trial costs nothing and tells you in days what a procurement review would tell you in months.
Settings reduce data footprint. Contracts create trust. See whether "private enough" and "trustworthy enough for legal work" finally line up on GC AI.
Your business moves fast. So should you. Join the 1,500+ in-house legal teams using GC AI.
Frequently Asked Questions
Is ChatGPT Private?
ChatGPT is not private by default. The Free, Go, Plus, and Pro consumer tiers train on user conversations unless the user opts out, and OpenAI retains conversations on its servers for up to 30 days under its abuse-monitoring policy. Business and Enterprise workspaces do not train on customer data by default and offer Zero Data Retention on qualifying plans. Neither tier is built around the privilege bar in-house legal work demands.
Is ChatGPT Plus Private?
ChatGPT Plus shares the same privacy defaults as Free, Go, and Pro: training on by default, no Zero Data Retention option, 30-day server retention. Paying $20 a month does not buy more privacy. See the Plus section above for detail.
Does ChatGPT Temporary Chat Stay Private?
Temporary Chat does not save to your history, does not update memory, and (per OpenAI policy) does not train the model. OpenAI still retains Temporary Chat conversations on its servers for up to 30 days for abuse monitoring before deletion. The conversation is accessible to OpenAI under its standard policies and producible under valid legal process during that window. Temporary Chat is useful for quick prompts but does not create a confidential channel.
If You Pay for ChatGPT, Is It Private?
Paying for ChatGPT improves your privacy posture only when you cross from a consumer tier to a Business or Enterprise workspace. Plus and Pro inherit the Free defaults, including default training on user conversations. Business at $20 per seat per month annual ($25 monthly, after OpenAI's April 2, 2026 price reduction) and Enterprise on a custom contract exclude your data from training and offer Zero Data Retention on qualifying terms.
Is There a Private Version of ChatGPT?
The closest equivalent to a "private version of ChatGPT" inside the OpenAI product line is ChatGPT Enterprise or a qualifying Business workspace. Both exclude your data from training, offer Zero Data Retention, and provide admin controls over retention, sharing, and exports. For confidential legal work, an Enterprise workspace is a higher bar than a consumer tier but is not engineered for legal workflows or privilege-aware deployment. Legal AI platforms like GC AI add the legal-specific layer on top of equivalent enterprise security primitives.
Is ChatGPT a Private Company?
OpenAI is a privately held company headquartered in San Francisco. ChatGPT's privacy defaults depend on the plan you are on. OpenAI's corporate structure does not change them.
How Do I Make Sure My ChatGPT Is Private?
Toggle off "Improve the model for everyone" in Data Controls, use Temporary Chat for one-off prompts you do not want in history, delete conversations you no longer need, revoke shared links you no longer want public, and move team-shared work into a Business or Enterprise workspace. None of these settings changes the privilege analysis for legal work. They reduce the data footprint without making consumer ChatGPT confidential.
Does ChatGPT Keep My Data After I Delete It?
Yes, for up to 30 days. Deleted conversations and Temporary Chats remain on OpenAI's servers for that window under the abuse-monitoring policy, then are removed from production systems. The 30-day window applies across every consumer plan and to Business and Enterprise plans except where Zero Data Retention has been negotiated.
Is ChatGPT Safe for Legal Work?
ChatGPT is the world's best general-purpose AI platform, and for general work it is excellent. For legal work that requires privilege and contractual confidentiality, consumer ChatGPT does not clear the bar Judge Rakoff laid out in United States v. Heppner. ChatGPT Business and Enterprise raise the privacy bar but are not built for legal workflows or counsel-directed deployment. A legal AI platform engineered for in-house counsel is the platform built for that work. 1,500+ legal teams use GC AI, including 80+ public companies. Start your 14-day free trial of GC AI.