Across in-house legal teams, ChatGPT is already in the building. Lawyers on your team are using Plus to redline boilerplate. Engineering pasted a function into Free last week. Sales ran a customer's signed term sheet through it for a quick summary. The question, if you are general counsel, is whether any of that creates a problem under the contracts your business already signed. Is ChatGPT confidential enough for any of it?
The answer turns on which definition of "confidential" you mean. ChatGPT can be private in the consumer sense, with the right toggles flipped, and we walked through that question in Is ChatGPT Private? What Lawyers Can Trust It With. No consumer ChatGPT plan satisfies the contract-confidentiality bar your counterparties expect. An NDA's confidentiality clause is a written promise from a counterparty that handles the information in a specific way for a specific purpose, with named permitted disclosees and a return-or-destroy obligation at the end. Your business partners signed NDAs. ChatGPT did not.
We are GC AI, the legal AI platform 1,600+ in-house teams (including 80+ public companies and 25 unicorns) use to keep confidential work inside the contractual fence their NDAs require. The recurring question we hear from incoming customers reads like this:
"Did our employees just breach an NDA when they pasted that into ChatGPT, and what should our policy say going forward?"
Below: how a typical NDA defines confidentiality, what each ChatGPT plan covers (and does not) on the contract layer, the six paste patterns that trigger the breach risk, the policy primitives counsel can ship Tuesday morning, and the twelve due-diligence questions to put in front of any AI vendor.
See the contract layer in 20 minutes with a former in-house lawyer, or start free and run one confidential workflow on a platform built for the NDA stack your business is operating under.
What "Confidential" Means in an NDA
Pull any mutual NDA off your desk. The defined term "Confidential Information" usually reads like this:
"Confidential Information" means any non-public information disclosed by one party to the other, whether disclosed in writing, orally, or by inspection, that is marked confidential, identified as confidential at the time of disclosure, or that a reasonable person would understand to be confidential under the circumstances.
The operative section then says the receiving party will:
Hold the Confidential Information in strict confidence.
Use the Confidential Information only for the Permitted Purpose.
Limit access to the Confidential Information to its employees, advisors, and representatives who have a need to know and who are bound by confidentiality obligations at least as protective as those in this Agreement.
Return or destroy the Confidential Information at the end of the engagement, upon written request.
That is the bar. When your CFO sends you an LOI marked confidential and you paste a clause into ChatGPT for a quick redline, the analysis is simple. OpenAI is not your employee. OpenAI is not your advisor. OpenAI is not bound by the NDA you signed. Whether OpenAI counts as a permitted disclosee depends on your NDA's exact language and on whatever subprocessor or service-provider exception your contract carved out.
For a meaningful number of real-world NDAs, the answer is no, OpenAI is not a permitted disclosee: pasting confidential information into a third-party AI service creates exposure under a typical confidentiality clause, regardless of whether the service is technically "private."
A platform can be encrypted, never train on your data, retain nothing, and still put you sideways with the contract you signed two months ago. The technical posture matters. The contract posture matters more.
ChatGPT Plans, Side by Side, on the Contract Layer
Most plan comparisons in this category cover training defaults and retention windows. Those are privacy questions. The contract-layer view reads differently. The data below was current as of May 2026.
| Plan | Negotiable DPA | ZDR With LLM Providers | Audit Logs | Indemnification for Confidentiality Breach | Clears Typical NDA Bar? |
|---|---|---|---|---|---|
| Free, $0 | No | No | No | No | No |
| Go, $8 | No | No | No | No | No |
| Plus, $20 | No | No | No | No | No |
| Pro, $100 or $200 | No | No | No | No | No |
| Business, $20 or $25 per seat | Limited template | Available on qualifying plans | Yes | Limited | Sometimes |
| Enterprise, custom | Negotiated per customer | Available | Yes | Negotiated | Often, depends on stack |
| API, pay-as-you-go | Customer's own terms | Available with ZDR | Customer-built | Limited | Depends on application |
| GC AI, custom | Negotiated, signed | Yes, with OpenAI and Anthropic | Yes | Yes | Yes |
Free, Go, Plus, Pro: No Contract to Stand On
The four consumer tiers share the same contract layer, which is to say, none. There is no DPA you negotiate, no service-provider carve-out language, no indemnification for confidentiality breaches, and no contractual audit right if a counterparty asks how their information was handled. The compute envelope changes between Free, Go, Plus, and Pro. The contract envelope around your data does not.
Plus and Pro are great consumer products. They are also a great way to send a CEO's draft term sheet to a model that does not work for you and is not bound by the NDA your business signed. The hourly rate of the lawyer doing the paste is irrelevant to that analysis.
Business, $20 or $25 per Seat: Template DPA Tier
ChatGPT Business adds a workspace, admin controls, and a contractual commitment that customer content is not used to train models by default. The price dropped to $20 per seat on annual billing on April 2, 2026; monthly billing stayed at $25.
Business gives you something to point your security team at, which Plus does not. Whether what you point them at clears your specific NDA stack depends on three things: the DPA terms ChatGPT Business offers your company (typically a template, sometimes negotiable for larger workspaces), the carve-outs in the underlying NDAs you are working under, and your in-house policy on third-party AI subprocessors. Business is a meaningful upgrade for general business use. It is a partial answer for confidential legal work.
Enterprise, Custom: A Real Contract, Still General-Purpose
Enterprise is where the contract layer becomes real. Negotiated DPAs, encryption at rest and in transit, SSO, audit logs, defined retention, and indemnification language your security and legal teams can review. Some NDA stacks accept a properly papered ChatGPT Enterprise deployment. Others demand more, particularly NDAs with strict no-third-party-disclosure language, NDAs that pre-list permitted subprocessors, or industry-specific NDAs in finance, life sciences, or government work.
Enterprise also remains a general-purpose AI platform. Confidential legal work has workflow and citation requirements (Exact Quote, Playbooks, Word integration) that sit outside what ChatGPT Enterprise was built for.
API, Pay-as-You-Go: Whatever You Build
The API is a building block. It defaults to 30-day retention for abuse monitoring, with ZDR available for qualifying organizations. Whether an application built on the API clears your NDA bar depends on the application, the wrapper contract, and whether the integrating company has wrapped the API in the contract layer your business needs. The API itself does not have a confidentiality posture. The application built on it does, and that is what your contract review sees.
The Six Categories Employees Paste That Trigger Breach Risk
Across the in-house teams we work with, six categories of paste account for almost the entire confidentiality exposure surface. Each one maps to a contract clause that lights up the moment the paste happens.
Source code and engineering specifications. Engineering pastes a function or a sequence into ChatGPT to debug it. Hits: employee NDA, IP assignment agreement, vendor confidentiality clauses, and (for regulated industries) export-control language.
Financial projections, board materials, and M&A target information. Strategy or finance pastes a model, a board deck excerpt, or a target name. Hits: counterparty NDAs, securities-law restrictions on material non-public information, and board confidentiality obligations.
Compensation data, performance reviews, and employment matters. HR pastes a comp grid, a PIP narrative, or an investigation summary. Hits: employment agreements, state pay-equity confidentiality statutes, and ADA-protected information rules.
Customer data and PII. Sales or customer success pastes an account note, a support ticket, or a list of users. Hits: customer DPAs, the company's privacy policy, GDPR or CCPA processing terms, and any contractual data-residency commitments.
Counterparty paper marked confidential. Legal pastes an executed NDA, an LOI, an MSA redline, or a counterparty's term sheet. Hits: the counterparty's confidentiality clause, the confidentiality-of-negotiations carve-out, and any "no use except for purpose" clause in the underlying agreement.
Trade secrets and protected know-how. Anyone pastes a proprietary formula, a process document, or a workflow recipe. Hits: state Uniform Trade Secrets Act protections, the federal Defend Trade Secrets Act, and the company's trade-secret protection program (which usually requires reasonable efforts to maintain secrecy as an element).
The pattern that connects the six is the same. The information stops being confidential, in the legal sense, the moment it is disclosed to a third party that is not contractually bound to handle it that way. Privacy controls do not change that. Contract language does. GC AI exists to give all six categories one contract-bound destination, so the legal team can route each one without thinking.
The Samsung Playbook: Three Pastes, Twenty Days, One Ban
Samsung is the cautionary case every in-house team should study. The shape of what happened is more useful than the headline.
In March 2023, Samsung permitted ChatGPT for engineering use. Within twenty days, three confidential disclosures had been recorded.
Day 1. A semiconductor engineer pasted source code from a faulty chip-fabrication program into ChatGPT, asking the model to identify the bug. The chat returned a fix.
Day 8. A different engineer pasted source code from a confidential equipment program into ChatGPT to optimize a sequence.
Day 20. A third employee pasted a transcript of an internal meeting into ChatGPT and asked for a summary.
On May 1, 2023, Samsung announced an enterprise-wide ban on generative AI on company devices and corporate networks.
The platform was running as designed in all three pastes. The contract fence the company had built around its IP, on the other hand, had a Chrome tab in it. Samsung's takeaway was about scope. The takeaway for in-house teams reading the case is the same. Policy that says "use approved AI for confidential work" is policy. Policy that routes confidential work through the legal AI platform automatically, so no engineer ever has to make the routing decision at 11 PM, is operations.
The Lawyer-Specific Layer: Privilege, the ABA, and Heppner
A consumer privacy posture is one question. Attorney-client privilege is a different one, and it is the one a federal court resolved against the defendant in United States v. Heppner, the first-impression Southern District of New York ruling on AI privilege. The court held that AI chat exchanges with a consumer-grade general AI did not preserve privilege, because the platform did not preserve confidentiality in the way the doctrine requires.
The full Rakoff three-part test sits in the linked deep-dive. The point for in-house counsel: privilege depends on confidentiality, and confidentiality depends on what the platform contractually promised about your data. A consumer ChatGPT plan did not promise enough to clear that bar in front of one federal judge in February 2026, and your opposing counsel has read that opinion too.
The American Bar Association has separately published Formal Opinion 512 on lawyers' use of generative AI, which sets the professional-responsibility frame for competence, confidentiality, communication with clients, and supervision when an AI platform is part of the workflow. For the practitioner-facing read on Mata, Park, Wadsworth, and the four rules counsel should follow before prompting, see ChatGPT for Lawyers in 2026.
What Counsel Should Put in the AI Use Policy
A workable AI use policy is short, routable, and ownership-clear. The version we see working in large in-house functions and in three-person teams shares the same primitives. Use these as a paste-able starting point and adjust to your contract stack.
Tiered tool use. Public-record research, regulatory tracking, and personal learning may use general-purpose AI on the consumer plan provided through the company. Counterparty paper, executed agreements, and material that contains another party's confidential information must be processed only on the legal AI platform provisioned by the legal team.
Hard-stop categories. Customer PII, employee compensation, protected health information, and source code marked confidential must not be entered into general-purpose AI under any tier.
MNPI lane. M&A, financing, board, and material non-public information must be processed only on the legal AI platform, with logging enabled, and only by users on the deal team.
Vendor approval. Before anyone uses a new AI tool for company work, legal will run the vendor due diligence checklist (below) and security will sign off on the SOC 2 report.
Training requirement. Each employee will complete the company's annual AI use training before being granted access to the legal AI platform.
Ownership. Legal owns this policy. Security owns the deployment. People owns the training. The CEO is the executive sponsor.
Incident reporting. Suspected disclosure of confidential information through any AI service must be reported to legal within 24 hours.
Cadence. This policy is reviewed quarterly. Updates ship through the standard policy update channel.
That is the eight-line version. Teams typically add a paragraph on third-party data handling, a paragraph on retention defaults, and a paragraph on what happens when an employee finds a useful AI tool not on the approved list. For the training requirement in line 5, GC AI's free CLE-eligible classes are a starting point legal teams can drop in directly. The structure stays the same.
If line 1 of the policy routes confidential work to "the legal AI platform provisioned by the legal team," GC AI is the platform 1,600+ in-house teams provisioned for it. Start free or book a 20-minute setup walkthrough.
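For teams that want the tiered routing rule enforced in tooling rather than only in a policy PDF, the logic above can be sketched in a few lines. Everything in this sketch is an assumption for illustration: the category names, destination labels, and default-closed behavior are made up here, not taken from any shipped policy, and should be mapped to your own contract stack.

```python
# Illustrative sketch of the tiered routing rule above.
# Category names and destination labels are hypothetical.

CONSUMER_AI_OK = {"public_research", "regulatory_tracking", "personal_learning"}
HARD_STOP = {"customer_pii", "compensation_data", "phi", "confidential_source_code"}
LEGAL_PLATFORM_ONLY = {"counterparty_paper", "executed_agreement", "mnpi", "board_materials"}

def route(category: str) -> str:
    """Return the destination for a task category. Defaults closed."""
    if category in HARD_STOP:
        return "blocked"             # no general-purpose AI, any tier
    if category in LEGAL_PLATFORM_ONLY:
        return "legal_ai_platform"   # signed DPA, logging enabled
    if category in CONSUMER_AI_OK:
        return "general_purpose_ai"  # company-provided consumer plan
    return "escalate_to_legal"       # unclassified work gets a human
```

The design choice that matters is the last line: a rule set that defaults closed sends unclassified work to a person, instead of leaving the 11 PM routing decision to a browser tab.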
Vendor Due Diligence: Twelve Questions Before Procurement Signs
The questions counsel should be able to answer in writing about any AI vendor that touches confidential business information. Send these to the vendor before the security review starts. The answers either come back fast and clean, or they tell you something.
Does the vendor have a written DPA your company has reviewed and signed?
Does the vendor have zero data retention agreements with the underlying LLM providers? Which providers, in writing?
Does the vendor train models on customer content? On any tier?
Does the vendor maintain SOC 2 Type II certification, renewed annually, and is the report available?
Does the vendor maintain SOC 3 certification or an equivalent public attestation?
Does the vendor encrypt customer content at rest and in transit, and what cipher?
Does the vendor offer logical isolation per customer (single-tenant or per-tenant database segregation)?
What sub-processors does the vendor use, and what is the notice obligation if the list changes?
What is the breach-notification SLA in the master agreement?
Does the vendor offer indemnification for confidentiality breaches caused by the platform's data handling?
Does the vendor provide audit logs the customer can export and retain?
Where is customer data stored geographically, and does that match your data-residency commitments to your own customers?
Procurement will ask twenty more. These twelve are the legal-facing minimum.
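One way to keep the twelve answers auditable is to treat the checklist as data rather than an email thread. The sketch below is a hypothetical intake record, not a real procurement system; the field names are invented for illustration and should be mapped to whatever form your team actually sends vendors.

```python
# Hypothetical intake record for the twelve vendor questions above.
# Field names are illustrative, not from any real procurement form.

VENDOR_QUESTIONS = [
    "signed_dpa",
    "zdr_with_llm_providers",
    "no_training_on_customer_content",
    "soc2_type2_current",
    "soc3_or_public_attestation",
    "encryption_at_rest_and_in_transit",
    "per_customer_isolation",
    "subprocessor_list_and_notice",
    "breach_notification_sla",
    "confidentiality_indemnification",
    "exportable_audit_logs",
    "data_residency_disclosed",
]

def open_items(answers: dict) -> list:
    """Return the questions the vendor has not yet answered in writing."""
    return [q for q in VENDOR_QUESTIONS if not answers.get(q)]
```

A vendor whose open-items list is empty has covered the legal-facing minimum in writing. Anything left on the list is the agenda for the next call.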
What an Approved Stack Looks Like for In-House Legal
The teams we work with day to day are not banning AI. They are building one approved-stack workflow per task type and routing everyone through it.
Public research, regulatory tracking, and personal learning go through general-purpose AI. The terms of those tools fit that work. Anything that touches a counterparty document, an executed agreement, an MSA, an NDA, an LOI, a board consent, an employment matter, or a regulatory submission goes through a legal AI platform with a signed DPA, no training on customer content, and a confidentiality posture that fits the underlying NDA stack.
"If it sees a missing confidentiality clause, I'll just ask it to draft one I can drop right into the agreement, no leaving the system, no reformatting."
Tiffany Lee, General Counsel and Corporate Secretary at Liquid Death
The reason that workflow holds together is that the platform sits inside the contractual fence the legal team has already built around the company's confidential work. The lawyer never has to switch tabs to a consumer chat product to get the speed.
"The night before, I worked through the whole strategy with GC AI, thinking through what would be sensitive, what ranges to hold, what counterarguments to expect. It gave me a plan and the confidence to lead discussions.”
Cameron Clark, Head of Legal at Arc'teryx
A practice change of that scale is durable only when the platform underneath it is approved for the full surface area of the work, including the confidential surface area.
"If you only have a budget for one tool, choose the one fine-tuned for in-house legal.”
Alexandra Sepulveda, Assistant General Counsel at Trust & Will
How GC AI's Contract Layer Reads Differently
The four contract-layer questions for any AI vendor: what does the DPA say, what is the data retention default, who trains on the inputs, and what does the platform commit to in writing.
GC AI is SOC 2 Type II and SOC 3 certified, GDPR compliant, with zero data retention agreements with OpenAI and Anthropic, and AES-256 encryption.
In practice, that means three contract-layer commitments. First, a DPA negotiated and signed per customer. Second, zero retention with the underlying LLM providers (OpenAI and Anthropic) under written agreement. Third, indemnification language and breach-notification SLAs that match what an in-house security team would draft if writing the contract from scratch.
The features that sit on top (Exact Quote for character-level citation, Playbooks for repeatable contract review workflows, the Word add-in so the redline never leaves the document) are downstream of that contract posture. They become useful once the contract layer is settled.
For the published security architecture, see the trust center. For the plain-language version, see the security page. For the side-by-side comparison, see GC AI vs ChatGPT.
Audit the gap in 20 minutes with a former in-house lawyer, or start free and run one confidential workflow on a platform built for the contract layer.
So When Can Lawyers Use ChatGPT, Concretely?
A short field guide.
Use ChatGPT for: public-record legal research, regulatory tracking, plain-language explanation of a doctrine for your own learning, drafting non-confidential business communications, brainstorming a new training program for sales, or any task where the inputs would be safe to publish on a public blog.
Avoid ChatGPT for: counterparty paper, executed agreements, board materials, deal documents, employment matters, regulatory submissions with confidential business information, or anything marked confidential under an NDA your team is operating inside.
Move to a legal AI platform with a signed DPA for: contract review and redlining, market-standard analysis on third-party paper, repeatable playbook workflows, anything that gets escalated to the CEO or board, and anything that might be reviewed by opposing counsel later.
The line is about matching the contractual posture of the platform to the contractual posture of the work.
The CEO-Facing Version, in Three Sentences
If your CEO asks you whether the team can use ChatGPT, give the three-sentence answer. Yes for public-record research and learning. No for anything marked confidential under your NDA stack. For everything in between, buy a legal-grade AI platform with a signed DPA.
Build the policy that routes each task to the right tier. Train the team on it. The exposure lives in the gap between the policy and the Tuesday afternoon term sheet.
See What Confidential-Grade Legal AI Looks Like
Run research, redline contracts, and brief your CEO on a platform built for the confidentiality regime your business runs on.
Your business moves fast. So should you. Join the 1,600+ in-house legal teams using GC AI.
Frequently Asked Questions
Is ChatGPT Confidential Enough for Privileged Legal Work?
Consumer ChatGPT plans (Free, Go, Plus, Pro) do not provide the contractual confidentiality posture that attorney-client privilege depends on. A federal court in United States v. Heppner ruled in February 2026 that AI exchanges through a consumer-grade general AI did not preserve privilege. ChatGPT Business and Enterprise offer stronger contractual terms, and whether they clear the privilege bar in your jurisdiction depends on the negotiated DPA, the underlying engagement, and your state bar's guidance.
Can Pasting a Confidential Document Into ChatGPT Breach an NDA?
It can. A typical NDA requires the receiving party to limit disclosure to employees and advisors with a need to know, bound by confidentiality obligations at least as protective as the NDA itself. Pasting confidential information into a third-party AI service that did not sign the NDA may breach the confidentiality obligation depending on the contract's exact language, the carve-outs for service providers and subprocessors, and the AI vendor's commercial terms. The analysis is contract-specific.
Is ChatGPT Plus Confidential?
ChatGPT Plus is a consumer subscription. There is no negotiated DPA, no Business Associate Agreement, no service-provider carve-out language for use under your existing NDAs, and no indemnification if a counterparty later objects to how its information was handled. Plus is appropriate for non-confidential research and personal use. It does not clear the contract-confidentiality bar a typical NDA imposes.
Is ChatGPT Enterprise Confidential?
ChatGPT Enterprise is the tier built for the question in-house lawyers are asking. It comes with negotiated terms, no model training on customer content by default, encryption at rest and in transit, SSO, audit logs, and contractual data handling commitments. Whether it fits a specific NDA stack depends on the negotiated DPA, the named subprocessors, the defined retention, and the carve-outs in the underlying NDAs. Some stacks accept it. Others want a legal-specific platform with workflow and citation features Enterprise does not include.
Is There a Confidential Version of ChatGPT?
ChatGPT Enterprise (and ChatGPT Business under qualifying terms) is the closest OpenAI offers to a confidential-grade tier. Both exclude customer content from training and offer Zero Data Retention on qualifying terms. For confidential legal work, a purpose-built legal AI platform with a signed DPA, zero data retention with the underlying LLM providers, and SOC 2 Type II plus SOC 3 certification (such as GC AI) sits more cleanly inside the confidentiality regime of standard in-house engagements.
Is Information Entered Into ChatGPT Confidential?
Information entered into ChatGPT is not confidential in the contract sense unless the tier you are on includes contractual confidentiality commitments and the work fits the carve-outs in your underlying NDAs. Free, Go, Plus, and Pro are consumer products with consumer terms. Business and Enterprise tighten the contract layer. Confidential business information should be routed to a legal AI platform with a signed DPA.
How Confidential Is ChatGPT for Lawyers?
Consumer ChatGPT plans do not provide the contractual confidentiality posture in-house legal work requires. ChatGPT Business and Enterprise tighten the posture meaningfully, and whether they fit a specific in-house workload depends on the negotiated DPA and the underlying NDA stack. For privileged work, a purpose-built legal AI platform with a signed DPA, zero data retention, SOC 2 Type II plus SOC 3 certification, and counsel-directed deployment is the cleaner fit.
Is ChatGPT 4 Confidential? Does the Model Version Matter?
Model version (ChatGPT 4, ChatGPT 4o, ChatGPT 5) does not change the confidentiality analysis. What changes the analysis is the tier and the contract: which plan you are on, what DPA you signed, what carve-outs your NDAs make for subprocessors. A more capable model running on Plus is still bound by Plus's contract terms. A weaker model running on Enterprise is still inside Enterprise's contract layer.
Is the Paid Version of ChatGPT Confidential?
Paying for ChatGPT improves the confidentiality posture only when the payment moves you from a consumer tier (Plus, Pro) to a business tier (Business, Enterprise). Plus and Pro are consumer subscriptions with consumer terms. Business adds a template DPA. Enterprise adds a negotiated DPA. The price is not the variable that matters. The contract layer is.
What Should I Tell My Engineers About Pasting Code Into ChatGPT?
Source code marked confidential, code that implements a trade secret, or code subject to an open-source license review should not be pasted into a consumer ChatGPT tier. The exposure lives at the intersection of the employee NDA, the IP assignment agreement, the company's trade-secret protection program, and any third-party license obligations. Engineering teams that need AI in the IDE should be routed to a tool the security team has approved, with a DPA on file, and with logging enabled.