Updated 3/23/26
Summary: A recent ruling from a New York federal court, United States v. Heppner, addressed attorney-client privilege and AI tools. Heppner was a criminal case in which the court held that privilege did not attach to the communications at issue because they were made by a non-attorney to a consumer-grade AI tool that did not preserve confidentiality. GC AI, in contrast, is an enterprise tool designed for use under the direction of an attorney that maintains confidentiality, and it therefore preserves attorney-client privilege for information input to and output from the system.
Lawyers are adopting AI at scale. This shift raises the question of how inputting confidential client information into an AI tool affects attorney-client privilege. In February 2026, a federal court addressed this question for the first time in United States v. Heppner (S.D.N.Y. Feb. 17, 2026, hereafter, "Heppner").
Although Heppner arose in a narrow context – a non-lawyer using a consumer-grade AI tool in a criminal case – the court's reasoning carries implications for practicing attorneys. The court’s analysis turned on the same questions that govern any privilege inquiry: who is communicating, whether confidentiality was maintained, and whether counsel directed the activity. Heppner offers a framework for considering how AI use intersects with an attorney’s ethical obligations of confidentiality and the protections afforded by privilege.
The Legal Framework For Lawyers' Handling of Confidential Information Generally
Three distinct obligations apply to a lawyer's handling of client information in the course of the lawyer's practice, including when using technology: (1) attorney-client privilege, (2) the work product doctrine, and (3) the ethical duty of confidentiality.
Attorney-client privilege protects confidential communications made to, from, or at the direction of an attorney for the purpose of obtaining legal advice, shielding them from disclosure to an adversary. The protection afforded by privilege is powerful; however, it can be waived when privileged communications are voluntarily shared with a third party. Any AI tool, like any technology that processes communications, is a third party for privilege purposes: the provider is neither the client nor the lawyer. Privilege forms part of the common law and is also codified in federal and state law (see Upjohn Co. v. United States; Fed. R. Evid. 501; Fed. R. Evid. 502).
The work product doctrine protects materials prepared in anticipation of litigation. Work product is more durable than privilege: it is generally waived only by disclosure to an adversary or in a manner that substantially increases the risk of adversary access (see Hickman v. Taylor; Fed. R. Civ. P. 26(b)(3)).
The ethical duty of confidentiality is the broadest of the three. It covers all information relating to a lawyer's representation of a client and requires lawyers to make "reasonable efforts" to prevent unauthorized disclosure (see ABA Model Rule 1.6). In practice, this means evaluating any technology tool's data handling policies, terms of service, and security controls before inputting any client information into that tool.
What Bar Associations Have Said About AI and Confidentiality
Bar association guidance addressing the use of AI has focused primarily on the duty of confidentiality. In ABA Formal Opinion 512 (July 2024), the American Bar Association (ABA) advised that lawyers must evaluate confidentiality risks, and review terms of use and data handling policies, before inputting client information into any AI tool. Similarly, state bars in Florida, California, New York, Texas, D.C., Pennsylvania, and others have issued consistent guidance on the duty of confidentiality and the use of AI: lawyers must not input confidential information into AI tools lacking adequate confidentiality protections.
That guidance reflects a practical reality: not all AI tools offer the same confidentiality protections. Consumer-grade AI platforms (i.e., publicly available products like ChatGPT and Claude in their free or standard consumer tiers) generally lack the confidentiality safeguards found in enterprise-grade and legal-specific AI platforms, which often provide more robust contractual and technical data-handling commitments. Further, consumer-grade AI tools often use user data for model training, requiring individual users to opt out through their settings, if that option is available.

Source: ChatGPT Settings (from a consumer account), March 17, 2026
Indeed, ABA Formal Opinion 512 recognized that legal-specific tools warrant different treatment: "a lawyer's use of a [generative AI] tool designed specifically for the practice of law or to perform a discrete legal task, such as generating ideas, may require less independent verification or review, particularly where a lawyer's prior experience with the [generative AI] tool provides a reasonable basis for relying on its results."
The Heppner Ruling: What the Court Decided
In United States v. Heppner (S.D.N.Y. Feb. 17, 2026), Judge Jed Rakoff issued the first federal ruling squarely addressing AI and attorney-client privilege. In this case, a non-lawyer criminal defendant used the consumer version of Anthropic’s Claude to generate multiple documents, which the defendant later shared with his defense counsel.
The court considered the question: “whether, when a user communicates with a publicly available AI platform in connection with a pending criminal investigation, are the AI user’s communications protected by attorney-client privilege or the work product doctrine?”
The court concluded that the defendant's communications with Claude were protected by neither attorney-client privilege nor the work product doctrine.
The court found the three required elements of privilege unsatisfied:
The communications with Claude were not between a client and their attorney. The court held that Claude is not an attorney and that all recognized privileges in the attorney-client context require a "trusting human relationship" with a licensed professional who owes fiduciary duties and is subject to discipline. The court found that “[n]o such relationship exists, or could exist, between an AI user and a platform such as Claude”.
The communications were not confidential. The court reviewed Anthropic’s consumer privacy policy (as of February 19, 2025), which expressly permitted data collection on user inputs and AI outputs, use of that data for model training, and disclosure to third parties including governmental authorities. The court concluded that the defendant could have no “reasonable expectation of confidentiality in his communications” with Claude.
The communications were not made for the purpose of obtaining legal advice. In rejecting the defendant’s argument that he communicated with Claude for the “purpose of talking to counsel”, the court found that the defendant communicated with Claude on his own initiative, not at the suggestion or direction of counsel. What mattered was “whether Heppner intended to obtain legal advice from Claude, not whether he later shared Claude’s outputs with counsel”.
The court separately rejected the defendant’s work product claim, because the materials created by the defendant’s communications with Claude “were not prepared at the behest of counsel and did not disclose counsel’s strategy” at the time the defendant (again, not the lawyer) created them.
Critically, the court in Heppner left open that the result might differ where counsel directs a client to use an AI tool. In that scenario, the AI "might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer's agent within the protection of the attorney-client privilege."
While Heppner arose in a narrow factual context – a criminal case involving use by a non-lawyer – the court's reasoning extends further. The analysis turns on the same questions courts always ask about privilege: who is communicating, whether confidentiality was maintained, and whether counsel directed the activity. Those questions apply with equal force to in-house teams evaluating enterprise AI platforms.
How GC AI Preserves Privilege Under Heppner
GC AI is built for legal professionals and for use by and under the direction of attorneys. The platform operates under enterprise-grade security with contractual confidentiality protections designed to preserve attorney-client privilege. The court's analysis in Heppner supports the conclusion that, when used as designed, GC AI fits the counsel-directed framework the court distinguished from the non-lawyer defendant's unguided interactions. GC AI's design and intended use map to the elements the court found missing in Heppner.
1. Agency. The court in Heppner cited United States v. Kovel, 296 F.2d 918 (2d Cir. 1961) in observing that the AI “might arguably be said to have functioned in a manner akin to a highly trained professional who may act as a lawyer’s agent within the protection of the attorney-client privilege” had counsel directed the non-lawyer defendant to use the consumer-grade generative AI tool. The Kovel doctrine extends attorney-client privilege to non-lawyer agents retained by counsel to assist in providing legal advice.
GC AI is a tool retained by attorneys, used under attorney supervision, and bound by contractual confidentiality obligations – the three elements courts look for when applying Kovel to third-party service providers.
2. Purpose of Legal Advice. The Heppner court held that the defendant’s communications with Claude were not made for the purpose of obtaining legal advice because he “communicated with Claude of his own volition” rather than “at the suggestion or direction of counsel”.
When an attorney or their express delegate uses GC AI to research a legal question, draft a contract, or analyze a regulatory issue, they are using the tool to develop and provide legal advice, not seeking legal advice from the AI as though the tool were counsel.
3. Work Product. The court rejected the Heppner defendant’s work product argument because the AI-generated materials were “not prepared by or at the behest of counsel” and “did not reflect defense counsel’s strategy” at the time of creation.
Where an attorney uses GC AI to prepare materials in anticipation of litigation, those materials are prepared by counsel using a tool and reflect counsel’s legal strategy from the outset.
4. Confidentiality. The court’s confidentiality finding in Heppner was grounded in specific features of Anthropic’s consumer privacy policy (as of February 19, 2025): Anthropic collects data on both user "inputs" and AI "outputs"; Anthropic uses that data to "train" Claude; and Anthropic reserves the right to disclose data to "third parties," including "governmental regulatory authorities".
Unlike the consumer tool in Heppner, GC AI keeps communications with the platform confidential. GC AI's Services Agreement takes the opposite approach from the consumer privacy policy the court examined: it establishes binding confidentiality obligations, prohibits the use of customer data for model training, and maintains zero data retention with model providers:
Customer data is Confidential Information. All non-public information shared with GC AI, including data, documents, and queries, qualifies as "Confidential Information." GC AI is contractually obligated not to disclose Confidential Information for any purpose other than performing the Services, and to "take all necessary and reasonable precautions to prevent the disclosure of Confidential Information to any unauthorized third parties." (Sections 5.1, 5.2)
No model training. GC AI will not use customer Content to train generative AI models and will not disclose User Data for any commercial purpose unrelated to providing the Services without written consent. GC AI maintains zero data retention with model providers (meaning no customer data is stored by the underlying AI model providers after processing a query). (Section 4.9)
Customer data stays owned by the customer and segregated. Customers retain all right, title, and interest in their User Data and Content. Customer Content is strictly segregated and is not accessible by other users outside of the customer's organization. (Sections 4.2, 4.4)
Compelled disclosure protections. If GC AI were ever legally compelled to disclose Confidential Information, GC AI must (unless prohibited by law) promptly notify the affected customer so that the customer can seek a protective order, provide reasonable assistance in obtaining that order, and disclose only the minimum portion legally required. (Section 5.3)
Technical security. Customer data is encrypted using AES-256 at rest and TLS 1.2+ in transit, and is logically isolated in private database instances. GC AI is SOC 2 Type II certified. (The sketch below illustrates what AES-256 encryption at rest means in practice.)
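For technically minded readers, the following is a minimal illustrative sketch of what "AES-256 encryption at rest" means, using the open-source Python cryptography package. It is not GC AI's implementation; it simply shows the standard the item above refers to (TLS 1.2+ in transit is handled by the HTTPS connection itself rather than by application code like this).

```python
# Illustrative only: what AES-256 encryption "at rest" means in practice.
# This is not GC AI's code; it demonstrates the general standard.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # a 256-bit key, i.e. "AES-256"
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # unique value per encryption operation
plaintext = b"Confidential client memo: draft indemnification analysis."

# What gets written to storage is the ciphertext, not the readable document.
ciphertext = aesgcm.encrypt(nonce, plaintext, None)

# Only a holder of the key can recover the original content.
recovered = aesgcm.decrypt(nonce, ciphertext, None)
assert recovered == plaintext
```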
These protections also address waiver. In Heppner, the court wrote that "even if certain information that Heppner input into Claude was privileged, he waived the privilege by sharing that information with Claude and Anthropic, just as if he had shared it with any other third party." The court reasoned that “in light of Anthropic’s privacy policy… Heppner had no reasonable expectation that the inputs would not be shared with other third parties”.
In contrast with the consumer terms at issue in Heppner, GC AI's confidentiality obligations position it as a service provider analogous to other enterprise tools (such as Google Workspace, Slack, or Asana), where sharing information with the provider in the course of receiving services does not constitute third-party disclosure for purposes of waiver. See, e.g., Ill. State Bar Ass'n, Advisory Op. No. 96-10, Use of E-Mail, Lawyer Websites (May 1997), affirmed (Jan. 2010) (concluding that lawyers' use of email did not waive attorney-client privilege because mere transmission through third-party service providers does not compromise the reasonable expectation of privacy: "[t]he Committee [did] not believe that the opportunity for illegal interception by personnel of an ISP [made] it unreasonable to expect privacy of the message").
Finally, GC AI’s design addresses the risk of cross-customer data exposure. While not directly addressed in Heppner, ABA Formal Opinion 512 warned that "self-learning" AI tools "by their very nature, raise the risk that information relating to one client's representation may be disclosed improperly" by surfacing in outputs generated for other users.
GC AI's no-training policy and data segregation measures, wherein each customer's content is accessible only within that customer's organization, address this risk. Information input by one customer cannot appear in another customer's outputs or change the model.
The Key Takeaway: Built for Legal Work
GC AI is built for use by legal professionals and structured to support privilege preservation when used at the direction of counsel. Unlike consumer AI tools, GC AI operates with contractual confidentiality protections, enterprise security controls, and data handling practices designed for legal work.
Privilege determinations remain fact-specific and depend on how a tool is used and by whom. This page reflects GC AI's understanding of current law and guidance and is provided for informational purposes only. GC AI does not provide legal advice. (You knew that, but we're lawyers too.)
FAQ: Common questions from in-house teams
1. Can I preserve privilege when using GC AI?
Yes. The American Bar Association and state bars have advised that lawyers using generative AI tools should evaluate the provider’s security and confidentiality protections (see ABA Formal Opinion 512). GC AI is built to satisfy that standard: it operates under enterprise-grade security with contractual confidentiality protections designed to preserve attorney-client privilege, consistent with the framework articulated and applied to AI tools in Heppner.
Once a customer verifies GC AI’s security processes and the applicable terms, as more than 1,000 companies have done, they can enter information as they would with any other trusted cloud-based tool, such as Google Workspace, Slack, or Asana. As discussed above, GC AI's terms treat all customer data as Confidential Information, prohibit the use of customer Content for model training, and maintain logical data segregation across organizations.
2. What happens if GC AI receives a subpoena for customer data?
Pursuant to Section 5.3 of GC AI’s Services Agreement, if GC AI were ever legally compelled to disclose Confidential Information, GC AI must (unless prohibited by law) promptly notify the affected customer so that the customer can seek a protective order, provide reasonable assistance in obtaining that order, and disclose only the minimum portion legally required. If disclosure ultimately cannot be avoided, GC AI will use commercially reasonable efforts to obtain assurances that the disclosed information remains confidential.
3. Can paralegals or others on the team use GC AI?
The Services Agreement permits registered users within a Customer's organization to access GC AI. Paralegals and other legal team members working at the direction of counsel can be assigned seats and use the platform. The Customer is responsible for ensuring the authorized use of GC AI by its users. (Section 2.1)
4. What happens if procurement or other teams want to use GC AI?
The Services Agreement permits only registered users within a Customer's organization to access GC AI. Customers should consider how non-legal usage of an AI platform interacts with privilege. GC AI is designed to support the "at the direction of counsel" framework that courts and bar associations look for in the privilege analysis. Usage by non-legal personnel acting at the direction of, and reporting to, counsel may fit within that framework. Usage by non-legal teams operating independently of counsel (e.g., procurement using GC AI for its own contract review without attorney oversight or use of attorney-provided playbooks) may not carry the same privilege protections. The Customer is responsible for ensuring the authorized use of GC AI by its users. (Section 2.1)
5. Who owns the data entered into GC AI?
The Customer does. Under Section 4.2 of the Services Agreement, Customers retain all right, title, and interest (including all intellectual property rights) in their User Data and Content. GC AI receives only a limited license to use that data for the purpose of providing the Services. GC AI will not use Customer Content to train AI models and will not disclose User Data for any unrelated commercial purpose without written consent. (Sections 4.1, 4.2, 4.9)
6. If an employee leaves our company, who receives their GC AI queries?
When a user is deactivated or deleted from a Customer’s Active Directory, they’re automatically removed from the Customer’s organization and their seat is deactivated. The user’s data and content are preserved, so access can be restored if they’re re-added later. Customers can reach out to GC AI Support (support@gc.ai) when an employee leaves the company if there is a need to access data from a deactivated user.
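For technically minded readers, the sketch below shows in schematic form how a directory-sync deactivation flow like the one described above generally works: the seat is switched off while the user's data is preserved for possible reactivation. The event shape, function, and field names are hypothetical assumptions for illustration and do not describe GC AI's internal implementation.

```python
# Hypothetical sketch of a directory-sync driven seat lifecycle.
# Names and data shapes are illustrative, not GC AI's actual code.
from dataclasses import dataclass

@dataclass
class Seat:
    user_email: str
    active: bool = True  # controls login access; data retention is separate

def handle_directory_event(event: dict, seats: dict[str, Seat]) -> None:
    """Apply an Active Directory sync event to the organization's seats."""
    email = event["user_email"]
    if email not in seats:
        return
    if event["type"] == "user_deactivated":
        seats[email].active = False   # the seat is turned off...
        # ...but the user's queries and content are intentionally preserved,
        # so access can be restored if the user is re-added later.
    elif event["type"] == "user_reactivated":
        seats[email].active = True    # prior data becomes accessible again

# Example: an employee leaves the company and is removed from the directory.
seats = {"alex@example.com": Seat("alex@example.com")}
handle_directory_event(
    {"type": "user_deactivated", "user_email": "alex@example.com"}, seats
)
assert seats["alex@example.com"].active is False
```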
Learn more at the GC AI Security FAQs and Trust Center.