10 March 2026

AI And Legal Privilege: Lessons From The Heppner And Warner Rulings In The United States

Davies Ward Phillips & Vineberg



Two recent decisions from U.S. federal courts have produced strikingly different outcomes on whether materials generated using AI tools are protected by privilege or work-product doctrine. Both cases involved individuals or organizations relying on consumer AI platforms, rather than legal counsel, to develop litigation strategies or prepare legal materials. These rulings highlight an emerging and unsettled legal landscape in which organizations cannot assume AI interactions will be treated as confidential or privileged.

Background

Heppner: Use of AI Creates Risk to Privilege

In United States v. Heppner (S.D.N.Y. February 17, 2026), Judge Jed S. Rakoff ruled that documents generated by a criminal defendant using Anthropic’s consumer AI tool, Claude, were not protected by privilege or the work-product doctrine. The defendant, a former CEO facing securities and wire fraud charges, had used the AI platform to organize information and develop defence strategies after receiving grand jury subpoenas. He created 31 AI-generated documents, which were later shared with counsel. When government agents executed a search warrant and seized these documents, the court ordered their disclosure, holding that the “novelty” of AI “does not mean its use is not subject to longstanding legal principles.”

The court noted that attorney-client privilege attaches to communications (1) between a client and their attorney, (2) that are intended to be, and in fact are, kept confidential, and (3) made for the purpose of obtaining or providing legal advice. The court found that the AI documents in question lacked at least two, if not all three, of these elements.

In considering the element of confidentiality, Judge Rakoff reasoned that the AI documents were not confidential, not only because Heppner communicated with a third-party AI platform, but also due to the platform’s written privacy policy. Notably, Anthropic’s policy explicitly stated that the platform collects and uses such data for its own purposes and reserves the right to disclose such data to a host of “third parties,” including “governmental regulatory authorities.”

The Warner Decision: A Contrasting Approach

Around the same time as the Heppner ruling, a Michigan federal court reached a conclusion in potential tension with Heppner on whether AI-generated materials are protected from discovery. In Warner v. Gilbarco, Inc. (E.D. Mich. February 10, 2026), a civil employment dispute, the defendants sought production of all documents concerning the plaintiff’s use of third-party AI tools (including ChatGPT) in her lawsuit. The court denied this request, emphasizing that the plaintiff, who was self-represented, was entitled to protection under the “work-product doctrine,” rendering the materials non-discoverable.

The court in Warner reasoned that using AI tools to prepare legal materials is analogous to traditional work product–protected activities. Critically, the court rejected the argument that employing generative AI amounted to a waiver of work-product protection, stating that “ChatGPT (and other generative AI programs) are tools, not persons, even if they may have administrators somewhere in the background” (our emphasis). The court emphasized that waiver of work-product protection requires disclosure to an adversary or in a manner likely to reach an adversary’s hands, neither of which occurs when using an AI tool. As the court observed, “no cited case orders the production of what Defendants seek here: a litigant’s internal mental impressions reformatted through software.”

Notably, because the court was addressing the work-product doctrine (which would, in Canada, map in part onto litigation privilege) rather than attorney-client privilege, it can be argued that a different standard applies for assessing whether certain forms of disclosure eliminate the protection. Nevertheless, the court’s characterization of generative AI as a tool, and its conclusion that interaction with such tools does not destroy privilege, stand in marked tension with the decision reached in Heppner.

Key Takeaways from Both Decisions

  • AI Tools Are Not Lawyers. Both courts agreed that communications with AI platforms cannot, on their own, establish attorney-client privilege. No lawyer-client relationship exists and AI platforms usually expressly disclaim that they provide legal advice.
  • Consumer AI and Confidentiality: A Divided Landscape. Under Heppner, consumer AI platforms that reserve rights to collect user inputs, train models on submitted data and disclose information to third parties destroy any reasonable expectation of confidentiality. Judge Rakoff likened sharing information with such a platform to discussing legal strategy in a public space. In contrast, the Warner court characterized generative AI platforms as tools rather than persons, holding that disclosure to an AI tool does not constitute a waiver of privilege. Waiver, the court emphasized, requires disclosure to an adversary or someone likely to transmit material to an adversary.
  • Privilege Cannot Be Applied Retroactively. Sharing non-privileged, AI-generated documents with counsel after they are created does not transform them into privileged materials. Confidentiality must exist at the time of creation. This principle was applied in Heppner and remains a foundational rule.
  • Work-Product Protection: Counsel Involvement Matters. In Heppner, the absence of attorney involvement in the creation of AI-generated materials was fatal to the defendant’s work-product claim. The court noted that the outcome might have been different had counsel directed the defendant to use the AI tool. By contrast, in Warner, the court extended work-product protection to the plaintiff’s AI materials, emphasizing that such materials reflect the litigant’s mental impressions and litigation strategy, regardless of whether counsel was involved. The court also criticized the defendants’ request for these materials as a “fishing expedition” into the plaintiff’s internal thought process.
  • AI Interactions May Be Discoverable ESI. Under Heppner, courts may treat AI prompts, outputs and activity logs as ordinary electronically stored information (ESI) subject to discovery. Opposing counsel may now routinely request “AI prompts and outputs” in discovery. However, the Warner decision suggests that such requests may be denied as disproportionate or as attempts to obtain protected mental impressions.
  • Heppner: A Slippery Slope? Litigants seeking to leverage the reasoning of the court in Heppner regarding the confidentiality element of attorney-client privilege may face challenges due to the potential floodgates opened by such reasoning. On its face, the logic underpinning the decision could be extended to virtually any online or SaaS-based digital communication platform routinely used by clients, including Gmail and the cloud-based versions of Outlook and Microsoft Teams. Similar to the privacy policy at issue in Heppner, these platforms’ privacy policies often include terms that allow for the collection of user data for operational purposes and permit disclosure to governmental authorities under certain conditions. If broadly applied, the reasoning in Heppner could lead to a conclusion that the use of such services inherently eliminates confidentiality, effectively undermining the foundation of attorney-client privilege in the digital age. However, such an expansive interpretation is unlikely to gain widespread judicial acceptance, as it would disrupt modern legal practice and create untenable consequences for privilege protections.
  • Governance Remains the Differentiator. Enterprise-grade AI platforms with contractual confidentiality protections, deployed under counsel’s direction, may present a different analysis. The key factors are supervision, purpose and reasonable confidentiality expectations. The court in Heppner explicitly left open the possibility that enterprise tools with confidentiality guarantees could support a stronger privilege argument.

Emerging Approaches: Understanding the Divergence

The Heppner and Warner decisions reveal that courts are not approaching AI-related privilege questions uniformly. Several factors explain the divergence:

  • The Nature of AI Itself. The courts diverged fundamentally on whether AI should be treated as a third party capable of receiving disclosures that destroy confidentiality (Heppner), or simply as a “tool, not a person” akin to a word processor (Warner).
  • Confidentiality Expectations. In Heppner, the court’s emphasis on platform terms of service as undermining confidentiality expectations has been criticized for its potential to extend to virtually all cloud-based services – an issue discussed above.
  • Work Product and Representation Status. Warner extended work-product protection to a self-represented litigant’s AI-generated materials, while Heppner denied such protection where AI was used independently of counsel.
  • Policy Orientation. Warner reflects concern that requiring disclosure of AI-generated materials would “nullify work-product protection in nearly every modern drafting environment,” while Heppner applies traditional privilege principles more strictly to new technology.

Relevance for Canada: Lessons from Two Divergent Approaches

The Heppner and Warner rulings offer important lessons for Canadians navigating the evolving legal landscape of generative AI. While Canadian privilege doctrine differs from U.S. law in certain respects, the core principles of confidentiality and the requirements for privilege are fundamentally similar. The divergence between these two decisions signals that courts are still grappling with how to apply traditional legal frameworks to generative AI – a challenge Canadian courts will inevitably encounter.

Emerging U.S. jurisprudence suggests that Canadian courts and regulators may soon need to address several threshold questions, including:

  • Should AI tools be characterized as third parties or as tools for privilege purposes?
  • Does disclosure to AI platforms constitute waiver of confidentiality?
  • How does work-product protection apply to client-generated AI materials?
  • Should enterprise-grade AI platforms with contractual confidentiality terms be treated differently than consumer platforms?

Practical Guidance for Canadian Organizations

Given the unsettled state of the law, Canadian organizations should adopt a conservative approach to AI use, assuming that AI interactions may be discoverable. To mitigate risks and maximize the likelihood of privilege and confidentiality protections, organizations should implement the following measures:

  • AI Governance: Establish internal AI policies that prohibit employees from using consumer-grade AI for legally sensitive work. This is particularly critical given the prevalence of “Shadow AI” – the unauthorized use of consumer AI tools by employees – which often occurs without management oversight. However, establishing AI policies alone is not enough; education and training are equally important to ensure employees understand the risks and apply the restrictions.
  • Provisioned Access: Provide employees with enterprise-grade tools that have contractual confidentiality guarantees. As suggested in Heppner, such guarantees may strengthen arguments for maintaining privilege and confidentiality.
  • Lawyer-in-the-Loop Requirements: Mandate that any use of AI for legal-adjacent tasks, such as summarizing meeting notes from a strategy session or analyzing a contract, be supervised by the legal department to ensure compliance with privilege standards.
  • E-Discovery Preparedness: Address AI-generated data in ESI agreements. Privilege logs should clearly document whether AI tools were used at counsel’s direction and under circumstances that support a reasonable expectation of confidentiality.

By implementing these measures, organizations can reduce the risks associated with AI use while leveraging its potential to enhance efficiency and productivity in a legally compliant manner. Davies can assist Canadian organizations in drafting AI policies that align with best practices and market realities as the jurisprudence on this topic continues to evolve.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

