2 March 2026

Federal Court Rules Documents Prepared Using Public AI Tools Not Protected By Attorney-Client Privilege

Barnes & Thornburg LLP

Highlights

  • A recent ruling from the U.S. District Court for the Southern District of New York carries significant implications for businesses and individuals who use publicly available artificial intelligence (AI) tools.
  • In U.S. v. Heppner, Judge Jed S. Rakoff held that documents prepared by a criminal defendant using a public AI tool and subsequently shared with his attorneys were not protected by attorney-client privilege.

The Ruling

The case involves Bradley Heppner, the former CEO of Beneficient, a Texas-based financial services company. Federal prosecutors have charged Heppner with fraud in connection with an alleged scheme involving GWG Holdings, a publicly traded company that had invested heavily in Beneficient. According to the government, Heppner misappropriated approximately $150 million for his personal benefit before GWG filed for bankruptcy, resulting in roughly $1 billion in losses to investors. Heppner has denied guilt and contested the charges, arguing in part that the indictment improperly attributes GWG's bankruptcy to his conduct.

The privilege dispute at issue arose after Heppner used a publicly available AI platform to generate 31 documents pertaining to his defense and then shared those materials with counsel. When the government sought to compel production, the defense asserted two traditional grounds for protection: attorney-client privilege and the work-product doctrine. The court's rejection of both arguments offers important guidance on the limitations of these protections in the context of AI-assisted document preparation.

On the attorney-client privilege question, the court found no basis for protection. Attorney-client privilege traditionally attaches to confidential communications between a client and an attorney made for the purpose of obtaining legal advice. Documents created by a client using a third-party AI tool, even if later shared with counsel, do not satisfy this requirement. The act of transmitting self-generated materials to an attorney does not retroactively cloak those materials with privilege.

The work-product doctrine posed a closer question. The defense argued that the documents incorporated information previously conveyed by counsel and therefore reflected attorney strategy and mental impressions entitled to protection. The court disagreed. Work-product protection generally extends to materials prepared by or at the direction of an attorney in anticipation of litigation. Because the documents at issue were prepared by the defendant himself, not by counsel, they fell outside the doctrine's scope. This aspect of the ruling suggests that clients cannot bootstrap work-product protection onto their own materials simply by incorporating information received from their attorneys.

The most consequential aspect of the ruling, however, concerns the effect of public AI platforms on confidentiality. The court emphasized that the terms of service governing most consumer AI tools explicitly disclaim any confidentiality in user inputs. As a result, users have no reasonable expectation of privacy in the information they provide to these systems. From a privilege standpoint, entering information into a public AI platform constitutes disclosure to a third party without confidentiality protections, which operates as a waiver. This principle has far-reaching implications: even if a communication would qualify for privilege, processing it through a public AI tool may destroy that protection.

Key Implications for Businesses

This ruling underscores an important and often overlooked risk in using AI: When confidential information is entered into a publicly available AI platform, that information may lose its confidential character. The implications extend well beyond the attorney-client privilege context and may affect a wide range of protected information.

Attorney-Client Privilege: Recent headlines have hammered home the risk of hallucinated citations and fake law working their way into legal briefs. Despite the many cautionary tales, attorneys continue to overestimate the accuracy of developing AI technology, often using AI platforms not specifically designed for legal professionals.

The ruling in Heppner highlights another, more insidious risk: placing confidential or privileged information into a public AI platform has the same legal implications for that information as if it were posted on a public social network. Attorneys need to understand these risks and how the data entered into such platforms will be used, and, more importantly, must emphasize to clients that using such platforms could waive the attorney-client privilege.

Trade Secrets and Proprietary Information: Businesses routinely protect competitive advantages through trade secret protections, which require that information be kept confidential. Inputting trade secrets or proprietary business information into public AI tools could jeopardize those protections if courts determine the information was disclosed to a third party without adequate confidentiality safeguards.

Protected Health Information: For healthcare organizations and their business associates, this ruling reinforces the critical importance of keeping protected health information out of public AI platforms. HIPAA compliance depends on maintaining the confidentiality of PHI, and use of consumer-grade AI tools could constitute an impermissible disclosure.

Other Confidential Information: Similar concerns apply to information subject to contractual confidentiality obligations, non-disclosure agreements, or regulatory protections. Organizations should evaluate whether the use of public AI tools is consistent with their confidentiality commitments.

Practical Guidance

Organizations should take this opportunity to review their policies and practices regarding AI tool usage. Employees and personnel should be educated about the risks of entering confidential, privileged, or proprietary information into public AI platforms. Many users do not fully appreciate that information submitted to these tools may be stored, used to train models, or otherwise retained by the platform provider.

Organizations should implement clear policies identifying what types of information may and may not be entered into public AI tools. Where AI tools are needed for legitimate business purposes involving sensitive information, organizations should consider enterprise solutions that offer enhanced confidentiality protections.

Legal and compliance teams should be consulted before using AI tools in connection with pending or anticipated litigation, regulatory matters, or other sensitive legal contexts.

Conclusion

The Heppner ruling serves as an important reminder that the convenience of AI tools must be balanced against the risks they pose to confidentiality and privilege. As courts continue to grapple with these issues, organizations should consider adopting a cautious approach and ensure their personnel understand the potential consequences of using public AI platforms for sensitive matters.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
