In recent months, I have noticed a significant uptick in clients reaching out to me after first turning to artificial intelligence tools like ChatGPT, Claude, or Gemini to research their legal issues, strategize about their cases, or simply gather their thoughts before emailing me. I understand the appeal: these tools are accessible, conversational, and often produce impressively detailed responses that can feel remarkably like receiving advice from a knowledgeable professional.
But I must be direct with you: using AI to develop your legal strategy can seriously harm your case in ways you may not anticipate. A groundbreaking federal court ruling from earlier this month underscores exactly why clients should be extremely cautious before typing sensitive legal information into consumer AI platforms.
The Heppner Decision: A Wake-Up Call
On February 10, 2026, Judge Jed S. Rakoff of the U.S. District Court for the Southern District of New York issued what appears to be the nation's first ruling squarely addressing the privilege status of AI-generated legal documents. The case, United States v. Heppner, involved a criminal defendant, Bradley Heppner, a Dallas financial services executive charged with securities and wire fraud. After receiving a grand jury subpoena and retaining defense counsel at Quinn Emanuel, Heppner independently used Anthropic's AI tool, Claude, to research legal questions, synthesize information from his attorneys, and outline a potential defense strategy. He generated 31 documents containing his prompts and the AI's responses, which he later shared with his lawyers.
When the FBI arrested Heppner and executed a search warrant at his home, agents seized electronic devices containing these AI-generated documents. The government moved to compel their production, and Judge Rakoff agreed: the documents were neither protected by attorney-client privilege nor shielded by the work product doctrine.
Why AI Conversations Are Not Privileged
Judge Rakoff's written opinion rested on fundamental privilege principles:
- An AI tool is not your lawyer—it has no law license, owes you no duty of loyalty, and cannot form an attorney-client relationship. Discussing your legal matters with an AI is legally no different from talking through your case with a friend.
- There is no reasonable expectation of confidentiality. Anthropic's privacy policy, like OpenAI's, expressly states that user prompts may be used to train the AI model and that the resulting model may be disclosed to third parties. Paying for a subscription does not change this—only enterprise-tier agreements offer contractual confidentiality protections.
- Sending AI documents to your lawyer does not retroactively make them privileged. Pre-existing, non-privileged materials do not become privileged simply because they are later shared with counsel.
Work Product and Privilege Waiver Concerns
The work-product doctrine likewise failed to protect Heppner's AI documents because he created them on his own initiative, not at his lawyers' direction. Perhaps most troubling, Judge Rakoff agreed that sharing privileged communications with a third-party AI platform may constitute a waiver of privilege over the original attorney-client communications themselves. In other words, by inputting what your lawyer told you into ChatGPT or Claude, you may be destroying the privilege that originally protected those communications.
Why This Matters for Civil Litigation and Employment Cases
While Heppner arose in a criminal prosecution, its reasoning applies equally to civil litigation, employment disputes, workplace investigations, and regulatory inquiries. Any documents you generated using AI to analyze your situation or formulate arguments may be discoverable by the opposing party. This is not a theoretical concern; it is now the law in the Southern District of New York, and other courts are likely to follow suit.
A Potential Path Forward: Enterprise Tools and Attorney Direction
It is possible that courts will treat enterprise AI tools, which do not train on user inputs and maintain the confidentiality of those inputs, differently from consumer-grade tools, and that using an enterprise tool with otherwise privileged materials will not result in a loss of privilege. For that reason, using enterprise versions of AI tools, while not a guarantee of privilege, should bolster privilege claims.
To bolster work-product claims, clients and other non-lawyers who use AI tools at counsel's direction to assist with a legal case should state clearly in their prompts that they are acting at counsel's direction. Courts have found that the work product doctrine can protect AI-generated content when the prompts and the tools used satisfy the criteria for the protection. This is precisely what Heppner failed to do: he acted on his own initiative rather than at his attorneys' direction.
My Advice
Do not use consumer AI tools to strategize about your case without first consulting your attorney. The conversational interface creates a “dangerous illusion of privacy”: in reality, every prompt is a potential disclosure, and every output is a potentially discoverable document. If you have already used AI to discuss your legal matters, tell your attorney immediately so we can assess the risk.
The Bottom Line
Judge Rakoff's decision delivers a clear message: the attorney-client privilege protects communications with your lawyer, not conversations with your AI.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.