If your company is involved in litigation, or regularly faces the possibility of it, a recent federal court ruling should change the way you think about AI tools.
A New York federal court just issued the first ruling to tackle head-on whether conversations with a public AI chatbot can be protected by attorney-client privilege or the work product doctrine. The short answer: they can't.
Bottom line: If you, your employees, or anyone at your company pastes legal advice, investigation materials, case strategy, or other sensitive information into one of these tools, you could be handing that information directly to the other side during a lawsuit. And there may be nothing your lawyers can do to get it back.
The Waiver You Can't Take Back
Here's what happened. A target of a federal investigation used a public AI platform to create strategy-focused "reports" about the facts and law of his case. Federal agents later seized electronic devices containing those AI exchanges during a search. The defendant claimed privilege and work-product protection, arguing that because he eventually shared the AI outputs with his lawyers (and had fed in information he'd gotten from his lawyers), the materials should be shielded.
The court said no, for three straightforward reasons that every business should understand:
- The AI isn't your lawyer. There was no attorney-client relationship between the defendant and the AI platform. Privilege protects confidential communications with your attorney—not conversations with software. Chatbots don't qualify, no matter how sophisticated the legal analysis they produce may appear.
- There was no real expectation of confidentiality. The AI provider's terms of service allowed the company to collect user inputs and outputs, use them to train its models, and even disclose them to third parties, including regulators. Under those conditions, nobody can reasonably claim they expected the conversation to stay private.
- The chats weren't about getting legal advice from counsel. The defendant initiated these conversations on his own. The AI tool itself disclaimed giving legal advice. That's a far cry from the kind of attorney-directed communication privilege is designed to protect.
The court also shut down the work product argument. Work product protection is meant to shield a lawyer's thinking and strategy. These documents were created by the client, on his own, using a public tool, not prepared by or at the direction of counsel.
The court went on to make clear that even information that starts out privileged loses its protection once it is pasted into a public chatbot.
The Waiver Reaches Further Than You Think
You don't need to face a criminal investigation for this ruling to matter to your business. While the court's findings were narrowly tied to the facts of a criminal case, the legal principles it applied (privilege requires an attorney-client relationship, confidentiality must be maintained, and sharing with third parties waives protection) are the same principles that govern every commercial lawsuit, regulatory enforcement action, and internal investigation your company will ever face.
Your ability to protect sensitive information during discovery and shield your litigation strategy from opposing counsel depends entirely on keeping privileged and confidential information within the boundaries the law requires. If your company is a party to a lawsuit, is responding to a government investigation, or reasonably anticipates either, the implications of this ruling are immediate and practical.
Attorney-Client Privilege Doesn't Just Happen
For attorney-client privilege to apply, there must be an actual attorney-client relationship; the communication must be made for the purpose of obtaining legal advice; and the parties must maintain a reasonable expectation of confidentiality.
Typing a question into a public chatbot or generative AI tool checks none of those boxes, no matter how legal the subject matter may feel.
Under the court's framework, every AI chat about a legal issue that happens outside your relationship with your attorney is a potential exhibit waiting to be produced—and opposing counsel will be looking for exactly these kinds of materials.
Think about what your employees may be doing with chatbots right now: asking how to respond to a demand letter your company received, evaluating whether a terminated employee might sue, running the facts of a workplace injury through a chatbot to predict how a court might rule, or even pasting the key terms of a proposed settlement into a chatbot to see whether the deal is favorable.
None of these conversations involves your attorney, none carries a reasonable expectation of confidentiality, and under the court's reasoning, every single one of them could end up as an exhibit attached to a motion, introduced at a deposition, or produced in response to a discovery request.
Attorney-Client Privilege is Not Permanent
Privilege protection is not a permanent label that follows information wherever it goes. The moment privileged content is shared with a public AI tool, that act of sharing constitutes a waiver of privilege, making the information fully discoverable by adversaries, regulators, and opposing parties.
The court's waiver analysis leads to an uncomfortable conclusion for any business in litigation: any person at your company who copies privileged material into a public AI tool, whether to summarize it, brainstorm arguments, or reorganize facts, may be stripping that material of its protection in real time, potentially undermining your entire litigation position.
Consider how easily this can happen at your company:
- An employee pastes a litigation hold memorandum into a chatbot to "clean up the language."
- An executive uploads a draft settlement agreement to have the AI summarize the key terms before a mediation.
- A manager copies your outside counsel's candid case-assessment email into a chatbot to prepare talking points for the board.
- A human resources director runs the details of a discrimination complaint through a chatbot to draft an internal response.
In each instance, the person may believe the interaction is harmless, but under this ruling, those exchanges could be fair game in discovery, introduced at depositions, or compelled by court order, and your company would bear the consequences.
Free or Paid: The Attorney-Client Privilege Problem Is the Same
In this case, the court zeroed in on the AI provider's specific privacy policy, which allowed it to collect user inputs and outputs and use that data to train its models and disclose it to third parties. The court's reasoning isn't limited to free or open-access tools. Following its logic, most AI platforms could present the same problem as long as their terms of service reserve the right to review, train on, or disclose user data.
That means paying for a premium subscription or even a corporate license may not automatically fix the confidentiality issue.
The Growing AI Discovery Risk
Going forward, you should expect opposing counsel and regulators to ask pointed questions about AI usage in depositions, custodian interviews, and subpoena negotiations.
Imagine your CEO being asked during a deposition, "Did you or anyone at your company use any AI tools to analyze documents, prepare for this litigation, or discuss the subject matter of this lawsuit?" Or receiving a subpoena that specifically demands "all communications with AI-based tools, including prompts, inputs, and outputs, related to [topic]."
These are no longer hypothetical scenarios; they are the natural next step after this ruling, and your company needs to be prepared with an answer.
For your company, this means the scope of what the other side can demand in discovery just expanded significantly. Your litigation hold obligations now need to account for AI-generated content. Your document preservation protocols should encompass chatbot histories, AI prompts, and outputs stored on employee devices, cloud accounts, and third-party AI platforms. And if your employees have been using these tools without oversight, you may already have a preservation problem you don't yet know about.
The time to address these risks is before your next case heats up, not when a discovery request lands on your desk.
Three Steps Your Business Should Take Now to Protect Your Litigation Position
1. Set Clear Internal Rules and Explain Why They Matter.
Work with your in-house or outside counsel to create or update a straightforward company-wide rule: no one uses public, consumer-grade AI tools for anything related to legal advice, attorney communications, pending or anticipated disputes, investigations, audits, or trade secrets.
For example: "Employees may not input, paste, upload, or otherwise transmit any legal advice, attorney communications, investigation materials, draft pleadings, audit findings, or trade secrets into any AI tool that is not expressly approved by the Legal Department." A clear, concrete rule is far more effective than a vague directive to "use caution."
For companies regularly involved in litigation, this rule should be incorporated into your standard litigation-readiness protocols and reinforced whenever a new litigation hold is issued.
Every litigation hold notice should expressly address AI tools and make clear that employees must preserve any AI-generated content related to the dispute—and must stop using public AI tools to analyze, discuss, or prepare materials related to the matter.
2. Build Safe AI Channels and Keep Your Lawyers in the Driver's Seat.
If your business uses AI, deploy an enterprise AI solution that contractually and technically prevents model training on your data, blocks provider access and disclosure, and keeps all interactions within your controlled environment.
Have counsel review and negotiate your AI vendor agreements to ensure they include robust confidentiality, data segregation, and audit rights—and make sure those agreements will withstand scrutiny if opposing counsel challenges them in a discovery dispute.
But don't stop at the technology. The court hinted that the outcome might have been different if a lawyer had directed the defendant's use of AI. That means attorney direction isn't just a best practice; it may be the key to preserving privilege over AI-assisted work product.
Require that any AI-assisted work involving legal content at your company happen only under your counsel's direction, within a documented workflow designed to maintain the privilege. For companies in active litigation, this point cannot be overstated: if case strategy, witness preparation materials, expert analyses, or damage calculations are being developed with AI assistance, your outside counsel must be directing and documenting that process from the outset.
Without that documented attorney involvement, the work product doctrine may offer no protection at all.
3. Train Your People, Especially Before and During Litigation.
The risk this case exposed isn't just about what AI produces; it's about what your employees put into AI platforms. Every prompt, every uploaded document, every pasted paragraph is a potential disclosure to a third party, and in litigation, that disclosure can become the other side's evidence.
Build a "pause before you paste" culture at your company: before anyone touches a chatbot, they should ask whether what they're about to type or upload is privileged, confidential, or related to any pending or anticipated legal matter. If the answer is yes, or even maybe, they should stop and consult counsel first.
Make AI governance a part of your regular compliance training so that these rules are reinforced, not just read once during onboarding.
Run periodic tabletop exercises—short, scenario-based sessions where teams walk through realistic situations drawn from this ruling's facts, such as "an employee pastes a draft settlement term sheet into ChatGPT" or "a manager asks a chatbot whether the company has liability for a customer's injury."
When litigation is pending or reasonably anticipated, specifically remind key custodians and business teams that AI-generated content is subject to preservation obligations and discovery, and that careless use of these tools can create the very evidence that opposing counsel will use against you at trial, in a motion for summary judgment, or at the settlement table.
Our Employee Used a Public AI Chat for a Litigation Matter. Now What?
This ruling didn't necessarily break new legal ground; it just applied longstanding privilege principles to a new technology and reached the conclusion most lawyers would have predicted. But that's exactly what makes it so important. The court confirmed that public AI tools are third parties, that sharing information with them can waive privilege just as easily as forwarding a confidential email to a stranger, and that no amount of after-the-fact attorney involvement can undo the damage.
For any company that has ever litigated a case where a single document made the difference, that conclusion should be sobering.
Until more courts weigh in, the playbook for your business is clear: set enforceable internal rules that explain the consequences, build safe AI channels with real confidentiality protections and your lawyers directing the process, instill a "pause before you paste" culture, and train your people so that no one at your company ends up like the defendant in this case, sitting on a pile of AI chat transcripts that just became the other side's best evidence.
The stakes for businesses in litigation could not be higher. A single careless prompt could waive the privilege over the document your entire defense depends on, shift the balance in settlement negotiations, or hand opposing counsel a roadmap to your legal strategy.
Companies that act now can keep reaping AI's benefits without compromising their position in court.
Those that wait are gambling that none of their employees will ever type the wrong thing into the wrong tool at the wrong time, and this ruling shows exactly how that bet plays out.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.