ARTICLE
25 February 2026

Analyzing Timely Issues Concerning Generative AI And The Law

A.Y. Strauss


With the intellectual depth of a large firm and the personalized touch of a boutique, A.Y. Strauss lawyers offer practical and effective solutions to handle a broad variety of matters for emerging businesses, high-profile, more established companies and high net worth individuals. A.Y. Strauss attorneys provide clients with legal counsel for commercial real estate transactions and litigation, construction contracting, and bankruptcy and corporate restructuring matters.

Our institutional experience and deep industry knowledge are what set us apart.


No privilege is more venerated or fiercely protected than the attorney-client privilege. But in today's world, the use of generative artificial intelligence threatens the sanctity of that privilege. Attorneys have largely been urged to monitor their use of AI platforms so as not to expose confidential client information, and many jurisdictions have imposed sanctions on attorneys for improper use of AI to prepare legal papers containing fictional cases and/or AI-hallucinated holdings of real cases.

On the other end of the spectrum, clients are increasingly and unwittingly waiving this bedrock principle through the use of generative artificial intelligence. Attorneys must be careful to educate their clients on the dangers of feeding confidential information into generative AI platforms as it relates to legal matters.

The Use of Gen AI By Attorneys

Many state and federal jurisdictions have had the opportunity to weigh in on attorneys' use of artificial intelligence to prepare legal writing. Notably, the New York state appellate courts had their first occasion to rule on such a case on January 8, 2026, when the New York Appellate Division, Third Department, issued a decision ordering sanctions against an attorney for submitting briefs that contained AI hallucinations. The hallucinations included fictional citations (i.e., cases that were entirely fabricated) and real cases with fictional holdings. Ultimately, the Appellate Division entered sanctions of $5,000 against the attorney for submitting AI hallucinations in his briefs, plus an additional $5,000 in sanctions ($2,500 against the attorney, $2,500 against the party) for pursuit of a frivolous appeal.

The underlying case, Deutsche Bank National Trust Co. v. LeTennier, was a foreclosure action that had commenced in March 2018 after the defendant defaulted on a note secured by a mortgage on real property. The history of the case involved a series of repeated motions by the defendant, each seeking similar relief and leading to the defendant being branded a "vexatious litigant." The appeal concerned the denial of three of these motions.

The Court noted that the appeal was substantively unremarkable, but became "unconventional" because the defendant's opening brief cited six fictional cases. When the plaintiff moved for sanctions, the defendant opposed sanctions with more fake cases and fictional holdings of existing cases. The defendant's five filings in the appeal included at least 23 fabricated cases, as well as misrepresentations of fact or law from actual existing cases.

In essence, the Appellate Division held that it is permissible for attorneys to use AI in preparing legal papers, but they must be aware of potential pitfalls including the tendency for AI to hallucinate. The Court emphasized the need for oversight by an attorney and urged attorneys not to blindly trust AI, stating that they should review any AI-prepared legal documents in the same way that they are required to verify work from a paralegal, intern or other attorney.

Attorneys in all jurisdictions should be aware of both the benefits of using AI to prepare legal documents and the drawbacks of carelessly relying on generative AI tools without verification. Given this new ruling, New York attorneys should likewise be aware that misuse of generative AI can be sanctionable conduct.

Category Errors Beyond Hallucinations

In other cases, the issue with relying on AI for legal document preparation lies not in hallucinated cases, but in misrepresented holdings of real ones.

In November 2025, Thomson Reuters demonstrated its Westlaw Deep Research AI program by asking it a Ninth Circuit evidence question: "Can a laboratory director's opinion be admitted as lay opinion under FRE 701?" Citing United States v. Holmes, Westlaw answered "No," asserting that lab directors are generally expert witnesses under FRE 702.¹

However, that was not the court's holding in United States v. Holmes. In the actual case, the court did not announce a status-based rule on who would be classified as an expert witness and who would not. Some testimony was admitted as lay opinion while other testimony was treated as expert testimony, depending on the specific knowledge each witness applied to the facts of that case alone. The error lies in Westlaw misinterpreting a case-specific finding and presenting it as a universal rule.

While AI may be utilized as a tool to perform research that can be helpful to building a case, it's also important for attorneys not to trust the information AI models provide without checking the sources and performing their own research. AI models are trained to find patterns in text, and that pattern recognition can lead to inaccurate interpretations of legal authorities.

The Use of Generative AI By Clients

Within the legal context, generative AI can be alluring to clients seeking to decode complex legal jargon and understand high-stakes legal documents in a matter of minutes. Clients are increasingly leveraging generative AI tools in myriad ways, from uploading drafts of legal documents for edits to asking these platforms to interpret confidential email communications from their attorneys. From a client's perspective, this activity may seem innocuous, but the reality may be quite different.

Attorney-Client Privilege At Risk of Being Waived

The attorney-client privilege is a legal doctrine that protects confidential communications between attorneys and their clients relating to the client's seeking of legal counsel. This privilege, however, does not grant clients an unfettered right to withhold information and is construed narrowly. After all, the privilege merely protects disclosures deemed "necessary to obtain informed legal advice . . . ."² Voluntary disclosures of confidential communications to third parties generally constitute a waiver of the attorney-client privilege. Limited exceptions apply to communications "made through necessary intermediaries and agents."³ Such necessary intermediaries and agents have included psychiatrists, handwriting experts, and engineering firms, among others. The attorney-client privilege is generally not waived under those circumstances because, there, "disclosure to a third party is necessary for the client to obtain informed legal advice . . . ."⁴

While generative AI platforms may not seem like a third party on the surface, the reality is that they may be considered so by the courts. A number of these platforms are operated by outside commercial companies whose privacy and confidentiality policies do not uphold the parameters of the attorney-client privilege. In fact, many of these AI platforms openly reserve the right to disclose personal data.⁵ Clients, therefore, cannot reasonably expect that these platforms will keep these communications confidential.

U.S. District Court Rules That AI Use Waives Attorney-Client Privilege

On February 10, 2026, Judge Jed Rakoff of the U.S. District Court for the Southern District of New York issued the first major federal ruling holding that a client's use of consumer AI tools to prepare legal documents waived the attorney-client privilege, in United States v. Heppner.

The defendant, Bradley Heppner, was arrested in November 2025 on securities and wire fraud charges. Heppner used Anthropic's Claude chatbot to prepare defense strategy documents, which he shared with counsel. The government then seized these AI-assisted documents and moved to compel their production, as Anthropic's consumer privacy policy notably permits the company to use inputs for model training and to disclose data to governmental authorities. The Court ruled that by disclosing these confidential documents to an AI chatbot, the defendant had waived the attorney-client privilege through disclosure to a third party, "AI, which had an express provision that what was submitted was not confidential."

Do Generative AI Platforms Give Clients the Advice They Need?

While generative AI is trained on data and can contain a wealth of knowledge, gen AI platforms cannot serve as a replacement for agents needed to advise clients or otherwise facilitate a lawyer's ability to do so. These platforms do not possess the necessary expertise, training, or ethical constraints that inform or bind traditional agents. While these AI platforms can provide general information that clients find useful, that information must be scrutinized for accuracy, which is a time-consuming and difficult task for a client to undertake.

Ultimately, a client who divulges confidential information to one of these AI platforms risks waiving the attorney-client privilege. Waivers of this privilege can be catastrophic and affect the trajectory or outcome of a particular case. Before trading valuable protections for convenience, clients should proceed with caution when using generative AI platforms and should consult their attorneys before employing these platforms for legal advice.

Footnotes

1. See Westlaw Deep Research and the cost of category errors: https://citation.al/posts/westlaw-deep-research-cost-of-category-errors/

2. Westinghouse Elec. Corp. v. Republic of Philippines, 951 F.2d 1414, 1424 (3d Cir. 1991).

3. Symetra Life Ins. Co. v. JJK 2016 Ins. Tr., CV1812350MASZNQ, 2019 WL 4931231, at *2 (D.N.J. Oct. 7, 2019).

4. Westinghouse Elec. Corp., 951 F.2d at 1424.

5. See https://openai.com/policies/row-privacy-policy/ (accessed February 4, 2026 at 12:55 p.m.) (providing that OpenAI OpCo, LLC may disclose personal data to vendors, service providers, affiliates, government authorities, business account administrators, amongst other third parties for a range of reasons).

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
