4 March 2026

AI And… Privilege. Are My Chats With AI Privileged?

Gowling WLG

Gowling WLG is an international law firm built on the belief that the best way to serve clients is to be in tune with their world, aligned with their opportunity and ambitious for their success. Our 1,400+ legal professionals and support teams apply in-depth sector expertise to understand and support our clients’ businesses.

Can I seek legal advice from AI? Will my conversations with it be privileged such that I can avoid disclosure or discovery in litigation? In this article we consider the potential risks to privilege where AI is used to give legal advice.

What you need to know about AI chats and legal privilege

  • There are a number of hurdles to clear before advice is privileged. The starting point is that a qualified lawyer must be in the loop to seek or give the advice or, in the case of litigation privilege, the information must be sought for the purpose of that litigation.
  • However good they are, chatbots are not lawyers and cannot confer legal professional privilege on the 'advice' they provide. For privilege even to be possible, the person using the chatbot as a research tool must be a relevantly qualified lawyer, or the purpose of seeking the information from the chatbot must be litigation.
  • And then the advice needs to be kept confidential ...
  • Publicly available GenAI platforms generally do not provide the confidentiality necessary to benefit from the protection of legal professional privilege.
  • Enterprise GenAI platforms may resolve this confidentiality concern, but their use by non-lawyers still poses risks to legal advice privilege because there is no "lawyer in the loop".
  • Businesses should warn non-lawyer personnel of the potential risks of seeking first review of legal matters using these platforms – as illustrated in a recent US case.

Rapid technological advances in recent years have seen generative AI (GenAI) products become integrated into our daily lives, both personal and professional. Many now turn to these products not only as advanced search engines, but to marshal their thoughts and provoke new ones, to play devil's advocate and help sharpen arguments, and as companions, life coaches or even quasi-therapists. But care needs to be taken before also turning to AI for legal advice.

Legal professional privilege in England & Wales

Legal professional privilege (LPP) is a common law principle that allows a client to consult a lawyer in confidence, without fear that their communication will be disclosed to others. It has been described as a "fundamental human right", and "a fundamental condition on which the administration of justice as a whole rests"1. There are two limbs of LPP:

  • Legal advice privilege – applies to confidential communications between client and lawyer for the dominant purpose of seeking or giving legal advice.
  • Litigation privilege – applies to confidential communications between client and lawyer (or between either of them and a third party) for the dominant purpose of litigation that is current or in reasonable contemplation.

Why GenAI risks waiving privilege, or risks privilege never arising

Taking those brief definitions, there are two principal areas where use of AI may risk waiving privilege, or prevent privilege from arising at all:

Confidentiality – confidentiality is necessary but not sufficient for privilege to attach: a communication which is confidential may be privileged (provided it also meets the other requirements outlined above), but a communication which is not confidential is never privileged. Where a user chats with an enterprise-grade AI platform, confidentiality will usually not be an issue, as such platforms are typically procured on terms that ringfence user data and preserve the confidentiality necessary for privilege to exist (though this is of course subject to the particular terms and conditions of the platform).

The requirement for confidentiality, however, poses potential challenges to the use of freely available 'consumer grade' GenAI platforms, which are often provided on terms which do not preserve confidentiality, and where the user's prompts and inputs may be used to train the model. While in practice this does not usually mean that use of a public AI model risks a user's confidential documents being provided wholesale to the next user (though sophisticated prompt injection attacks may well be able to surface some information), the terms on which information is provided pose a real risk to confidentiality, and so to both forms of privilege. Any user using a public AI tool therefore runs the risk that their inputs and the AI's outputs will be disclosable in related litigation (it also has implications for other areas where confidentiality is key, such as IP filings and trade secrets). Further, the use of such systems has the potential for cross-border transfer of this data, raising a risk that that information may (also) become accessible through disclosure regimes in other jurisdictions.

Lawyer – when it comes to communications between a (human) non-lawyer and AI, legal advice privilege cannot arise because there is no lawyer involved. In Prudential v Special Commissioner of Income Tax [2013] UKSC 1, the UK Supreme Court held that legal advice privilege extends only to legal advice given by members of the legal profession (and, by statutory extension, to certain other professions – licensed conveyancers, patent attorneys and trademark agents). In that case, the privilege was held not to extend to legal advice given by tax advisers. Given that the common law definition of 'lawyer' for the purposes of privilege has been restricted in this way, it seems highly unlikely (absent parliamentary intervention) that AI would qualify as a lawyer for the purposes of legal advice privilege. This means that even the use of an enterprise-grade confidential platform poses potential risks to privilege if it is used to seek legal advice.

Taking these considerations together, where legal advice is sought from AI, neither the input nor output will be protected by legal advice privilege. This would mean that those communications may need to be disclosed in future legal proceedings. Subject to the other considerations (not least confidentiality) it may however be possible for communications with AI to benefit from litigation privilege (which does not rely on the involvement of a lawyer). Further, lawyers who use AI in the course of their work may be shielded by the "working papers" doctrine which can protect e.g. research notes and summaries.

Has this been tested in court?

An earlier version of this article stated that this position had not yet been tested in the courts of England & Wales. However, since publication, the decision in UK v Secretary of State for the Home Department (AI hallucinations; supervision; Hamid) [2026] UKUT 81 (IAC) has been published. In that case, although primarily concerned with legal representatives citing "hallucinated" authorities, the Upper Tribunal (Immigration and Asylum Chamber) also observed that: "to put client letters and decision letters from the Home Office into an open source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and thus any regulated legal professional or firm that does so would, in addition to needing to bring this to the attention of their regulator, be advised to consult with the Information Commissioner's Office. Closed source AI tools which do not place information in the public domain, such as Microsoft Copilot, are available for tasks such as summarising without these risks." Meanwhile, a federal court in New York recently adopted a similar analysis, finding that the absence of confidentiality in a public AI system, and the absence of a human lawyer in the communication chain, were fatal to a claim of attorney-client privilege.

In United States of America v Bradley Heppner 25 Cr.503 (JSR), the Government moved for a ruling that a number of documents which the defendant generated by inputting queries into a commercial AI platform (Claude), and which he then shared with his attorneys, were not protected by US attorney-client privilege. Among other things, the Government argued that no attorney was involved in the communication between defendant and AI, and that the defendant voluntarily shared his queries with Claude in circumstances which meant they were not confidential. The Government highlighted that Claude's terms of use expressly stated that it collects inputs and outputs, uses these to train the model, and that it may disclose this information to government and third parties. It also noted that the terms of use expressly disclaimed any attorney-client relationship and stated that the tool does not provide legal advice.

In a ruling from the bench (i.e. an extempore decision) on 10 February, the judge concluded the documents were not protected by attorney-client privilege. In a subsequent written memorandum given on 17 February, the judge gave the following reasons for this conclusion:

  • First, the documents were not communications between an attorney and client, the judge finding that privilege requires "a trusting human relationship" – which could not exist between an AI user and an AI platform.
  • Second, the communications were not confidential – not only because Heppner communicated with a third party AI platform, but because the terms of use of that platform made clear that data would be used for training, and could be disclosed to third parties.
  • Third, the judge concluded that Heppner's communication with the platform was not for the purpose of obtaining legal advice – although he later shared Claude's outputs with his lawyers, he was not communicating with Claude in order to obtain legal advice from Claude (not least because Claude's terms of use expressly disclaimed such use). It should be noted that the judge described this last point around the purpose of communication as "a closer call".

Consequently, the communications with Claude were not privileged in the defendant's hands, and were "not somehow alchemically changed into privileged ones" when he provided these pre-existing documents to his attorney.

Finally, the judge considered whether the documents were protected by the work product doctrine which shields an attorney's mental process when preparing a client's case. However, the doctrine afforded no protection here because the documents were not produced at the behest of an attorney, but of Heppner's own volition, and did not reflect his attorneys' defence strategy at the time they were created.

Marshalling thoughts

Returning to the introduction, what does this mean for the increasing number of people who may use AI tools to help organise their thoughts and structure documents? A number of recent LinkedIn posts by legal representatives note, for example, an apparently increasing use of AI by clients to draft instructions to outside counsel. While the posters often lament that AI's tendency towards verbosity means such instructions may require more work to unravel, fewer note the risk to privilege that this use entails.

Companies should ensure that all non-legal staff are aware of the risks of using AI for advice on legal issues. While in-house lawyers using enterprise AI platforms can be more confident that their use of AI will attract the protection of privilege (in the same way as "working papers" are often protected), they should be alive to, and warn of, the risks of non-lawyers in their organisations (including C-suite executives) seeking AI advice on legal matters, whether on an enterprise platform or otherwise. "Legal matters" should be interpreted broadly, covering circumstances such as the use of AI for second opinions, or by engineers seeking freedom-to-operate advice or support with IP filings.

It would be sensible to review internal policies (including IT, confidentiality and trade secret policies) to ensure that they address scenarios where employees might be tempted to use AI for advice on legal matters, and to consider what practical measures can be put in place to support those policies.

More broadly, while the broad and rapid adoption of GenAI technology has created fresh challenges for privilege, much of the underlying analysis applies to the creation of documents by non-lawyers in general – the current climate is therefore an opportune moment for in-house teams to emphasise the importance of involving them at an early stage where legal advice may be required. As the judge in the Heppner case concluded:

"Time will tell whether...generative artificial intelligence will fulfil its promise to revolutionize the way we process information. But AI's novelty does not mean that its use is not subject to longstanding legal principles".

What if I don't have an enterprise AI environment?

For those who do not have the protection of a confidential, organisation-ringfenced AI environment, Gowling WLG has a legal tech team focussed on providing clients with advice on legal solutions. Please speak to your usual Gowling contact if you would like to explore practical measures that can be put in place regarding your use of AI in more detail.

Footnote

1 R v Derby Magistrates Court, ex p. B [1995] UKHL 18

Read the original article on GowlingWLG.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

