1 August 2025

Artificial Intelligence, Real Liability: The Legal Risks Of "AI-Washing"

Cadwalader, Wickersham & Taft LLP

As corporates touting market-leading AI credentials continue to attract significant investor interest and outsized valuations, companies are increasingly facing allegations that they have engaged in "AI washing" – the term used to describe misrepresentations as to the usage, capability, or in some cases existence of AI in their operations, products or services.

Commercial incentives to overstate AI capabilities are compounded by board pressure to "act on AI", despite the fact that most boards lack adequate AI-related expertise. This challenge is exacerbated by widespread confusion as to the definition of AI, which encompasses a broad field of capabilities from machine learning to large language models and multi-modal generative AI.

However, the perceived benefits of over-promising on AI may be fleeting, and such conduct carries significant regulatory and litigation risks, particularly for public companies.

Securities Litigation

Investors in the securities of UK- or US-listed companies who have suffered loss as a result of a misstatement can seek redress through the courts, using statutory mechanisms designed specifically for securities claims. In the US, such actions – commonly referred to as "securities litigation" – are big business, and an increasing volume of claims now allege AI-related misstatements.

The number of AI-related filings more than doubled from 2023 to 2024, and the Securities Class Action Clearinghouse records 47 AI-related securities class actions filed since 2020. Moreover, research shows that securities actions based on AI-washing are significantly more likely than other such actions to survive motions to dismiss.

In some cases, AI-washing serves as a useful "add-on" to complaints alleging a series of misrepresentations. In many others, however, AI-washing is the sole or dominant complaint. A 2023 securities filing against Tesla, Inc. alleged that it "significantly overstated" the efficacy, viability, and safety of its Autopilot AI technology.

Allegations of more brazen conduct include a 2024 securities class action against Innodata and its executives for falsely claiming it used proprietary AI to digitise data when it was actually relying on manual data entry by offshore workers. The claim followed allegations and short-selling by Wolfpack Research, which led to a 30% crash in the price of Innodata's securities.

In the UK, securities actions are in their relative infancy, but there is significant scope for claims based on AI-washing, particularly for any misstatements in listing particulars or prospectuses, where claimants do not need to prove reliance. Early securities actions focused on financial misstatements, with more recent claims alleging anti-bribery and corruption (ABC) and environmental, social and governance (ESG) misrepresentations. Allegations of AI-related misstatements will undoubtedly feature in future claims and are likely to prove attractive to claimants and litigation funders – an industry that has grown significantly in recent years – because of the significance that investors often place on AI-related claims.

Regulatory Enforcement

Securities regulators are closely monitoring public filings for signs of AI-washing. In the US, AI is an examination priority for 2025, and the SEC has repurposed its Crypto Assets and Cyber Unit (now the "Cybersecurity and Emerging Technologies Unit") to reflect a broadened mandate, including AI. There is, however, little in the way of new regulation. Agencies in the US and UK have been clear that existing rules relating to false and misleading statements will be applied as needed to deal with this risk.

Speaking in March 2024, then-SEC Chair Gary Gensler stated that: "Public companies should make sure they have a reasonable basis for the claims they make and yes, the particular risks they face about their AI use, and investors should be told that basis."

Within weeks of that speech, the SEC brought charges against three parties for false and misleading claims about the way that they were using AI, and AI-washing enforcement continues in 2025. In the UK, the FCA has likewise made clear that existing market abuse regulations prohibit AI-washing. However, FCA investigations, if underway, are unlikely to be made public until they reach an advanced stage or formal action is taken.

Governance of AI statements

AI-washing litigation and enforcement risks continue to grow alongside the mass adoption of AI within organisations and the continuing strong commercial incentives to overstate AI capabilities. Companies on both sides of the Atlantic, particularly those positioning themselves as AI-enabled, would be well advised to minimise the risks of AI-washing by considering the following steps, many of which simply require adaptation of existing risk-control mechanisms:

(i) Clarify responsibility for AI governance and AI-related disclosures within the organisation. This starts at Board level. The Board must receive meaningful, technically informed reporting on AI activities and risks – including AI-washing. There should be adequate AI (not just IT) expertise on the Board; at present, few boards have directors with deep AI or data science expertise.

(ii) Consider setting up an AI governance subcommittee of the Board, or an AI working group, to consider AI-related opportunities and risks. The terms of reference of the Audit and Risk and Disclosure Committees should be amended to include oversight of the identification and management of AI-washing risks.

(iii) Map the use of AI across the organisation, and ensure that AI-related risks – including those arising from external disclosures – are fully integrated into the organisation's enterprise risk management (ERM) framework, i.e. the overarching structure for co-ordinating governance, control and assurance processes.

(iv) Ensure that AI and related terms (including non-AI terms that could be conflated with AI) are defined and used consistently. A straightforward but effective step to consider is compilation of a single approved AI Glossary, applicable to both internal and external usage of AI terminology. It is critical that AI is not conflated with other processes, such as rule-based automation or processes relying on human intervention. The glossary is likely to require updating over time as the technology and related terminology evolve.

(v) Maintain a defensible audit trail for AI-related statements to assist with substantiating historic AI claims in any subsequent litigation, and demonstrating compliance with securities rules in any investigation. An AI claims register can be used to map public statements to evidence, including documentation of technical architecture and regular model testing. Records of internal approvals for AI statements should also be maintained. Those vetting AI-related assertions must have the requisite technical expertise to do so.

(vi) Include AI disclosures in the organisation's audit plan, with periodic audit of the compliance process for AI disclosures.

(vii) Clearly disclose AI limitations alongside capability claims, and ensure that risk statements are complete, covering AI-related risks and the securities and other litigation that may result. Both US securities laws and UK listing rules require that material risks be disclosed to investors, and robust risk disclosures can strengthen a company's defence to securities actions that allege AI-washing.

Previously published in AI Journal.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
