ARTICLE
24 February 2026

How Will Courts Address Potential Liability Against AI Companies?

Barnes & Thornburg LLP


Highlights

  • With the proliferation of artificial intelligence tools, there are competing views of how, or even if, liability standards should apply to these technologies.
  • Lawsuits and proposed federal legislation are seeking to set guardrails on AI technologies by holding developers liable for certain outputs.
  • Tracking how these legal developments progress will be informative for the future of AI development. 

Courts and regulators have been grappling with how, and whether, to apply standards of liability to artificial intelligence tools. Two examples have emerged in recent months. The first is a lawsuit alleging harm from deepfake images, St. Clair v. X.AI Holdings Corp, No. 1:26-cv-00386 (S.D.N.Y.). The second is a bill proposed in the Senate that seeks to apply product liability standards to AI tools. How the court and the Senate approach these developments will be instructive for potential lawsuits involving AI.

The Lawsuit 

In the lawsuit, the plaintiff alleges that an AI chatbot, responding to user prompts, altered photographs to depict her in sexually explicit and otherwise demeaning images. The lawsuit does not name or identify the users who supplied these prompts. Rather, it only names the AI tool's creator.

The plaintiff then alleges causes of action that are typical in a traditional product liability complaint:

  • Strict liability design defect
  • Strict liability manufacturing defect
  • Strict liability failure to warn
  • State statutory deceptive business practices
  • Negligence
  • Unjust enrichment 
  • Public nuisance 

However, the complaint also alleges causes of action that are not typically seen in standard product liability cases: intentional infliction of emotional distress and violation of statutory privacy rights.

It will be important to monitor how the court approaches the claims asserted in this lawsuit, as its rulings may prove instructive to others in this emerging field.

Interplay with AI LEAD Act

The lawsuit raises issues that legislators have been considering, as evidenced by the September introduction of a Senate bill, the Aligning Incentives for Leadership, Excellence, and Advancement in Development Act (AI LEAD Act).

The AI LEAD Act contains an expansive definition of an “artificial intelligence system.” It also seeks to impose traditional product liability standards on AI developers — such as design defect, failure to warn, and breach of express warranty — much like the claims in St. Clair. The Act further imposes liability on “deployers” of AI who misuse the tools and allegedly harm others.

Lawsuits under the AI LEAD Act could be brought by individuals, a class of individuals, the U.S. Attorney General, or any state attorney general. The AI LEAD Act also creates a federal cause of action, so any such suit may be brought in federal court.

Innovation vs. Guardrails: The Next Phase of AI Governance 

Liability for alleged harms caused by AI is an emerging area that is not clearly defined and is the subject of competing interests. Some stakeholders want AI development to progress unimpeded by guardrails that could hinder innovation. Others want to impose legal and regulatory frameworks designed to ensure the safe and ethical development of these tools.

Which side, if any, ends up winning the debate has the potential to greatly affect the future of AI. It will be important to stay abreast of these latest developments.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

