When an artificial intelligence (AI) system generates a brand name, logo, or slogan, who owns the rights, and can those rights be registered and enforced? As AI agents increasingly influence or even make purchasing decisions on behalf of customers, does intellectual property law still function as intended in an AI-driven marketplace? At INTA 2026, a panel of leading practitioners and scholars will tackle these questions head-on, drawing on landmark litigation, cutting-edge scholarship, and fresh data from a 75-jurisdiction global survey.
Debating DABUS
The debate around DABUS is a case in point. DABUS, short for Device for the Autonomous Bootstrapping of Unified Sentience, is an AI system known for being at the centre of a series of international test-case patent applications in which it was named as the non-human inventor of the claimed inventions.
The DABUS cases have prompted courts worldwide to ask whether AI can be an inventor or author, questions that go to the heart of patent and copyright law. For intellectual property practitioners, the implications extend further: if AI generates a brand name, logo, or slogan, can those assets be registered and enforced? When AI tools trained on existing marks produce something confusingly similar, who is liable? And does a mark conceived by an algorithm have the same distinctiveness or protectability as one created by a human mind?
Not all AI involvement is equal: there is a spectrum, from AI generating a mark from scratch to AI suggesting options that a human selects and refines. Practitioners need to help clients understand where their creative process falls on that spectrum, and how to structure workflows to stay on the protectable side.
Regulatory vigilance
Courts aren't the only ones asking hard questions. Regulators are now bringing the same scepticism to advertising, treating "AI-powered" claims not as puffery but as statements that need substantiation. And here is where AI washing differs from greenwashing: regulators aren't just policing false claims; they are requiring affirmative disclosure.
Laws like California's AB 2013 mandate that generative AI developers publicly disclose training data details, including whether copyrighted material or personal information was used. The EU AI Act goes further, requiring AI-generated content to be badged accordingly in certain contexts. Saying nothing is increasingly not an option.
The bottom line: AI claims should be treated much like health claims. Verify them before you make them, and check whether disclosure is required even if you weren't planning to say anything at all.
The challenge of dark patterns
AI can now personalise manipulative design for each individual user and can even generate deceptive interfaces autonomously, producing so-called "emergent dark patterns" that no human designed. A 2024 joint FTC review found that 76% of surveyed sites used at least one dark pattern, and regulators from the EU to the U.S. are specifically targeting AI-driven manipulation.
Your client's brand still takes the hit, so anyone using AI to optimise user experience needs to audit what that AI is doing. And here's a governance question many companies haven't answered: who internally is responsible for catching dark patterns that no human designed in the first place?
Adding agentic AI into the mix
Our IP frameworks were built for humans, and now that assumption is being tested across the board. When an AI agent chooses products based on data and specs rather than brand perception, consumer confusion doctrines start to wobble. When AI systems generate or remix content, fair use and originality standards face new pressure.
Practitioners need to ask how their clients' IP portfolios perform when the decision-maker or the creator is a machine. Just as brands learned to optimise for search engines, they may soon need to optimise for AI agents: structured data, machine-readable specs, and value propositions that resonate with algorithms, not just people.
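To make the idea of "machine-readable specs" concrete, here is a minimal sketch of schema.org-style product markup, the kind of structured data an AI shopping agent can parse directly. All names, values, and the SKU are hypothetical examples, not real products:

```python
import json

# Hypothetical schema.org "Product" record: a machine-readable spec sheet
# that an AI purchasing agent could weigh instead of brand imagery.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "ExampleCo TrailRunner 3",  # hypothetical product name
    "brand": {"@type": "Brand", "name": "ExampleCo"},  # hypothetical brand
    "sku": "TR3-2026",
    "offers": {
        "@type": "Offer",
        "price": "129.00",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock",
    },
}

# Serialise as JSON-LD, the format typically embedded in a product page
# inside a <script type="application/ld+json"> tag.
json_ld = json.dumps(product, indent=2)
print(json_ld)
```

Brands that expose data like this give algorithmic buyers something to rank on; those that rely solely on human-facing brand cues may simply be invisible to an AI agent.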
INTA London 2026
Join us from 10:00 to 11:00 on Sunday 3 May for Truth in the Digital Age: Global Survey Results on AI, Dark Patterns, and Brand Integrity. Moderated by Gowling WLG Partner Jayde Wood, the panel features Ryan Abbott, lead counsel in the DABUS litigation and author of The Reasonable Robot; Enrico Bonadio, Professor of Law at City St George's, University of London; and Tiffany Shimada, a Partner at Dorsey & Whitney.
Read the original article on GowlingWLG.com
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.