On 9 June 2025, the UK High Court opened proceedings in one of the most closely watched legal battles at the intersection of artificial intelligence (AI) and intellectual property (IP). The case, brought by Getty Images against Stability AI, is expected to deliver one of the first substantive rulings on the legality of using copyrighted material in AI training datasets. The outcome of this UK trial could set a critical precedent, shaping how developers build AI models and how rights holders assert control over their works.
Against this backdrop, it is particularly timely to take a closer look at the copyright implications of AI image generator software. But is it all just about copyright and AI? Not quite. In this context, we have also taken a closer look at how AI-generated content interacts with IP such as trademarks, raising new questions about brand identity, confusion, and enforcement in an increasingly automated creative landscape.
What is a trademark?
Everyone comes across trademarks every day without even thinking about it. To be 'technical', a trademark is a legally protected 'sign', owned by a company, other legal entity or an individual. It can take many forms – including words, logos, colours, shapes, or even sounds. A trademark allows the consumer to identify the products and/or services offered as belonging or originating from a particular entity. Trademarks are essential to business as they help protect the identity and reputation of their brands.
What is generative AI?
Although generative AI has taken the world by storm over the last couple of years, traditional AI has been around for much longer. Traditional AI relies on pre-determined rules and algorithms to carry out specific commands and tasks (e.g. Siri). So what exactly is generative AI? Put simply, generative AI is a type of AI that can generate or create content, such as images and videos, in response to a prompt. It creates new content by drawing on the data on which it has been trained, usually gathered by bots that 'scrape' the internet. Generative AI differs from traditional AI in that it can identify new patterns within its training data in order to create new content.
How do image-based generative AI and trademark rights overlap?
Although these are two very different fields, they overlap considerably, raising significant legal challenges in intellectual property law.
The use of trademarks in generative AI can cause problems: when prompted, AI may create images that reproduce protected trademarks without the owner's consent. Issues arise when trademarked signs are incorporated into AI-generated works in a way that portrays a brand differently from how it actually presents itself. For example, images can be generated that make it look as though competing companies have endorsed each other or are collaborating, when in reality no such arrangement exists. This can give rise to confusion and dilution, defined as follows:
- Confusion - when a trademark is used in a way that is likely to cause confusion among consumers about the source or origin of goods or services.
- Dilution - unauthorized use of a mark that weakens its distinctiveness or tarnishes its reputation, regardless of whether it causes consumer confusion.
A couple of example images can be found below:
- Prompt: "starbucks and costa" (DeepAI)
- Prompt: "please create an image of a mini cooper with the audi logo" (Google Gemini)
In the EU, text and data mining (TDM) for commercial AI training is allowed under Article 4 of the DSM (Digital Single Market) Directive, unless the rights holder has explicitly opted out. In the UK, by contrast, TDM is (so far) only permitted for non-commercial research, meaning commercial AI training is not clearly allowed. Moreover, these TDM exceptions do not cover the final AI output, which may therefore still infringe copyright, trademark, or design rights, as discussed below.
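In practice, the Article 4 opt-out must be expressed in a machine-readable form for online content. One common, though not legally standardized, way of doing this is a robots.txt file that blocks known AI-training crawlers. The sketch below is illustrative only: the user-agent tokens shown are real crawler names in use at the time of writing, but each should be verified against the relevant operator's current documentation before relying on it.

```
# robots.txt – illustrative machine-readable opt-out from AI-training crawls
# (user-agent tokens are examples; check each operator's documentation)

User-agent: GPTBot           # OpenAI's training crawler
Disallow: /

User-agent: Google-Extended  # Google's AI-training control token
Disallow: /

User-agent: CCBot            # Common Crawl, widely used as training data
Disallow: /

# All other crawlers may continue to index the site as normal
User-agent: *
Allow: /
```

Whether such a file is sufficient to constitute a valid rights reservation under Article 4 is itself a contested legal question, which is one reason transparency obligations (discussed below) matter.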
When could trademark infringement arise through output generation?
For designers, entrepreneurs, and marketing teams using AI tools to create branding materials, a key legal risk is whether the generated output might infringe someone else's trademark. While the mere inclusion of a trademark in a prompt or the creation of a brand-like image by an AI tool does not automatically amount to infringement, legal issues can arise when the resulting logo or sign is identical or confusingly similar to an existing brand and used commercially, for example, on packaging, websites, or promotional materials. If such use misleads consumers or creates the impression of an association with the original brand, it may be legally problematic, even if the similarity was unintentional.
The fact that a logo is generated by an AI tool therefore does not mean it is automatically safe to use. Inserting well-known brand names into prompts such as "Starbucks-style logo" or "Costa Coffee colour palette" might seem like a convenient shortcut, but it can easily lead to outputs that blend distinctive elements of real brands.
This does not just raise issues of trademark infringement. In jurisdictions such as the UK, the same conduct could also amount to passing off. Passing off does not require a registered trademark: it protects the goodwill attached to unregistered signs, making it an additional risk for AI users and developers.
The core issue is that most AI platforms do not warn users when generated content resembles an existing trademark. So even accidental similarities can lead to legal exposure, particularly in consumer-facing sectors like coffee, fashion, or electronics, where brand identity is critical and the likelihood of confusion is high.
Who is liable for AI-generated content?
Under current EU and UK law, legal responsibility generally lies with human users and commercial operators, not the AI system itself. If someone generates and commercially uses a logo that resembles a registered trademark, for example, on packaging or marketing materials, they may be held liable for trademark infringement, even if they acted without intent or awareness of the original mark. Most AI image tools, such as Midjourney or DALL·E, have terms of use that shift legal responsibility to the user. These platforms typically include disclaimers stating that they are not liable for infringing or harmful outputs generated through their systems. In short: if you prompt it, you own it.
However, this may not shield AI model providers entirely. Both UK and EU case law suggest that liability may extend beyond users to the developers or providers of AI systems, depending on the circumstances. Providers can be held liable as secondary infringers or even as primary infringers, particularly where they facilitate or fail to prevent infringing uses.
EU AI Act
Although no legislation currently prohibits generative AI from being trained on data sets that include trademarked signs, the EU AI Act introduces obligations requiring AI developers to provide more detailed information about the materials used to train their models. Under the Act, major AI providers will need to demonstrate that they comply with copyright law, including by respecting opt-outs. The Act does not explicitly aim to solve the issues around trademarked signs in generated content; rather, it seeks to provide transparency about what is used to train AI. This should result in fewer copyrighted works being used in training, as it will be easier to spot when a developer is infringing copyrighted content.
Conclusion
Many of the above issues are being explored in the ongoing case Getty Images v. Stability AI, as mentioned in the introduction. The case raises fundamental questions about how far AI developers can go in using protected content for training purposes and whether such use violates copyright, database rights, or trademarks (where logos or branded imagery appear in the training data or outputs).
Trademark and passing-off claims have played a central role. Getty alleges that AI-generated images replicate its watermark, misleading users and harming its brand. The passing-off claim argues this misrepresents the source of the content and damages Getty's goodwill. Stability AI denies the claims, stating that any such outputs are non-commercial and user-generated.
The court's decision will be crucial in clarifying liability for AI-generated content and may shape how trademark law applies to generative AI. In the meantime, the case highlights the legal uncertainty facing developers and users, and the growing need for clear rules, particularly in commercial contexts where brand integrity is at stake.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.