16 July 2025

EU Publishes Final Draft Of The General-Purpose AI Code Of Practice

William Fry

William Fry is a leading corporate law firm in Ireland, with over 350 legal and tax professionals and more than 500 staff. The firm's client-focused service combines technical excellence with commercial awareness and a practical, constructive approach to business issues. The firm advises leading domestic and international corporations, financial institutions and government organisations. It regularly acts on complex, multi-jurisdictional transactions and commercial disputes.

The European Commission has settled its Code of Practice for General-Purpose AI Models (Code).

This final text, covering Transparency, Safety and Security, and Copyright, turns what was once positioned as a voluntary framework into something far closer to a baseline for proving compliance under the EU AI Act.

The Commission's Q&A confirms that developers who sign but then fall short of full compliance will still be considered to be acting "in good faith", and the AI Office will support, rather than penalise, them. This grace period runs until 2 August 2026, when the Commission's enforcement powers take effect and fines may be imposed under the AI Act. This potentially sets up a two-tier system: signatories are shielded from regulatory scrutiny for a year, even if non-compliant, while non-signatories have no such protection. Those who do not sign the "voluntary" Code face immediate legal risk, which calls its voluntary nature into question.

At the same time, the final text leaves some important questions unresolved. Crucially, the EU has not yet published the detailed guidelines and templates that will determine how many of these measures work in day-to-day operations. Providers are expected to comply, but in several areas they must do so without a complete rulebook. The new baseline is therefore clearer than before for rights holders, but real-world enforcement will still turn on how those gaps are closed and how companies interpret their duties.

What the Code Is and Why It Matters

A general-purpose AI (GPAI) model is an advanced AI system trained on vast datasets so that it can handle a wide variety of tasks rather than just one specific function. People interact with GPAI models in daily life when they use AI chatbots for customer support, AI tools that draft emails or legal text, automated coding helpers, or online services that generate images and text on request. Practical examples of GPAI models today include OpenAI's GPT-4, Google's Gemini, Anthropic's Claude and Meta's Llama family of models, all of which can be integrated into many different products and services that people use without always seeing the underlying model directly.

The GPAI Code of Practice is the European Union's flagship voluntary tool for showing how providers of GPAI models can comply with their new legal obligations under the EU AI Act. Prepared by independent experts in a multi-stakeholder drafting process, the Code is designed to help the AI industry demonstrate compliance with key duties around transparency, safety and copyright.

The final text was published on 10 July 2025. In the coming weeks, Member States and the European Commission will assess whether the Code is fit for purpose. It will be complemented by Commission guidelines to clarify key concepts for general-purpose AI models. These additional guidelines are expected to be published later this month, but are not yet available, leaving some practical gaps for now.

The Code is split into three main chapters: Transparency, Copyright, and Safety and Security. The Transparency and Copyright Chapters are relevant to all providers of general-purpose AI models, as they set out how companies can comply with the new obligations under Article 53 of the AI Act. The Safety and Security Chapter applies only to a small number of developers of the most advanced AI models – those falling within the "systemic risk" category under Article 55.

Although the Code is formally voluntary, Article 56 of the AI Act clarifies that the AI Office is to encourage and facilitate such codes to ensure the proper application of the Regulation. The European Commission's Q&A explicitly confirms that signing up will be treated as strong proof of compliance. This means that adherence may be the default path for certain providers to show they meet Article 53 and Article 55 requirements until harmonised standards are adopted. Providers who opt not to sign must prove they have put in place adequate alternative measures to meet the same standards, a far from simple task given the scope of the Act.

Copyright: Sharper Obligations and an Important Blind Spot

The Copyright Chapter is perhaps the clearest example of how the Code has shifted from a voluntary promise to a quasi-regulatory framework. Earlier drafts, for example, required only "reasonable efforts" to exclude websites that routinely infringe copyright, but the final text states that providers must now actively exclude such sources.

Machine-readable rights-reservation protocols must be recognised and respected when developers use web crawlers or have them used on their behalf. This means developers can no longer ignore tools such as robots.txt files. This is a significant change for rights holders who have long argued that opt-out signals are regularly bypassed.
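By way of illustration, a crawler honouring robots.txt might perform a check along the following lines before fetching a page. This is a minimal Python sketch using the standard library's urllib.robotparser; the user-agent string is a hypothetical placeholder, and the Code contemplates machine-readable rights-reservation protocols that may go beyond robots.txt:

```python
# Minimal sketch: consult a site's robots.txt before fetching a page.
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "example-ai-crawler"  # hypothetical crawler identifier, not a real product

def may_fetch(url: str) -> bool:
    """Return True only if the site's robots.txt permits this crawler to fetch url."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # download and parse the site's robots.txt
    return rp.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    url = "https://example.com/articles/some-page"
    if may_fetch(url):
        print(f"robots.txt permits crawling {url}")
    else:
        print(f"rights reserved for {url}; skipping")
```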

Providers must also implement technical safeguards to prevent their models from reproducing protected content from their training data. This lifts the bar from risk mitigation to active prevention.
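The Code does not prescribe a particular mechanism. As a purely illustrative sketch, one simple output-side safeguard is an n-gram overlap filter that flags long verbatim matches between model output and an index of known protected text. The eight-token window and tiny corpus below are assumptions for demonstration; production systems would be considerably more sophisticated:

```python
# Illustrative sketch only, not the Code's prescribed method: flag model output
# that shares a long verbatim token window with indexed protected text.

def ngrams(tokens: list[str], n: int) -> set[tuple[str, ...]]:
    """All contiguous n-token windows in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def flags_reproduction(output: str, index: set[tuple[str, ...]], n: int = 8) -> bool:
    """True if the output shares any n-token window with the protected-text index."""
    return bool(ngrams(output.split(), n) & index)

# Build an n-gram index once from known protected works (placeholder text).
protected_corpus = [
    "it was the best of times it was the worst of times it was the age of wisdom",
]
index: set[tuple[str, ...]] = set()
for work in protected_corpus:
    index |= ngrams(work.split(), 8)

candidate = "as the model put it it was the best of times it was the worst of times indeed"
if flags_reproduction(candidate, index):
    print("Potential verbatim reproduction detected; suppress or rewrite before returning.")
```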

A small but meaningful new obligation applies to open-source models. Providers must warn users that using the model for infringing purposes remains prohibited, even if it is freely available.

However, the final draft removes an earlier measure that would have required developers to check the provenance of protected content acquired through third-party datasets. The deletion of this check leaves a practical blind spot: providers can now buy or license datasets without asking detailed questions about how the material was collected (although the Code says that EU copyright law still applies to such datasets). For rights holders, this means there is still a risk that unlawful content can enter training pipelines under the cover of third-party sourcing.

While the AI Act's Article 53 still requires providers to publish a summary of the training data, the missing provenance obligation means there is no detailed legal check on whether that dataset was collected in line with copyright law.

The final recitals confirm that signing the Code does not remove the need to comply with Union copyright law. Rights holders remain free to take action if they believe their content has been misused, even where a provider is a signatory.

Transparency: Clearer Scope, Continuous Duty

The final Transparency Chapter has transformed from general principles in earlier drafts to specific operational standards. The new Objectives section ties the entire obligation directly to the AI Act's goal of supporting trustworthy, lawful AI that respects democratic values and fundamental rights.

The recitals now outline an explicit duty to update information if it becomes obsolete due to market or technical changes. This locks in the idea that documentation must be continually updated, and one-off compliance filings will not be enough.

The longstanding uncertainty around open-source models has been resolved. The final draft mirrors Article 53(2): models released under free and open-source licences are exempt from core measures unless they pose systemic risk. This is welcome news for Europe's open-source community, which feared being unintentionally swept into rules meant for commercial providers.

The Model Documentation Form has been refined in both structure and substance. Providers must now offer more detailed explanations of how their training processes work. The word count for describing training has doubled from 200 to 400 words. Providers must now break out training and inference stages when reporting energy use and computational resources. Provenance requirements for training data have been tightened, too, with clearer distinctions about synthetic data that is not publicly accessible.
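The official template fields are those of the Code's Model Documentation Form, but as a hypothetical sketch, an internal record that mirrors the required training/inference split might look like the following. The field names and figures here are illustrative assumptions, not the Form's actual fields:

```python
# Hypothetical internal record for stage-by-stage compute and energy reporting.
# Field names and values are illustrative assumptions, not the official Form.
from dataclasses import dataclass

@dataclass
class ComputeAndEnergyRecord:
    stage: str            # "training" or "inference"
    compute_flops: float  # total floating-point operations for the stage
    energy_kwh: float     # measured or estimated energy use for the stage

records = [
    ComputeAndEnergyRecord(stage="training", compute_flops=3.1e24, energy_kwh=5.2e6),
    ComputeAndEnergyRecord(stage="inference", compute_flops=1.4e21, energy_kwh=8.0e3),
]
for r in records:
    print(f"{r.stage}: {r.compute_flops:.2e} FLOPs, {r.energy_kwh:,.0f} kWh")
```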

Operational timelines have been sharpened. The record-keeping clock runs from when a model is placed on the market, not when it is withdrawn. This reduces the practical retention burden for models that stay live for long periods. The final text also replaces the vague "timely manner" for responding to downstream information requests with a hard fourteen-day deadline.

These measures align closely with Article 53's disclosure requirements. However, the EU AI Office has yet to publish the detailed template for the required summary. This leaves providers in the awkward position of needing to prepare for compliance before they know exactly what the standard will look like in practice.

Safety and Security: From General Pledges to Defined Duties

The Safety and Security Chapter has evolved from general statements into clear, enforceable expectations. Providers must now adopt measures that meet at least the "state of the art" and update them throughout the model's lifecycle. They must also carry out Contextual Risk Assessments that consider both the model's raw capabilities and how the associated risk may shift when the model is combined with other software or hardware.

A key change addresses proportionality. Small and medium-sized providers will not face the same scale of reporting or documentation as large frontier model developers. This removes a structural disadvantage that could otherwise hamper smaller European research labs or early-stage companies.

The final version makes post-market monitoring more robust. Providers must put processes in place to detect risks that arise once a model is used. Independent evaluators must have proper access and be protected if they identify vulnerabilities.

Security obligations have also been clarified. Providers must now identify realistic threat actors, such as state-sponsored actors, organised crime groups or insiders, and show how their security measures address each category. Where providers deviate from listed best practices, they must demonstrate that alternative controls achieve the same level of protection.

This shift shows how the Commission expects safety and security to become part of the design process rather than an afterthought. However, full technical guidelines and best practice references have not yet been published here. Developers must therefore rely on existing standards and stay alert to further Commission guidance.

What Happens Next

The Commission has been clear in its Q&A that signing the Code will be treated as a strong sign of good-faith compliance. This means that for many general-purpose AI providers, the Code may no longer be truly voluntary in practice. Refusing to sign up leaves companies having to show regulators how they will meet or exceed the same obligations.

Yet the gap between the Code's final wording and the missing implementation details creates practical challenges. Without the official summary template for training data, finalised technical standards or clearer best practice guidelines, developers and rights holders are left to interpret broad obligations where mistakes may have costly consequences.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
