ARTICLE
15 July 2025

EU Consults On New GMP Rules For AI In Pharma Manufacturing

Arnold & Porter

Arnold & Porter is a firm of more than 1,000 lawyers, providing sophisticated litigation and transactional capabilities, renowned regulatory experience and market-leading multidisciplinary practices in the life sciences and financial services industries. Our global reach, experience and deep knowledge allow us to work across geographic, cultural, technological and ideological borders.

On 7 July 2025, the European Commission launched a public consultation that could mark the start of a turning point for how Artificial Intelligence (AI) is used in pharmaceutical manufacturing. The consultation proposes significant updates to the EU Good Manufacturing Practice (GMP) guidelines—specifically Chapter 4 on Documentation, Annex 11 on Computerised Systems, and, for the first time, a new Annex 22 dedicated to AI.

The proposals reflect regulators' growing recognition that AI is rapidly becoming part of the pharmaceutical manufacturing landscape. From process optimisation to quality control, AI is expected to bring efficiency and insights that traditional systems can't always deliver. Yet the European Medicines Agency ("EMA") and the European Commission are aware that these same technologies also potentially bring new risks (which may, to an extent, be unforeseeable)—especially when it comes to patient safety, product quality, and data integrity.

Under the draft Annex 22, the European Commission draws boundaries around where and how AI can be used in critical GMP applications. Notably, only static, deterministic AI models—those that produce consistent outputs for the same inputs—would be permitted in processes directly impacting product quality or patient safety. In contrast, dynamic AI models that continuously learn from new data during use are explicitly ruled out for critical GMP activities. Likewise, models with probabilistic outputs, whose predictions might vary even when given identical inputs, are off the table for such applications. Generative AI and large language models (LLMs) are also excluded from critical GMP functions under the draft rules.

That said, the Commission does leave the door open for these more novel AI tools to be used in non-critical GMP contexts, provided a "human-in-the-loop" approach is in place. Even then, the draft guidance stresses that qualified personnel must verify the suitability of any AI output before it can influence manufacturing decisions.

Beyond defining which types of AI are acceptable, the proposals impose significant obligations on how AI systems are validated and monitored. Annex 22 sets out expectations for thorough documentation of an AI model's intended use, covering not only the task it is meant to perform but also a deep dive into the characteristics and variability of the data it will process. Companies will be required to establish clear performance criteria for AI systems and demonstrate that these systems can meet or exceed the reliability of the processes they replace. Crucially, the test data used to validate AI systems must be kept entirely separate from the data used to train the models, to guard against bias and ensure true predictive power.

Even after deployment, companies will be expected to keep a close watch on how their AI tools perform over time. The proposals emphasise monitoring for any drift in input data or shifts in system performance that could compromise product quality or patient safety. In practical terms, this means manufacturers may need to plan for periodic re-validation of AI tools and have mechanisms in place for rapid intervention if performance drops.

The revised Chapter 4 guidance ties all of this into broader data governance requirements. It underscores that any data created, processed, or influenced by AI models must meet the same standards of integrity, accuracy and traceability as traditional GMP records. Companies remain fully responsible for the integrity of AI-driven data, regardless of whether AI systems are hosted internally or provided by external vendors. This echoes the existing principles in Annex 11, which are also being updated to reflect today's more complex technological environment.

From a compliance perspective, one of the clear messages of the consultation is that outsourcing AI capabilities does not shift regulatory responsibility away from the manufacturer. Whether AI tools come from cloud providers, software vendors, or external consultants, manufacturers will remain accountable for ensuring these systems meet GMP standards. Supplier qualification, contractual safeguards, and ongoing oversight will be more important than ever.

Next Steps for Manufacturers

Stakeholders have until 7 October 2025 to provide feedback to the European Commission.

In the meantime, companies considering AI tools in manufacturing or quality processes should:

  • Identify any current or planned AI tools used in GMP processes and assess whether they fall within the scope of the proposed Annex 22.
  • Review existing validation and data governance procedures to ensure they can accommodate AI systems and meet new documentation requirements.
  • Evaluate relationships with vendors providing AI solutions, focusing on contractual terms, validation responsibilities and audit rights.
  • Prepare for enhanced monitoring obligations around AI model performance and input data "drift."

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
