ARTICLE
24 February 2026

Europe's AI Rulebook Is Taking Shape, But The Technology Is Not Waiting

Kinstellar

Contributor

Kinstellar acts as trusted legal counsel to leading investors across Emerging Europe and Central Asia. With offices in 11 jurisdictions and over 350 local and international lawyers, we deliver consistent, joined-up legal advice and assistance across diverse regional markets – together with the know-how and experience to champion your interests while minimising exposure to risk.

February 2026 – Artificial intelligence is entering a decisive phase in Europe. Despite the EU's ambitious AI Act framework, key guidance remains pending while enforcement scrutiny intensifies and AI adoption accelerates.

1. EU AI Act: a strict framework struggling to keep pace

While Europe's ambition to regulate AI is unquestioned, recent developments signal that the legal framework is struggling to keep pace with both the rapid evolution of the technology and its own aspirations.

1.1. Missed Guidance and Growing Uncertainty

The European Commission missed a key deadline on 2 February, when it failed to publish a comprehensive list of use cases to help businesses distinguish between high-risk and non-high-risk AI systems.

While this delay is largely driven by the broader Digital Omnibus proposal and ongoing efforts to streamline overlapping digital legislation, its consequences are immediate. Without this guidance, developers, deployers, and national supervisory authorities lack the clarity needed to consistently classify AI systems under the AI Act's risk-based framework.

This uncertainty is particularly problematic, as key compliance obligations and timelines under the current version of the AI Act are rapidly approaching. As a result, companies are forced to move forward with compliance preparations in the absence of clear regulatory direction, increasing both legal uncertainty and operational complexity.

1.2. Pushback on the Digital Omnibus Proposal

The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a Joint Opinion on 10 February delivering an unusually direct critique of the Digital Omnibus Proposal, with implications extending beyond data protection into AI governance:

  • They warn that proposed amendments to the definition of personal data could significantly narrow data‑protection rights, create legal uncertainty, and undermine GDPR protections.
  • They strongly oppose giving the European Commission power to decide, through implementing acts, which pseudonymised data is "no longer personal", arguing that this would weaken core data protection guarantees.
  • While acknowledging the value of streamlining compliance (e.g., through higher thresholds for breach reporting), they emphasise that simplification must not come at the expense of fundamental rights.

The picture is clear: across the EU, regulators, supervisory authorities, and industry stakeholders are reaching the same conclusion. While the AI Act establishes an exceptionally strict compliance framework, the ecosystem required to support compliance is not yet ready.

2. Regulatory scrutiny intensifies around AI-generated harmful content

Governments across Europe, Asia, and other regions have launched coordinated investigations and sanctions targeting a widely used image-generating AI assistant integrated into a major social platform. The tool has been linked to the production of non-consensual and sexualised deepfakes, including imagery involving minors, triggering unprecedented legal and regulatory action.

Several Asian governments, including Indonesia and Malaysia, have imposed temporary bans, citing severe child‑protection and human‑rights concerns linked to the spread of manipulated content.

At the same time, EU regulators have opened formal proceedings to assess whether the platform sufficiently anticipated and mitigated the risk of illegal image generation prior to deployment. In parallel, French prosecutors conducted a search of the platform's Paris office and summoned senior leadership as part of a criminal investigation involving manipulated images of minors.

This case reflects a broader shift in regulatory focus. European legal standards increasingly impose duties before deployment, requiring companies to identify foreseeable risks and embed safeguards from the design stage. As a result, regulatory scrutiny is now focused less on user misconduct and more on failures in system design, risk assessment, and governance.

3. AI is entering its next operational era

While regulators continue to refine the legal framework, AI itself is rapidly evolving beyond its original role as a passive support tool.

A new generation of agentic systems is emerging, capable of executing complex operational tasks such as fraud detection, supply chain optimisation, and autonomous decision-making, often with minimal human intervention. These systems are no longer simply assisting human users; they are increasingly acting as operational actors within business environments.

As a result, enterprises are beginning to integrate AI as a functional component of their workforce. AI systems are used to generate analyses, tailor communications, automate workflows, and enable small teams to operate at significantly larger scale and efficiency.

What does this mean for companies? While the operational and productivity benefits are substantial, this shift fundamentally changes the risk landscape. Organisations must implement robust governance structures, including clear identity and access controls, strict data access limitations, and adaptive security architectures capable of supporting safe and accountable AI deployment.

Conclusion

All these developments confirm that 2026 is shaping up to be a defining year for AI governance. Even in the absence of complete regulatory guidance, companies are rapidly expanding their use of AI across both routine and mission-critical processes.

Companies now have to balance two competing realities: on the one hand, they must closely monitor evolving EU regulatory developments and prepare for shifting timelines and interpretative guidance; on the other hand, they cannot afford to delay implementation of internal governance measures.

Once fully applicable, the AI Act will impose extensive and operationally complex obligations. Companies that begin building internal AI governance frameworks now will be significantly better positioned to ensure compliance, mitigate regulatory risk, and safely leverage AI's full operational potential.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

