The EU Regulatory Web
The Digital Services Act (DSA) and the Artificial Intelligence Act (AI Act) are key components of the EU's constantly evolving digital regulatory landscape. Once mere policy proposals, both regulations are now binding legislation, redefining compliance for companies operating in, or serving, the EU market. Designed to work in concert with the General Data Protection Regulation (GDPR), particularly in cases involving personal data and AI systems, the three acts together form an interconnected regulatory framework. Their convergence marks a major shift in EU digital law, with the EU moving towards a unified framework for digital services, data processing, and artificial intelligence. Navigating this landscape demands expertise across all three regulatory areas and their intersections.
These intersections are explicitly acknowledged in the AI Act's recitals: Recital 118, which calls for alignment between AI governance and platform regulation; Recital 136, which links AI-generated disinformation to democratic risks and reinforces the DSA's content moderation mandate; and Recital 10, which reaffirms the GDPR's primacy in personal data handling. Their interplay is also illustrated by a recent case in Germany, in which regulators treated DeepSeek's alleged GDPR violations, consisting of unlawful personal data transfers to China, as "illegal content" under Article 16 of the DSA.
This example demonstrates the need to transition from fragmented compliance with stand-alone acts to holistic governance strategies. Obligations under one regulatory framework may cascade into others, creating overlapping responsibilities and risks. Successfully navigating the regulatory environment requires precision, coordination, and strategic foresight.
Compliance Timeline and Milestones
The GDPR and DSA have been largely applicable since May 2018 and February 2024, respectively. Meanwhile, the AI Act is being phased in, with key provisions taking effect in 2025 and 2026. The enforcement timeline below requires immediate and strategic action from companies to ensure early and long-term compliance and, crucially, to avoid noncompliance investigations, fines, and the risk of follow-on damages.
Date | Regulation | Event/Milestone | Details/Impact |
---|---|---|---|
July 2023 | GDPR | Publication of proposal for a regulation establishing additional procedural rules relating to the enforcement of GDPR | Aims to complement the GDPR by specifying the procedural rules for the cross-border enforcement of data protection rules by supervisory authorities |
November 2024 | DSA | Adoption of the Implementing Regulation on transparency reporting under the DSA | Standardises the format, content, and reporting periods for transparency reports |
2 February 2025 | AI Act | Ban on AI systems posing unacceptable risks begins to apply | Targets applications that include social scoring, manipulative practices, predictive policing based solely on profiling, and certain biometric and emotion detection systems |
11 March 2025 | AI Act | Publication of the third draft of the General-Purpose AI (GPAI) Code of Practice | Forms the basis for GPAI governance |
22 May 2025 | AI Act | Public consultation on GPAI guidelines closes | GPAI guidelines define compliance requirements, thresholds for systemic risks, and obligations for providers and downstream modifiers |
4 June 2025 | AI/Cloud | Public consultation on the proposed EU Cloud and AI Development Act closes | Stakeholder input could shape future legal frameworks for cloud and AI |
15 June 2025 | DSA | Public consultation on guidelines for minors' protection under the DSA closes | May influence platform obligations and youth safety standards |
June 2025 | GDPR/DSA | Berlin's data protection authority asks Apple and Google to remove AI company DeepSeek from their German app stores over allegedly unlawful data transfers to China | The request could set a precedent for countering an alleged GDPR violation by labelling it as "illegal content" under the DSA |
1 July 2025 | DSA | Transparency reporting templates become mandatory | Platforms must comply with the new format and content |
2 July 2025 | DSA | The European Commission adopts a delegated act outlining rules granting access to data for qualified researchers under the DSA | Complements the DSA rules that oblige very large online platforms and search engines to grant researchers access to publicly available data on their platforms |
2 August 2025 | AI Act | GPAI obligations become applicable | GPAI providers become subject to transparency and related obligations, with additional requirements for GPAI models posing systemic risks |
2 August 2026 | AI Act | Majority of AI Act provisions become applicable | Includes key obligations for high-risk AI systems |
August 2027 (est.) | AI Act | Extended deadline for some high-risk AI requirements | Certain high-risk categories have longer compliance timelines |
DSA, AI Act, and GDPR: Convergence Points
- Fair and Innovative Digital Ecosystem: Both the DSA and AI Act seek to enhance innovation and competitiveness in the EU digital landscape while ensuring fairness across the internal market. The DSA promotes a level playing field for digital service providers by supporting the growth of smaller platforms and startups. Similarly, the AI Act fosters responsible innovation by establishing clear rules for the development and deployment of trustworthy AI, helping businesses — particularly emerging players — navigate regulatory requirements while maintaining high standards of safety, transparency, and accountability. The GDPR complements these aims by protecting individuals' personal data and fostering trust in digital services. Both the DSA and AI Act defer to the GDPR for the safeguarding of the fundamental right to the protection of personal data.
- Content Moderation and AI — Balancing Speed and Fairness: Online platforms increasingly rely on AI to detect and remove illegal or harmful content. Under the DSA, operators must act quickly, provide clear reasons for takedowns, offer complaint mechanisms, and assess moderation impacts on free expression. For its part, the AI Act requires safeguards against bias, discrimination, and lack of transparency, ensuring AI systems are ethical, transparent, and accountable. And given that content moderation often involves the processing of personal data, the principles set out in the GDPR's Article 5 (e.g., data minimisation, purpose limitation, and accuracy) and Article 6 (on lawfulness of processing) also apply. A sketch of a moderation record built to serve all three regimes at once follows this list.
- Risk Management and Systemic Risks: The DSA directly addresses systemic risks, a critical consideration when AI is involved. Platforms must ensure their software design — including any embedded AI — along with their systems and services, does not introduce systemic risks. Articles 33(6) and 34 of the DSA outline this obligation. Similarly, the AI Act is designed to minimise risks stemming from AI systems; some AI systems are categorically prohibited due to their unacceptable risks, while high-risk AI systems face stringent regulation. GDPR Article 35 also mandates that organisations must assess risks to individuals' rights when processing is likely to result in high risk, such as through AI systems, to ensure that privacy risks are managed alongside other systemic risks.
- Transparency and Algorithmic Accountability: The DSA focuses on user-facing clarity; platforms must explain how content is ranked, identify the main parameters used in recommender systems, and disclose the main parameters used to target advertising. Platforms must disclose when AI is used to personalise content or advertisements, explain how these systems work, and allow users to control their data and preferences. The AI Act focuses on the system itself; deployers must label AI-generated content such as deepfakes, while providers must ensure that outputs are interpretable and supported by clear usage instructions. Its transparency regime also extends to human oversight requirements for high-risk AI systems and explanation rights for AI-assisted decisions, ensuring users know about, and can manage, AI-driven interactions. The GDPR gives users rights such as access, rectification, erasure, restriction, and objection, and it regulates solely automated decision-making under Article 22, with Article 22(3) requiring the right to obtain meaningful human intervention.
- Fundamental Rights: All three pieces of legislation purport to ground their frameworks in the safeguarding of fundamental rights. The DSA requires proportionate enforcement and risk assessments that consider impacts on rights. The AI Act classifies systems that affect rights as high-risk and requires deployers — especially in sensitive contexts such as employment or public services — to conduct Fundamental Rights Impact Assessments. Furthermore, the GDPR enshrines rights related to privacy and data protection, such as, inter alia, the right to data access, erasure, and protection from automated decisions.
- Prohibited Practices: The DSA bans so-called "dark patterns" (i.e., interface designs that mislead or manipulate users). The AI Act prohibits systems that use manipulative techniques or exploit vulnerable users in ways likely to cause harm. Platforms must therefore avoid using AI to create urgency cues, exploit user data to personalise pressure tactics, or obscure opt-out choices; such practices can violate both acts and lead to enforcement action. Another example is the use of AI-driven profiling to target users with highly personalised political content that exploits their psychological traits or emotional vulnerabilities, which would likely breach both the DSA and the AI Act. All three laws also offer safeguards concerning profiling: the GDPR provides data subjects with fundamental rights regarding automated decision-making based on profiling, requires explicit consent for processing special category data, and emphasises fairness and transparency, prohibiting processing that is deceptive or manipulative; the DSA restricts, or in certain circumstances prohibits, advertising based on profiling and requires non-profiling options for recommender systems; and the AI Act prohibits certain profiling-based AI systems.
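To make the convergence concrete, here is a minimal sketch, in Python, of a single moderation record designed so that one decision can feed the DSA statement of reasons, the AI Act's traceability expectations, and the GDPR's transparency and Article 22 analysis at once. The field names and the StatementOfReasons class are illustrative assumptions, not the official schema of the DSA Transparency Database or any regulator's template.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class StatementOfReasons:
    """One moderation decision, captured once and reused three ways."""
    content_id: str
    decision: str                    # e.g., "removal", "demotion", "label"
    facts_and_circumstances: str     # DSA Art. 17: plain-language grounds
    legal_or_tos_ground: str         # law breached or ToS clause relied on
    automated_detection: bool        # DSA Art. 17: was detection automated?
    automated_decision: bool         # if True, also triggers GDPR Art. 22 analysis
    ai_system_id: str | None = None  # AI Act hook: traces the decision back to
                                     # the model's technical documentation
    gdpr_lawful_basis: str = "legitimate interests (Art. 6(1)(f))"
    human_review_performed: bool = False
    redress_options: list[str] = field(default_factory=lambda: [
        "internal complaint (DSA Art. 20)",
        "out-of-court dispute settlement (DSA Art. 21)",
        "judicial redress",
    ])
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def user_facing_notice(self) -> str:
        """Concise, plain-language explanation (DSA Art. 17; GDPR Arts. 12-14)."""
        how = "automated means" if self.automated_decision else "human review"
        return (
            f"Action taken on {self.content_id}: {self.decision}. "
            f"Why: {self.facts_and_circumstances} "
            f"Ground: {self.legal_or_tos_ground}. Decided by {how}. "
            f"You may challenge this via: {', '.join(self.redress_options)}."
        )


# One record feeds the user notice, the transparency report, and the
# internal audit trail, instead of three documents that can drift apart.
sor = StatementOfReasons(
    content_id="post-4812",
    decision="removal",
    facts_and_circumstances="Classified as a prohibited scam offer.",
    legal_or_tos_ground="Terms of Service s. 4.2 (fraudulent content)",
    automated_detection=True,
    automated_decision=True,
    ai_system_id="moderation-model-v3",
)
print(sor.user_facing_notice())
```

Capturing the decision once and deriving each audience's view from it keeps the user notice, the transparency report, and the audit trail consistent by construction.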
Novel Legal and Compliance Challenges
- Collision With Fundamental Rights: The DSA and AI Act place significant obligations on platforms to moderate harmful and illegal content. Compliance may create tension with fundamental rights, especially the freedom of expression and the right to conduct business. Platforms face pressure to use AI tools to meet moderation requirements efficiently, yet these systems often lack the nuance to distinguish between unlawful content and controversial, but legal, speech. The result is a risk of over-removal and potential censorship, particularly of minority or dissenting voices, driven by fear of severe fines under the DSA. Such a "remove first, explain later" approach may chill speech and limit diversity of opinion online. The GDPR adds to the complexity, as content moderation often involves processing personal data, including sensitive data tied to political opinions or beliefs. Efforts to moderate content must therefore balance the DSA's obligations with the GDPR's requirements for lawful processing and data minimisation, as well as with Article 85 of the GDPR, which obliges Member States to reconcile data protection with freedom of expression and information.
- User Explanations and Technical Documentation: Platforms subject to the DSA, AI Act, and GDPR face a key challenge: reconciling the requirements of the DSA and GDPR to provide clear, user-friendly explanations with the AI Act's demand for complex technical documentation. Under the DSA, platforms must offer users "statements of reasons" for content moderation actions and explain how algorithmic systems shape content recommendations, including opt-out options. Under the GDPR, transparency information about processing, including in automated decision-making contexts, must be provided to data subjects in a concise, intelligible, and accessible form, "using clear and plain language." However, advanced AI systems often operate opaquely, making it difficult to translate their decision-making into understandable explanations that satisfy the GDPR's intelligibility standard. In parallel, the AI Act requires high-risk AI providers to maintain detailed technical documentation, audit trails, and justifications of model behaviour for compliance assessments, while even limited-risk systems must ensure transparency (e.g., AI-generated content disclosures). This dual transparency obligation creates tension between legal compliance and practical communication, underscoring the need for cross-functional coordination to meet both regulatory and user expectations. Failure to align explanations across the DSA, AI Act, and GDPR risks inconsistent disclosures and legal uncertainty.
- Overlapping Complaints and Complex Liability Questions: Content moderation complaints can expose deeper systemic issues under the DSA and AI Act. For instance, if a user challenges the removal of lawful content, such as political speech, the platform may face scrutiny under the DSA for failing to provide a proper explanation or redress mechanism. However, if the error stems from a flawed AI system, whether due to poor training, lack of robustness, or discriminatory outcomes, this may also breach the AI Act's requirements for high-risk systems, such as ensuring accuracy, nondiscrimination, and auditability. Platforms must adopt a proactive, system-based compliance approach, ensuring that the design and deployment of AI tools meet AI Act standards to avoid cascading breaches under the DSA. Complaints may simultaneously implicate GDPR rights. For example, a user could assert their right to rectification or erasure (GDPR articles 16 and 17) if moderation decisions are based on inaccurate personal data. Automated moderation decisions may also engage GDPR Article 22 rights against solely automated decision-making with significant effects. Platforms may thus face parallel complaints under all three acts; the triage sketch following this list illustrates how a single complaint can fan out across them.
- Liability Allocation and Internal Governance: Platforms developing and deploying their own AI moderation tools may be classified as both "providers" and "deployers" under the AI Act, each carrying distinct compliance obligations. Platforms must align legal, technical, and content teams to ensure AI systems meet both DSA and AI Act requirements. As platforms face increased accountability, a transparent, well-documented governance model across the AI life cycle is critical — not only for compliance but to mitigate legal risk and uphold users' rights. The GDPR imposes parallel accountability obligations under articles 5(2) and 24, requiring controllers to demonstrate compliance and maintain records of processing (Article 30). Internal governance must integrate data protection by design and default (Article 25), ensuring AI systems adhere to GDPR principles. This creates an additional layer of documentation and governance obligations that must align with DSA and AI Act requirements.
- Ensuring Regulatory Coordination and Avoiding Double Punishment: There is already some acknowledgement that coordinated enforcement will be needed across these laws; the AI Act designates data protection authorities as market surveillance authorities (MSAs) for certain high-risk AI systems (and the European Data Protection Supervisor is the designated MSA for EU institutions), while the DSA requires coordination with data protection authorities when platforms process personal data. However, without a unified "one-stop shop" compliance mechanism similar to that found in the GDPR, platforms may suffer from over- or under-compliance, higher costs, and increased exposure to penalties for the same underlying conduct — contrary to the ne bis in idem principle. At the same time, the lack of clear coordination between the enforcers may erode legal certainty, create enforcement gaps, and lead to regulatory arbitrage.
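As a minimal sketch of the overlapping-complaints point above, the triage function below maps one user complaint to the parallel workstreams it may open. The Complaint fields and the rule set are illustrative assumptions chosen for exposition, not a complete or authoritative statement of any obligation.

```python
from dataclasses import dataclass


@dataclass
class Complaint:
    user_challenges_takedown: bool       # disputes a moderation decision
    decision_was_automated: bool         # AI made or materially drove it
    personal_data_allegedly_wrong: bool  # complaint cites inaccurate data
    ai_system_is_high_risk: bool         # platform's own AI Act classification


def triage(c: Complaint) -> list[str]:
    """Map one complaint to every regime it may implicate."""
    duties: list[str] = []
    if c.user_challenges_takedown:
        duties.append("DSA Art. 20: handle through the internal "
                      "complaint system with a reasoned outcome")
    if c.decision_was_automated:
        duties.append("GDPR Art. 22: assess the right to human intervention "
                      "for solely automated decisions with significant effects")
    if c.personal_data_allegedly_wrong:
        duties.append("GDPR Arts. 16/17: assess rectification or erasure "
                      "of the underlying personal data")
    if c.decision_was_automated and c.ai_system_is_high_risk:
        duties.append("AI Act: review the system's accuracy and robustness "
                      "and record the incident for the technical file")
    return duties


# A challenged automated takedown that rests on allegedly inaccurate data
# opens DSA, GDPR, and AI Act workstreams in parallel.
for duty in triage(Complaint(True, True, True, True)):
    print("-", duty)
```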
Integrating Risk Management: Principles for a Unified Compliance Strategy
To successfully navigate the integrated compliance landscape, platforms will need:
- Early design of a compliance strategy, built in from initial development through deployment.
- Robust, detailed documentation to demonstrate compliance. That technical documentation should satisfy requirements across all three pieces of legislation, particularly for AI systems that process personal data and operate on digital platforms.
- Specialised software for managing compliance, conducting risk assessments, and maintaining documentation.
- Comprehensive risk assessment procedures that address GDPR Data Protection Impact Assessments, DSA systemic risk assessments, and AI Act Fundamental Rights Impact Assessments simultaneously (a sketch of such a combined assessment follows this list).
- Continuous monitoring of new guidelines, interpretations, and amendments related to the DSA, AI Act, and GDPR to ensure ongoing compliance.
- Engagement with external legal, technical, and compliance experts to overcome challenges.
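The following is a minimal sketch of the combined risk assessment idea above: one documented assessment per feature, tagged against the questions each framework asks, with a gap report across all three regimes. The question sets and the UnifiedAssessment class are illustrative assumptions, not exhaustive legal checklists.

```python
from dataclasses import dataclass, field

# Illustrative prompts only; real assessments follow the full legal tests.
FRAMEWORK_QUESTIONS = {
    "GDPR DPIA (Art. 35)": [
        "necessity and proportionality of processing",
        "risks to data subjects' rights and mitigations",
    ],
    "DSA systemic risk (Art. 34)": [
        "impact on civic discourse and illegal-content dissemination",
        "effects of recommender and moderation design",
    ],
    "AI Act FRIA (Art. 27)": [
        "categories of affected persons and fundamental-rights harms",
        "human oversight and complaint arrangements",
    ],
}


@dataclass
class UnifiedAssessment:
    feature: str
    answers: dict[str, str] = field(default_factory=dict)

    def record(self, question: str, answer: str) -> None:
        self.answers[question] = answer

    def gaps(self) -> dict[str, list[str]]:
        """Unanswered questions per framework: the cross-regulation gap report."""
        return {
            framework: [q for q in qs if q not in self.answers]
            for framework, qs in FRAMEWORK_QUESTIONS.items()
        }


ua = UnifiedAssessment(feature="AI-ranked news feed")
ua.record("necessity and proportionality of processing",
          "Ranking limited to on-platform signals; no off-platform tracking.")
ua.record("effects of recommender and moderation design",
          "Non-profiling feed option offered per DSA Art. 38.")
for framework, missing in ua.gaps().items():
    print(framework, "->", missing or "complete")
```

Running the three assessments against one shared record avoids duplicated analysis and, more importantly, surfaces where an answer given for one regime leaves a question open under another.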
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.