
EU Tightens AI Regulation With New Compliance Rules

Digital Services Act enforcement targets major tech firms

By ZenNews Editorial · 8 min read

The European Union has begun enforcing sweeping new compliance obligations for major technology companies operating across its member states, activating key provisions of its artificial intelligence regulatory framework and strengthening enforcement mechanisms under the Digital Services Act. The measures represent the most significant regulatory intervention in AI governance the bloc has yet undertaken, affecting platforms used by hundreds of millions of people.

Regulators in Brussels have confirmed that designated "very large online platforms" — those with more than 45 million monthly active users in the EU — now face binding transparency requirements, algorithmic accountability audits, and substantial financial penalties for non-compliance. According to officials at the European Commission, fines can reach six percent of a company's global annual turnover for the most serious violations, a figure that translates into billions of euros for the largest technology groups.
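
The scale of that exposure is easy to work out. The sketch below applies the six percent ceiling to a hypothetical turnover figure; the function name and the example amount are illustrative, not drawn from any actual enforcement decision.

```python
def max_dsa_fine(global_annual_turnover: float, rate: float = 0.06) -> float:
    """Upper bound of a DSA fine: a percentage of worldwide annual turnover.

    The 6% rate is the ceiling for the most serious violations; the
    turnover figure passed in here is hypothetical.
    """
    return global_annual_turnover * rate

# Hypothetical company with EUR 250 billion in global annual turnover:
exposure = max_dsa_fine(250e9)
print(f"Maximum fine exposure: EUR {exposure / 1e9:.1f} billion")
# prints: Maximum fine exposure: EUR 15.0 billion
```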

Key Data:
- The EU AI Act applies risk-based classifications to AI systems across four tiers: unacceptable risk (banned), high risk (strictly regulated), limited risk (transparency obligations), and minimal risk (largely unregulated).
- Very large online platforms under the Digital Services Act must undergo independent algorithmic audits annually.
- Fines under the DSA can reach 6% of global turnover.
- The European AI Office, established recently, oversees enforcement of general-purpose AI model rules.
- According to Gartner, more than 80% of enterprises globally will need to adapt their AI governance structures to meet EU-equivalent standards by the middle of this decade.

What the New Compliance Framework Requires

The regulatory package builds on two distinct but interconnected instruments: the EU AI Act, which entered into force recently and is being phased in over a tiered implementation schedule, and the Digital Services Act, which has been in force for designated platforms for some time. Together, they create an interlocking set of obligations that touch virtually every aspect of how AI-powered services are designed, deployed, and explained to users.

Risk Classification and Its Consequences

At the heart of the AI Act is a risk-based classification model. AI systems that pose an "unacceptable risk" — such as real-time biometric surveillance of individuals in public spaces or social scoring systems used by governments — are prohibited outright. High-risk systems, including those used in hiring, credit assessment, education, law enforcement, and critical infrastructure, must meet stringent requirements before deployment. These include mandatory conformity assessments, registration in a public EU database, and the appointment of a qualified responsible person within the organisation.

Limited-risk AI systems, such as chatbots or deepfake-generating tools, must clearly disclose their artificial nature to users. Minimal-risk applications, such as spam filters or AI-assisted video editing, face no obligations beyond those that already apply under existing law. The framework, officials said, is designed to allow innovation to continue while ensuring that higher-stakes uses of AI are subject to meaningful human oversight.
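
The four-tier model described above is, in effect, a lookup from a system's classification to its obligations. The sketch below illustrates that structure; the tier names and obligations follow the article, but the lookup helper and the example use cases are hypothetical.

```python
# Illustrative sketch of the AI Act's four-tier, risk-based model.
# Tier names and obligations follow the article's description; the
# helper function and example use cases are hypothetical.
RISK_TIERS = {
    "unacceptable": "prohibited outright (e.g. social scoring, real-time biometric surveillance)",
    "high": "conformity assessment, EU database registration, human oversight",
    "limited": "transparency obligations (disclose artificial nature to users)",
    "minimal": "no obligations beyond existing law",
}

# Hypothetical mapping from example use cases to tiers:
USE_CASE_TIER = {
    "social_scoring": "unacceptable",
    "hiring_screen": "high",
    "customer_chatbot": "limited",
    "spam_filter": "minimal",
}

def obligations_for(use_case: str) -> str:
    """Return the compliance obligations for a known example use case."""
    return RISK_TIERS[USE_CASE_TIER[use_case]]

print(obligations_for("hiring_screen"))
```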

Algorithmic Transparency Under the DSA

Under the Digital Services Act, very large online platforms are required to provide users with at least one recommendation system option that is not based on profiling — meaning a user must be able to access a chronological feed or similar non-personalised content view. Platforms must also publish transparency reports detailing how their algorithmic systems work, what data they use, and what systemic risks those systems may generate. According to MIT Technology Review, regulators have pointed to the amplification of harmful content and the manipulation of public discourse as the primary systemic risks they are seeking to address.
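
The non-profiling requirement can be sketched concretely: alongside whatever profiled ranking a platform runs, it must offer an ordering that uses no user data at all, such as a reverse-chronological feed. The code below is a minimal illustration under assumed names; the `Post` type, the engagement scores, and both feed functions are hypothetical, not any platform's actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Post:
    post_id: str
    published: datetime
    engagement_score: float  # input to the profiled ranking only

def personalised_feed(posts: list[Post]) -> list[Post]:
    """Profiled ranking: order by a (hypothetical) engagement prediction."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Non-profiling option: newest first, no user data involved."""
    return sorted(posts, key=lambda p: p.published, reverse=True)

posts = [
    Post("a", datetime(2024, 5, 1, tzinfo=timezone.utc), 0.9),
    Post("b", datetime(2024, 5, 3, tzinfo=timezone.utc), 0.1),
    Post("c", datetime(2024, 5, 2, tzinfo=timezone.utc), 0.5),
]
print([p.post_id for p in chronological_feed(posts)])  # newest first
```

The point of the requirement is visible in the contrast: the two orderings differ precisely because one depends on per-user predictions and the other only on timestamps.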

Who Is Affected and How

The companies most directly affected by these requirements include the major American technology platforms — among them Alphabet's Google, Meta, Apple, Amazon, Microsoft, and ByteDance's TikTok — all of which have been designated as very large online platforms or very large search engines under the DSA. These companies have been required to submit risk assessment reports to the European Commission and to subject those reports to independent third-party audit.

Enforcement Actions Already Under Way

The Commission has already opened formal proceedings against several platforms under the DSA, with investigations examining recommendation algorithms, advertising transparency, and the handling of illegal content. Officials said investigations are ongoing and that enforcement action — including potential fines — could follow within months. According to Wired, at least one major platform has already made significant structural changes to its European operations in response to regulatory pressure, separating its algorithmic systems into distinct configurations for EU users.

| Company / Platform | DSA Designation | AI Act Risk Category (Primary Use) | Key Compliance Obligation | Maximum Fine Exposure |
|---|---|---|---|---|
| Google (Alphabet) | Very Large Search Engine / Platform | High risk (multiple products) | Algorithmic audit, risk assessment, recommender transparency | 6% of global turnover |
| Meta (Facebook/Instagram) | Very Large Online Platform | High risk (content moderation AI) | Non-profiling feed option, illegal content systems | 6% of global turnover |
| TikTok (ByteDance) | Very Large Online Platform | High risk (recommendation engine) | Minors' safety audit, algorithmic transparency | 6% of global turnover |
| Microsoft (incl. OpenAI products) | Very Large Platform / GPAI provider | High risk / GPAI systemic risk | General-purpose AI model disclosure, red-team testing | 3% of global turnover (GPAI) |
| Apple | Very Large Platform (App Store) | Limited / high risk (by application) | Interoperability requirements, app moderation transparency | 6% of global turnover |
| Amazon | Very Large Online Platform | High risk (logistics AI, Alexa) | Recommender system audit, seller fairness rules | 6% of global turnover |

The Role of the European AI Office

A central feature of the new enforcement architecture is the European AI Office, a body established within the Commission specifically to oversee compliance with the AI Act's provisions relating to general-purpose AI models — sometimes referred to as foundation models or large language models. These are AI systems, such as the large-scale language and image generation models produced by OpenAI, Google DeepMind, Anthropic, and Meta AI, that are trained on vast datasets and can be adapted to a broad range of tasks.

General-Purpose AI and Systemic Risk

Under the AI Act, providers of general-purpose AI models that exceed a defined threshold of computational training power — currently set at 10^25 floating-point operations (FLOPs), a measure of the total compute used to train the model — are classified as posing potential systemic risk. This designation triggers additional obligations, including mandatory adversarial testing (commonly known as "red-teaming"), incident reporting to the AI Office, and cybersecurity safeguards. According to IDC, the number of foundation model providers likely to fall under this classification globally is currently in the low dozens, but is expected to grow significantly as model training scales increase.
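
To get a feel for where the 10^25 FLOP line falls, one can use the rough rule of thumb that training a dense transformer costs about 6 × parameters × training tokens in floating-point operations. That heuristic is a community approximation, not part of the Act (which counts actual training compute), and the model sizes below are hypothetical.

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold cited in the AI Act

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer.

    Uses the common ~6 * N * D heuristic (N = parameters, D = tokens).
    This is an approximation; the Act counts actual training compute.
    """
    return 6.0 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    flops = estimated_training_flops(parameters, training_tokens)
    return flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical models trained on 15 trillion tokens:
print(presumed_systemic_risk(70e9, 15e12))   # ~6.3e24 FLOPs -> False
print(presumed_systemic_risk(400e9, 15e12))  # ~3.6e25 FLOPs -> True
```

By this estimate, a 70-billion-parameter model trained on 15 trillion tokens would fall just under the threshold, while a 400-billion-parameter model on the same data would clear it several times over — consistent with the Act's intent of capturing only the largest frontier-scale training runs.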

The AI Office also has the authority to conduct its own evaluations of general-purpose AI models if it believes a provider may pose systemic risks not previously disclosed. Officials said this power is intended to act as a backstop against regulatory gaps that might arise from providers underreporting their models' capabilities.

Implications for the Broader Technology Industry

For technology companies operating globally, the EU's framework is creating a de facto international compliance standard — a phenomenon analysts and regulators have described as the "Brussels Effect." Because it is often more efficient to apply a single set of rules globally rather than maintaining separate versions of a product for different jurisdictions, many companies are expected to extend their EU compliance postures to users outside Europe as well.

Compliance Costs and Structural Change

The compliance burden is not trivial. According to Gartner, large enterprises are allocating significantly increased budgets to AI governance, legal review, and technical documentation in response to the EU framework. Smaller companies and startups, which may lack dedicated compliance teams, face particular challenges, though the AI Act includes provisions for reduced obligations on small and medium-sized enterprises in certain circumstances.

The requirement to conduct conformity assessments for high-risk AI systems — detailed technical exercises that must demonstrate a system is safe, accurate, robust, and non-discriminatory before it goes to market — is being described by industry groups as one of the most resource-intensive obligations in the package. Some companies have begun restructuring their AI development pipelines to integrate compliance checkpoints at earlier stages of the build process rather than treating regulation as a post-deployment concern.

The UK's Parallel Trajectory

While the EU has opted for a comprehensive legislative approach, the United Kingdom has pursued a different regulatory philosophy since leaving the bloc — one that is more principles-based and distributed across existing sectoral regulators such as the Financial Conduct Authority, the Information Commissioner's Office, and Ofcom. However, UK regulators have been moving to tighten their own frameworks in ways that increasingly mirror the EU's direction, creating pressure on companies to manage compliance across two distinct but converging regulatory regimes.

Businesses operating in both markets have noted that while the UK approach offers more flexibility, the lack of a single binding AI law creates uncertainty. For background on how the UK's domestic AI regulatory agenda is developing, see our coverage of AI regulation rules for tech giants and how the government is tightening AI regulation ahead of EU rules. The tension between the two approaches is examined in further detail in our analysis of UK AI regulation as the EU eyes stricter rules.

Officials in London have indicated they are monitoring the EU's enforcement actions closely, particularly the outcomes of the Commission's DSA investigations, as potential indicators of what an effective enforcement model looks like in practice. Some policy analysts have suggested that if the EU's approach produces clear compliance improvements without significant harm to the innovation ecosystem, UK legislators may face renewed pressure to adopt a more codified statutory framework of their own. Related reporting on the evolving UK position is available in our coverage of the UK AI regulation framework and the government's work on AI safety rules under new regulation.

What Comes Next

The AI Act's phased implementation means that additional obligations will continue to come into force on a rolling basis. The prohibition on unacceptable-risk AI systems applies first, followed by requirements for high-risk systems in regulated sectors, with the full framework expected to be operational across all categories within the next few years. The Commission has indicated it intends to issue additional guidance, technical standards, and regulatory sandboxes — controlled testing environments where companies can develop and trial AI systems under regulatory supervision — to assist organisations in meeting their obligations.

Enforcement actions under the DSA are expected to intensify, with Commission officials indicating that the first significant fines could serve as deterrent signals to the broader industry. According to MIT Technology Review, the degree to which regulators follow through with substantial financial penalties in early cases will be decisive in determining whether the framework commands genuine industry compliance or becomes a largely procedural exercise. For a regulatory project of this scale and ambition, that question remains, for now, open.
