UK Proposes Stricter AI Safety Rules for Tech Giants

New legislation aims to increase oversight of high-risk systems

By ZenNews Editorial

The UK government has proposed sweeping new legislation that would impose legally binding safety obligations on companies developing and deploying high-risk artificial intelligence systems, marking one of the most significant regulatory interventions in the technology sector in recent memory. The proposals, which target large-scale AI models capable of causing widespread societal harm, have drawn both cautious praise from safety advocates and sharp resistance from major technology firms operating in the country.

The draft framework would require developers of powerful AI systems — broadly defined as models trained on vast quantities of data and capable of performing tasks across multiple domains without specific programming — to register their systems with a new national oversight body, conduct mandatory safety evaluations before deployment, and report incidents where their technology causes or contributes to serious harm. Officials said the rules are intended to close the gap between the pace of AI development and the existing patchwork of voluntary commitments that currently govern the industry.

Key Data: According to analysis from Gartner, more than 70% of enterprises globally are expected to be running AI-assisted operations within the next two years, up from under 15% just four years ago. IDC projects global AI spending will exceed $300 billion annually by the middle of this decade. A separate report from MIT Technology Review found that fewer than one in five AI incidents affecting consumers in the UK were publicly disclosed by the companies responsible, underscoring the transparency gap the new legislation aims to address.

What the Proposals Actually Say

At the heart of the government's proposals is a tiered classification system that sorts AI applications by the level of risk they pose to individuals and society. Systems deemed to carry the highest risk — including those used in healthcare diagnostics, criminal justice, critical national infrastructure, and large-scale content generation — would face the most stringent requirements. Lower-risk systems, such as spam filters or recommendation engines, would continue to operate under lighter-touch voluntary codes.

The Definition of "High-Risk" AI

One of the most contested elements of the proposals is how high-risk AI is defined. Officials said the government intends to draw the boundary around systems that operate autonomously, make or substantially influence decisions affecting individuals' rights or access to services, or are capable of generating content at a scale that could meaningfully influence public discourse. Critics from industry groups argue this definition is broad enough to capture a wide range of commercial products, potentially creating compliance burdens that disadvantage UK-based companies relative to overseas competitors operating outside the country's jurisdiction.

Mandatory Testing and Red-Teaming Requirements

Under the proposed rules, companies would be required to carry out what regulators describe as "pre-deployment evaluations," a process in which AI systems are tested for harmful outputs, bias, and failure modes before being made available to the public. This includes adversarial testing — sometimes called red-teaming — in which specialists attempt to manipulate systems into producing dangerous or misleading content. According to Wired, several of the largest AI laboratories currently conduct voluntary versions of these evaluations, but the government's proposals would make them a legal prerequisite, with independent auditors empowered to verify results.

The Companies in Scope

The legislation is widely understood to target the global technology giants that develop the most capable AI systems, including companies headquartered in the United States that operate at scale within the UK market. However, officials said the obligations would apply to any company offering high-risk AI services to UK users or businesses, regardless of where the company is based — a provision likely to generate legal challenges under international trade frameworks.

| Company / Platform | Key AI Products | Potential Regulatory Exposure | Current Compliance Position |
| --- | --- | --- | --- |
| OpenAI | GPT-4, ChatGPT, Sora | High — large-scale content generation, enterprise API | Voluntary safety commitments; no UK-specific legal obligations currently |
| Google DeepMind | Gemini, AlphaFold, Veo | High — healthcare, research, consumer products | Participates in voluntary AI Safety Summit framework |
| Meta | Llama models, AI assistants | Medium-high — open-weight models complicate oversight | Published model cards; resists mandatory pre-deployment review |
| Microsoft | Copilot, Azure AI, Bing AI | High — enterprise-scale deployment across regulated sectors | Internal responsible AI standards; EU AI Act alignment in progress |
| Amazon Web Services | Bedrock, Alexa AI, Titan models | Medium — infrastructure provider with downstream liability questions | Shared responsibility model; limited public safety disclosures |
| Anthropic | Claude model family | High — frontier model developer, UK investment recipient | Constitutional AI approach; has engaged with UK AI Safety Institute |

Open-Source Models Present a Distinct Challenge

Regulators acknowledge that AI models released under open licences — meaning anyone can download, modify, and deploy them without the original developer's involvement — present a fundamentally different enforcement problem. When a company releases model weights publicly, as Meta has done with its Llama series, the developer loses meaningful control over how the system is used. Officials said the government is consulting on whether liability in such cases should attach to the original developer, the entity that deploys the model in a specific context, or both. MIT Technology Review has noted that this question is unresolved in every major regulatory framework currently under development globally.

Oversight Architecture and Enforcement Powers

The proposals envisage a central AI Authority — likely an expansion of the existing AI Safety Institute established following the government's international AI Safety Summit — that would coordinate oversight across existing sector regulators. The financial services regulator, the health and social care regulator, and Ofcom, which oversees broadcasting and online platforms, would each retain jurisdiction over AI applications in their respective domains, with the central authority setting baseline standards applicable across all sectors.

Sanctions and Enforcement Thresholds

Officials said the proposed penalties for non-compliance are structured to reflect the scale of the companies involved, with maximum fines set as a percentage of global annual turnover rather than fixed monetary amounts — a model drawn directly from the General Data Protection Regulation, the EU's landmark data protection law, under which fines can reach 4% of a company's worldwide annual turnover. Companies that repeatedly fail to comply or that conceal safety incidents from regulators could face temporary suspension of their AI services in the UK market. For context, similar provisions under the EU AI Act, which applies across member states, have already prompted several technology firms to accelerate their internal compliance programmes, according to Gartner.

Industry Response and Political Pressure

Technology companies and their trade associations have responded to the proposals with a mixture of cautious engagement and pointed criticism. Several firms argue that prescriptive pre-deployment testing requirements are premature given that scientific consensus on how to measure AI risk is still developing. Others have raised concerns that mandatory incident reporting could create legal liability that discourages voluntary transparency, producing precisely the opacity that regulators say they are trying to eliminate.

For context on how earlier digital legislation navigated similar industry opposition, see our coverage of how UK delays to the Online Safety Bill unfolded as tech giants challenged the rules — a precedent that many observers say offers a cautionary lesson about the gap between legislative ambition and practical enforcement.

Business groups have also pointed to the risk of regulatory divergence, noting that the UK is simultaneously trying to position itself as an AI investment destination following its departure from the European Union's single market. If UK rules are perceived as more burdensome than those in competing jurisdictions, officials risk pushing AI development and talent to the United States or continental Europe.

How the UK Framework Compares to International Approaches

The UK's proposed framework sits between two poles of global AI regulation. The EU's AI Act, which has entered a phased implementation period, takes a highly prescriptive product-safety approach — classifying specific use cases as prohibited, high-risk, or minimal-risk and imposing corresponding requirements regardless of which company develops the underlying technology. The United States, by contrast, has relied primarily on executive orders, voluntary commitments from major AI developers, and sector-specific guidance, stopping well short of comprehensive binding legislation.

The UK government has explicitly stated it does not intend to replicate the EU's approach in full, preferring what officials describe as a "pro-innovation" model that empowers existing regulators rather than creating a new centralised bureaucracy. Whether that approach will be considered credible by safety researchers and international partners remains an open question. Wired has reported that several leading AI safety academics believe the current UK proposals do not go far enough to address the risks posed by the most capable frontier models — systems trained at enormous computational scale that may exhibit emergent behaviours their developers did not anticipate or fully understand.

Readers following the development of these proposals may also wish to consult our earlier analysis of the UK's stricter AI safety framework for tech giants, as well as our subsequent reporting on the tightening of AI regulation rules in the months leading up to the current legislative push.

What Comes Next

The proposals are currently in a formal consultation phase, during which companies, civil society organisations, academic institutions, and members of the public can submit evidence and arguments to inform the final shape of the legislation. Officials said a response to the consultation, along with a revised draft bill, is expected before the end of the current parliamentary session.

Parallel to the domestic process, the UK is engaged in multilateral discussions on AI governance at the G7, the OECD, and through the AI Safety Institute's international partnership network, which includes counterpart bodies in the United States, Japan, Canada, and Australia. Whether these international conversations will produce the kind of coordinated standards that would reduce compliance fragmentation for globally operating companies remains to be seen.

For ongoing coverage of how this legislation evolves through parliament, including reactions from the AI Safety Institute and independent technical experts, see our continuing series on the UK's stricter AI safety rules for tech giants and our report examining the broader regulatory picture.

The outcome of this legislative process will likely determine whether the United Kingdom emerges as a credible third pole in global AI governance — distinct from the US model of industry self-regulation and the EU's rights-based legal architecture — or whether the gap between stated ambition and enforceable rules once again frustrates those calling for meaningful accountability in one of the most consequential technology sectors of the current era.
