An introduction to AI policy and law
Governments worldwide are figuring out how to manage artificial intelligence. The decisions they make affect which AI tools you can use, how companies build AI systems, and who's responsible when things go wrong.
The UK offers a clear example of how AI policy has evolved. Over the past four years, the country has shifted from treating AI purely as an economic opportunity to recognising it as critical national infrastructure.
How the UK's approach began
In 2021, the UK published its National AI Strategy. The goal was straightforward: position the UK as a "science superpower" by moving faster and with fewer rules than the European Union.
The strategy focused on three areas: investment in AI research and development, skills and talent development, and light-touch governance that wouldn't slow down innovation. At this stage, AI was seen as an economic opportunity. It wasn't treated as a potential systemic risk.
The shift toward safety
By March 2023, the government's thinking had evolved. The AI Regulation White Paper proposed a "pro-innovation" framework. Instead of creating a single AI regulator, the plan gave existing regulators five principles to apply: safety, transparency, fairness, accountability, and contestability. Regulators were to apply these principles within their sectors, supported by voluntary guidance.
Then generative AI changed the conversation. Systems like ChatGPT demonstrated capabilities that shifted public perception almost overnight. AI suddenly looked more powerful and less predictable than many had assumed.
The UK government responded by hosting the first global AI Safety Summit in November 2023. This summit produced the Bletchley Declaration. The declaration reframed AI as both an innovation opportunity and an international safety concern.
The government also created the AI Safety Institute. This body evaluates frontier AI models for risks. It was later renamed the AI Security Institute to reflect growing concerns about AI's security implications.
Policy continuity after the election
A general election in July 2024 brought a new government to power. Despite the political change, AI policy remained largely consistent.
The new government didn't adopt EU-style comprehensive AI legislation. It kept the sector-led regulatory model. But the framing shifted. AI was increasingly described as dual-use infrastructure: economically valuable, but also carrying potential systemic risks.
AI as national capability
In January 2025, the AI Opportunities Action Plan was published. This independent but government-backed report made a clear recommendation: AI adoption should be accelerated across the entire economy.
The plan explicitly linked AI to national capability. Productivity, defence, cyber resilience, and state capacity all depended on it. This marked a clear shift. AI was no longer just a technology sector issue. It was a matter of national security and resilience.
Two supporting documents followed. The Blueprint for a Modern Digital Government in January 2025 set out how public services should adopt AI. The AI Playbook for the UK Government in February 2025 provided practical guidance for doing so responsibly and securely.
The infrastructure question
By late 2025, policy attention had moved to AI-critical infrastructure. This includes data centres, cloud platforms, managed service providers, and the supply chains that support them.
The Cyber Security and Resilience Bill, introduced in November 2025, reflects this shift. The Bill doesn't directly regulate AI. Instead, it treats cyber security and AI risk as inseparable.
The Bill makes several significant changes.
Data centres and managed service providers are now classified as essential services. This means the infrastructure that enables AI systems now faces regulatory oversight.
Organisations must report security incidents earlier. This includes suspected attacker footholds and hidden, undetected breaches already present in systems, not just visible disruption.
The government gains new powers. It can issue directions to regulators and, in extreme cases, directly to regulated entities when national security is at risk.
It creates future-proofing powers. Government can update security and resilience requirements through secondary legislation as technology evolves. This allows the framework to adapt without requiring new primary legislation.
What this means for AI governance
The UK hasn't created a single AI law. Instead, it has built a layered approach.
Existing regulators apply cross-cutting principles in their sectors. The AI Security Institute evaluates frontier models. The Cyber Security and Resilience Bill gives government stronger control over AI-critical infrastructure and supply chains.
This model prioritises flexibility and speed. It treats AI as infrastructure that requires security oversight, not just innovation support.
Beyond Westminster
Scotland, Wales, and Northern Ireland have developed their own approaches within this framework.
Scotland published an Artificial Intelligence Strategy focused on trustworthy and inclusive AI. It launched a public-sector AI Register to increase transparency and track government use of AI systems.
Wales embedded AI into its broader digital strategy. The focus is on ethical use, workforce protections, and preserving the Welsh language in AI applications.
Northern Ireland lacks a formal strategy but has made progress through initiatives like the AI Castle Conversation. The region has emerged as a UK leader in AI-enabled cybersecurity and financial technology.
The current state
UK AI policy now combines principles-based regulation with infrastructure-level oversight. The government has expanded its toolkit without creating comprehensive AI-specific legislation.
The emphasis has shifted. AI is no longer treated primarily as an innovation opportunity. It's now viewed as critical infrastructure that intersects with cyber security, national resilience, and public sector reform.
This evolution reflects a broader pattern. As AI systems become more capable and more embedded in essential services, governments are treating them less like consumer products and more like critical national infrastructure.