The UK’s AI Journey (2021–2025)

Since publishing the National AI Strategy in 2021, the UK government has introduced a series of key policies to shape its approach to artificial intelligence.

In March 2023, it released the AI Regulation White Paper, setting out five core regulatory principles—safety, transparency, fairness, accountability, and contestability—with voluntary guidance issued to sector regulators. Later that year, in November 2023, the UK hosted the first global AI Safety Summit, resulting in the Bletchley Declaration, which highlighted the need for international cooperation on AI risks. This was followed by the creation of the AI Safety Institute, tasked with evaluating frontier AI models.

More recently, the independent but government-backed AI Opportunities Action Plan (January 2025) provided detailed recommendations to accelerate AI adoption across the economy. Complementing this, the Blueprint for a Modern Digital Government (January 2025) and the Artificial Intelligence Playbook (February 2025) offered practical guidance for integrating AI into public services responsibly, securely, and transparently.

The devolved governments of Scotland, Wales, and Northern Ireland have also shaped AI policy in line with local priorities. Scotland, through its dedicated Artificial Intelligence Strategy, has prioritised trustworthy and inclusive AI, introducing a public-sector AI Register. Wales has embedded AI into its digital strategy, with an emphasis on ethical use, workforce protections, and support for the Welsh language. Northern Ireland, though without a formal strategy, is progressing through initiatives like the AI Castle Conversation, and has emerged as a UK leader in AI-enabled cybersecurity and FinTech.

This article traces the UK’s evolving AI policy landscape from 2021 through to April 2025.

 

The Launch of the National AI Strategy (2021)

In September 2021, the UK released its first National AI Strategy, marking the start of a “step-change” in its AI ambitions.

The strategy set out a decade-long vision for AI to drive economic growth, improve public services, and strengthen the UK’s status as a global science and tech power. It built on earlier foundations – notably the 2017 Industrial Strategy and a 2018 £1 billion AI Sector Deal – and responded to expert guidance from the government’s AI Council, an advisory body of academics and industry leaders formed in 2019. In fact, the AI Council’s January 2021 AI Roadmap had explicitly urged the government to develop a national strategy, laying out recommendations that helped shape the final plan.

Key pillars of the National AI Strategy included boosting AI research and development, supporting job creation and skills, and spreading AI benefits across all sectors and regions of the UK. Notably, it envisioned a “world-leading” approach to AI governance – one that would be pro-innovation and agile while also ensuring public trust and safety. But rather than immediately imposing new strict rules, the strategy spoke of enabling “a progressive regulatory and business environment” to let AI flourish responsibly. This set the tone for the UK’s distinct regulatory philosophy in the coming years. Early on, the strategy also emphasised inclusivity and ethics, aligning with values echoed in the devolved nations – for instance, Scotland’s own AI Strategy (launched March 2021) which prioritised trustworthy, ethical and inclusive AI development.

To kick-start implementation, the government created an AI Action Plan in July 2022 to track progress on the strategy’s commitments. By this time, initial steps were underway. New funding programmes for AI startups and academia were announced, and an AI Standards Hub was launched in October 2022 (led by the Alan Turing Institute in partnership with the British Standards Institution and National Physical Laboratory) to shape global AI technical standards. The government also piloted an Algorithmic Transparency Standard, one of the first of its kind globally, to guide public sector organisations in publishing how they use algorithmic or AI tools. This transparency initiative – developed with the Centre for Data Ethics and Innovation – was trialled across several departments in 2022 as part of ensuring accountable use of AI in the public sector.

At the same time, the broader regulatory landscape was shifting. The EU was busy drafting its AI Act (a law taking a more prescriptive, risk-tiered approach), but post-Brexit Britain was charting its own course. By the end of 2022, it was clear the UK would pursue a different, more flexible model for AI governance, foreshadowed by policy papers emphasising a “pro-innovation” stance.

 

A Government Reorganised for Tech and AI

Early 2023 brought institutional change at the heart of Whitehall. In February, Prime Minister Rishi Sunak’s administration created a dedicated Department for Science, Innovation and Technology (DSIT), elevating science and tech policy to the Cabinet level for the first time. DSIT was formed on 7 February 2023 by carving out the tech, digital and science portfolios from existing departments. This reorganisation answered long-standing calls to give innovation a “top seat at the Cabinet table” and was warmly welcomed by industry. The new department brought under one roof the “five technologies of tomorrow” – AI, quantum computing, engineering biology, semiconductors, and future telecoms – along with responsibility for life sciences and green tech. By concentrating these areas, the government signalled an intent to drive a more coherent tech strategy. Michelle Donelan, the first Science, Innovation and Technology Secretary, described DSIT’s mission as ensuring the UK becomes “the most innovative economy in the world and a science and technology superpower”.

For AI policy, the creation of DSIT meant that the small Office for AI (previously a joint unit under the business and digital ministries) was now housed within a powerful central department. This structural change was more than bureaucratic – it indicated the UK’s commitment to give AI leadership higher priority and visibility. DSIT immediately got to work on a major policy package: a new Pro-Innovation AI Regulation White Paper. Published in March 2023, this white paper – “AI Regulation: a pro-innovation approach” – set out the UK’s plan for governing AI in a markedly different way from more rigid regimes taking shape elsewhere.

 

Embracing a “Pro-Innovation” Regulatory Model

At the core of the 2023 White Paper was the UK’s decision not to create a single omnibus AI law or a new AI regulator, in contrast to the EU’s sweeping AI Act. Instead, the government proposed an agile, principles-based framework leveraging existing regulators. Under this model, sectoral regulators – like the Competition and Markets Authority (CMA), Information Commissioner’s Office (ICO), Financial Conduct Authority (FCA), Ofcom, and others – would be tasked to oversee AI within their domains, guided by a set of five cross-cutting principles to ensure consistency.

These cross-sector principles, intended to be embedded in regulators’ guidance and rules, were: (1) Safety, security and robustness; (2) Appropriate transparency and explainability; (3) Fairness; (4) Accountability and governance; and (5) Contestability and redress. In essence, any organisation deploying AI in the UK should strive to make its systems secure and reliable, explainable to a degree appropriate for the context, non-discriminatory, subject to proper oversight, and open to challenge or remedy if things go wrong.
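Although the framework imposes no single compliance checklist, one way to picture how an organisation might operationalise the five principles is as a simple self-assessment structure. The sketch below is purely illustrative – the questions are paraphrased from the principles as described above, not an official DSIT or regulator checklist, and the helper function is a hypothetical convenience.

```python
# Illustrative only: a self-assessment structure loosely derived from the five
# cross-sector principles. The questions paraphrase the White Paper's wording;
# this is not an official checklist from DSIT or any regulator.
PRINCIPLES = {
    "Safety, security and robustness": [
        "Does the system behave reliably on both expected and unexpected inputs?",
        "Are the model, its data and its pipeline protected against tampering?",
    ],
    "Appropriate transparency and explainability": [
        "Can outputs be explained to a degree appropriate for the context of use?",
    ],
    "Fairness": [
        "Has the system been checked for discriminatory outcomes across relevant groups?",
    ],
    "Accountability and governance": [
        "Is a named owner responsible for oversight of the system and its outcomes?",
    ],
    "Contestability and redress": [
        "Can an affected person challenge a decision and obtain a remedy?",
    ],
}

def outstanding_questions(evidence: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return, per principle, the questions for which no evidence has been recorded."""
    return {
        principle: [q for q in questions if q not in evidence.get(principle, set())]
        for principle, questions in PRINCIPLES.items()
    }
```

In practice, each regulator translates the principles into sector-specific expectations, so any real checklist would look quite different for a hospital diagnostic tool than for a trading algorithm.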

Crucially, the UK’s approach was non-statutory (at least initially) – no new AI Act was proposed. Regulators would use existing powers under laws for data protection, competition, consumer protection, etc., to apply these principles to AI. The government argued this was the best way to avoid stifling innovation in a fast-moving field. By choosing not to introduce new prescriptive legislation at this stage, the UK diverged from the EU’s path. (The EU’s AI Act, by contrast, is a binding law with detailed rules and obligations by risk category, from bans on certain “unacceptable” AI uses to strict requirements for “high-risk” systems. The UK judged that an overly rigid approach would risk undermining its AI sector’s agility and competitiveness.) Instead, Britain opted for a lighter-touch regime that could adapt as AI technology evolved – an approach often likened to its stance on fintech regulation, which favoured sandbox experimentation and principles over hard-and-fast rules.

How the UK model works

Rather than having, say, an “AI compliance act” applicable to all AI systems, a medical AI diagnostic tool in the UK would continue to be overseen by health regulators and data laws, a trading algorithm by financial regulators, an AI-enabled recruitment system by equality and data protection law, and so forth – but all informed by the common principles of safety, transparency, fairness, accountability and contestability. DSIT would issue guidance to regulators on implementing the principles, and a central monitoring function would observe how the framework operates.

The White Paper emphasised that this flexible approach was “proportionate, future-proof and pro-innovation”, aiming to both support innovation and address risks. It was also touted as collaborative – working with industry and regulators – and iterative, meaning the government didn’t rule out future changes (even legislation) if needed. Indeed, the paper included measures to monitor AI developments and “future-proof” the framework by remaining open to new risks.

The “pro-innovation” philosophy also meant the UK placed a premium on not duplicating rules. For example, rather than new AI-specific safety certification, the existing product safety regimes and standards would cover AI; instead of new AI bias laws, the Equality Act and ICO guidance on AI and data bias would be used. This pragmatic reuse of current laws was meant to avoid burdening developers with parallel regulations. The government did, however, acknowledge gaps – such as the need for better guidance and expertise. So it promised to resource regulators with new funding to build AI capability and to develop tools like sandbox environments and codes of practice. (By late 2023, DSIT had indeed announced £100 million to support AI regulation and skills for regulators, including creating an AI research hub in healthcare regulation.)

Comparison with other regimes

The UK often contrasted its agile approach with the EU’s heavier rulebook and even the U.S.’s approach, which, while also pro-innovation, was beginning to introduce its own measures (such as the non-binding AI Bill of Rights blueprint and binding state-level laws). Britain positioned itself as pursuing a middle path: not “AI lawlessness,” but a light-touch, adaptive rulebook. One immediate difference was that the UK chose guidance over statute – the EU’s AI Act (finalised in 2024) imposes new legal obligations across all member states, whereas the UK’s 2023 framework was essentially policy guidance to regulators.

Another difference was scope and focus. The EU Act applies broadly (to providers and users of AI with tiers of risk); the UK framework relies on sector regulators, so enforcement is spread out and context-specific. Also, the EU was creating new oversight bodies (like a European AI Board), whereas the UK explicitly decided not to set up a new AI regulator, instead empowering its existing institutions. This patchwork could be seen as more flexible – or as lacking teeth, depending on one’s view. Throughout 2023, UK officials defended their choice, arguing it would spur innovation by not over-regulating early, while still addressing the most important risks in context.

The White Paper went through consultation in spring 2023, garnering feedback from industry, academia and civil society. By late 2023, ministers indicated they would largely stick to this model, but with some refinements. In February 2024 the government published the consultation outcome, in which it confirmed the principles-based approach and announced additional measures. For example, it decided to mandate transparency for public sector use of AI (we’ll see more on this later) and hinted at possible future statutory powers to ensure compliance for the most advanced AI systems if voluntary approaches failed. Notably, regulators were asked to publish their own AI implementation plans by April 2024, demonstrating how they would uphold the principles in their sectors. This coordination was aided by a special body: the Digital Regulation Cooperation Forum.

 

AI and the Law

While the UK has opted for a regulatory approach that avoids comprehensive AI legislation in favour of sector-specific enforcement, 2025 has brought mounting pressure for a more formal legal framework. This pressure has come from both Parliamentary initiatives and wider societal debate — particularly around copyright and data use in AI model training.

In March 2025, a Private Member’s Bill titled the Artificial Intelligence (Regulation) Bill was introduced in the House of Lords. The Bill proposes the creation of a statutory framework for AI governance, including the establishment of an independent AI Authority to oversee compliance, enforce standards, and coordinate regulation across sectors. The proposal reflects ongoing calls — particularly from consumer groups and some legal experts — for a more centralised model of AI oversight. However, it is not supported by any government department, and its legislative future remains uncertain. The current administration has reiterated its preference for a pro-innovation, regulator-led model rather than binding statutory obligations.

The same period has seen intensifying debate around copyright law and AI training data. In December 2024, DSIT and the Intellectual Property Office (IPO) published a consultation on Copyright and Artificial Intelligence, suggesting possible reforms including a copyright exemption for AI developers and a new rights reservation model. The proposals drew significant backlash from creative industries, with more than 11,000 responses submitted and vocal opposition from publishers, authors, and artists.

The issue escalated in January 2025, when Baroness Kidron tabled amendments to the Data (Use and Access) Bill to strengthen copyright protections for creators whose works could be used in AI training. The House of Lords passed the amendments on 28 January, setting the stage for a debate two days later. In the Lords discussion, figures such as Lord Black of Brentwood warned of damaging consequences for the UK’s creative economy, while Minister Lord Vallance of Balham defended the government’s proposals, suggesting metadata and watermarking might offer a technical path forward.

The Commons took up the issue again in April, with a dedicated debate on Intellectual Property and AI. MPs cited growing concern from creative professionals and trade bodies, including evidence that AI tools were already displacing human creators. A particular flashpoint was the reported use of 7.5 million pirated books to train Meta’s Llama 3 model. Closing the debate, Minister Chris Bryant acknowledged the complexity of the issue, committing to further consultation with creative industries and stating: “Artificial intelligence was made for humanity by humanity, not humanity made for artificial intelligence… we need to make sure that we get the balance right.”

Together, these developments reveal a UK legislative landscape that is active but unsettled — shaped by a government prioritising innovation and flexibility, and by lawmakers and stakeholders calling for clearer boundaries, protections, and statutory accountability.

 

Regulators and Cross-Sector Cooperation

The success of the UK’s AI governance model rests heavily on the shoulders of its regulators. In the past few years, these agencies have ramped up efforts to address AI within their remits, increasingly collaborating to present a united front.

The Digital Regulation Cooperation Forum (DRCF) – an alliance of the CMA, ICO and Ofcom formed in 2020, joined by the FCA in 2021 – became a key venue for joint work on AI issues. Through 2022–2025 the DRCF has helped coordinate research on algorithmic auditing, tools for explainability, and common positions on AI transparency and online safety. For innovators uncertain which rules might apply to a new AI product (for example, one that touches on privacy, competition and online harms at once), the DRCF even set up a multi-regulator advice service, the AI and Digital Hub. This collaborative spirit has helped the UK’s multiple regulators avoid working in silos on AI.

Each regulator also launched its own AI initiatives:

  • The CMA (Competition and Markets Authority) in 2023 conducted a pioneering review of AI Foundation Models – the large AI models like GPT-4 that underlie many services. In an initial report published in September 2023, the CMA assessed how the emergence of powerful foundation models could affect competition and consumers. Rather than proposing to regulate the models directly, the CMA set out guiding principles for foundation model developers and deployers, such as accountability, transparency about limitations, and ensuring open choice to avoid market lock-in. In April 2024 it followed up with an update, noting rapid industry changes and reiterating that while competition law can address anti-competitive behaviour in AI, a collaborative approach with companies is preferred for now. The CMA’s work signalled to AI firms that the UK is watching for anti-competitive practices (like a few big players dominating crucial AI infrastructure), but also that it welcomes innovation that benefits consumers.

  • The Information Commissioner’s Office (ICO), the UK’s data protection regulator, has long been focused on AI’s impact on privacy and data rights. The ICO updated its guidance on AI and data protection in 2022, publishing an AI risk toolkit to help organisations comply with UK GDPR when using AI. It stressed issues like ensuring algorithms don’t unlawfully bias decisions or mishandle personal data. The ICO also worked on transparency – for instance, urging organisations to explain AI-assisted decisions to individuals. By 2023, the ICO joined other regulators in the DRCF to examine algorithmic auditing, and it launched sandbox programmes allowing AI developers to work with regulators on novel uses (like privacy-preserving machine learning techniques). Plans to reform UK data laws through the Data Protection and Digital Information Bill, initially introduced in 2023, aimed to clarify requirements around automated decision-making and update broader data regulations. However, the Bill did not complete its legislative journey before Parliament dissolved in May 2024 and was subsequently abandoned. In its place, the incoming Labour government introduced the Data (Use and Access) Bill (DUA Bill) in October 2024, proposing significant reforms. Notably, it aims to replace the Information Commissioner’s Office (ICO)—currently structured as a ‘corporation sole’ with authority vested in a single commissioner—with the Information Commission, a new statutory body corporate governed by a board. This structural change would align data protection oversight with other UK regulators, such as Ofcom and the Competition and Markets Authority, promoting collective decision-making and enhanced accountability. Throughout these developments, the ICO has maintained its position that existing data protection laws already provide sufficient authority to address harmful AI applications, such as discriminatory recruitment algorithms or misuse of personal data, and that new AI-specific privacy laws remain unnecessary. The Information Commissioner has provided an official response to the DUA Bill, available via the ICO website. As of April 2025, the Bill continues to progress through Parliament.

  • The Financial Conduct Authority (FCA) and the Bank of England also engaged closely with AI, given the increasing use of AI in financial services – from algorithmic trading to credit scoring and fraud detection. The FCA co-led, with the Bank’s Prudential Regulation Authority, a public-private forum on AI in financial services in 2022, which concluded that while AI can improve risk management and customer service, it also raises questions about model risk, ethics and accountability in finance. In 2023, the FCA signalled that existing rules like the Senior Managers Regime (which holds executives accountable for failures) and the new Consumer Duty (requiring financial firms to avoid consumer harm) apply to AI usage as well. Essentially, if a bank deploys an AI decision system, it must still treat customers fairly and have humans accountable for its outcomes. The Bank of England, for its part, issued principles on Model Risk Management that explicitly cover machine learning models, ensuring banks properly validate and monitor their AI models. By 2024, the FCA was actively expanding its support for AI innovation, particularly within the fintech and insurtech sectors. Rather than launching a separate standalone ‘AI sandbox’, the FCA introduced targeted initiatives as part of its broader innovation framework. These included establishing an AI Lab within its Innovation Services, creating dedicated channels for firms to access AI-related insights and regulatory expertise. In January 2025, the FCA hosted events such as the AI Sprint, bringing together industry, technologists, and regulators to discuss effective regulatory approaches and underline the importance of safe, controlled testing environments. Building on this momentum, on 29 April 2025, the FCA published an engagement paper outlining its proposal for AI Live Testing—a new structured programme enabling fintech and insurtech companies to develop, test, and deploy live AI models in partnership with FCA regulatory teams before full market entry. This initiative represents a significant step forward in providing robust regulatory support for responsible AI innovation.

  • Ofcom, the communications regulator, traditionally oversees broadcasting, telecoms and (newly) online content. Ofcom grew interested in AI both as a tool used by the industries it regulates (e.g. generative AI in social media or news) and as a regulatory aid. In mid-2023, Ofcom published a discussion note, “What generative AI means for the communications sector”, highlighting risks such as AI-generated misinformation, deepfakes, and the impact on media plurality. It clarified that existing broadcasting rules (like accuracy in news) do cover synthetic media – for instance, a deepfake on TV would breach the code if it misled viewers. Ofcom also began examining technical solutions like deepfake detection, and engaged with video game and social media firms about AI moderation. Notably, as the UK’s Online Safety Bill was finalised (becoming law in 2023, giving Ofcom duties to oversee online platforms), Ofcom had to consider how companies’ AI-driven recommendation algorithms and content filters could be made transparent and safe for users. The regulator started working with platforms on voluntary measures to audit and adjust their AI systems that shape user content feeds, to comply with online safety objectives. By 2025, Ofcom was looking at AI both as a threat (e.g. bots spreading harmful content) and a tool (e.g. using AI to detect prohibited content at scale).

  • The Digital Regulation Cooperation Forum (DRCF) continued to play a key role in coordinating AI oversight across the UK’s major digital regulators. In its 2022–23 workplan, the DRCF made algorithmic processing one of its primary areas of focus, aiming to better understand the benefits, risks, and governance challenges associated with algorithms, particularly in high-impact sectors like finance and online platforms. As part of this, it published two foundational discussion papers on algorithmic auditing and the benefits and harms of algorithms, laying the groundwork for future regulatory collaboration. Building on this, the DRCF’s 2022–23 annual report detailed further progress, including deeper engagement with algorithmic auditing practices and enhanced cross-regulator coordination. One key area of ongoing activity was support for public sector algorithmic transparency, complementing the UK government’s rollout of the Algorithmic Transparency Recording Standard, which became mandatory for all central government departments in February 2024. In 2023–24, the DRCF sustained this focus through joint research and regulatory coordination, and by early 2025, it had embedded algorithmic auditing and transparency into its forward workplan. This included developing practical tools and guidance for regulated sectors, advising government on the implementation of the UK’s AI regulation principles, and facilitating cross-sector dialogue on managing algorithmic risks. The DRCF also played a convening role internationally, co-hosting a GovTech roundtable in The Hague and strengthening its position as a central hub for collaborative digital regulation in the UK.

In practical terms, by 2025 the UK’s array of regulators collectively form the enforcement mechanism of its AI principles. They also have a channel to escalate issues to DSIT if they feel new powers are needed. So far, they have favoured guidance over punishment. No major AI-specific fines or enforcement actions have been issued (most interventions have used existing law, like the ICO fining a bank whose AI system breached privacy, or the CMA blocking a tech merger due in part to AI data concerns). But the groundwork is laid for a more interventionist stance if egregious harms emerge. This incremental approach – sometimes dubbed “go-slow” – drew a mix of praise and criticism: industry generally lauded the UK for being pragmatic and innovation-friendly, while some civil society voices worried it relied too much on voluntary corporate responsibility. The government’s bet is that, with regulators coordinating and engaging constructively with companies, the UK can manage AI risks without curbing the technology’s benefits. Time will tell if this cooperative oversight strikes the right balance, but it has certainly set the UK apart.

 

The UK as an AI Safety Champion on the Global Stage

Beginning in 2023, the UK cast itself not just as an AI innovator, but as a global leader in AI safety and governance. With concerns growing worldwide about the rapid advancement of “frontier” AI (highly advanced general models that could pose significant risks if misused or if they behave unpredictably), the UK government seized an opportunity to convene and lead international efforts on this front.

A major step was the creation of the Foundation Model Taskforce in April 2023, later referred to as the Frontier AI Taskforce. Announced with £100 million in funding, this taskforce was explicitly modelled on Britain’s Vaccine Taskforce (which had accelerated COVID-19 vaccine development). The goal was to similarly “accelerate the UK’s capability in safe and reliable foundation models”, ensuring the country has sovereign expertise in these powerful AI systems. Importantly, the taskforce had twin objectives: boosting AI capability (so UK innovators can build and adopt advanced AI) and safety research (so these models are developed responsibly). Tech entrepreneur Ian Hogarth was appointed as its chair in June 2023, reporting directly to the Prime Minister and DSIT. Under Hogarth’s leadership, the taskforce brought together experts from government, academia, and industry to investigate the frontier of AI, carrying out cutting-edge AI safety research ahead of an unprecedented global summit the UK planned to host.

In November 2023, the UK hosted the world’s first AI Safety Summit at Bletchley Park, the historic site of WWII codebreaking. This high-profile event convened governments from around 28 countries – including the US, China, EU, India, and others – as well as tech company leaders and AI experts, to discuss the risks posed by advanced AI and how to mitigate them. The choice of venue (Bletchley Park’s mansion, a birthplace of modern computing) was symbolic of Britain’s technological heritage and its intent to shape the future of AI in a positive direction. Over two days, delegates grappled with scenarios like AI being used for bioweapon design, autonomous cyber attacks, or even the far-off prospect of “existential risk” from a superintelligent AI gone rogue. While acknowledging AI’s huge benefits, the summit emphasised a need for guardrails on the most powerful systems – often termed “Frontier AI” in the discussion.

One tangible outcome was the Bletchley Declaration – a joint statement by the attending countries affirming that AI should be developed and used in a manner that is safe, human-centric, trustworthy and responsible. This declaration, while non-binding, was significant as a first explicit multi-nation agreement on AI safety principles. It recognised the “unique risks” posed by the latest AI and the need for international cooperation to manage them. Even countries with divergent views on regulation, like the US and China, found common ground in supporting further collaboration on technical research for AI safety.

Perhaps the most headline-grabbing announcement from Bletchley was a new “landmark” agreement on AI model testing. Prime Minister Sunak secured commitments from leading AI firms – Amazon, Anthropic, Google, Google DeepMind, Meta, Microsoft, OpenAI, Inflection and Mistral AI – to allow governments early access to their frontier models for safety testing before wider deployment. In other words, the companies agreed (voluntarily) to share their cutting-edge AI systems with officials and experts so that potential risks and vulnerabilities could be evaluated collaboratively. The UK’s Frontier AI Taskforce had already been given access to some proprietary models, but this deal formalised and expanded such cooperation, with a consortium of “like-minded” governments including the US and EU joining in the testing programme. Sunak noted that until now “the only people testing the safety of new AI models have been the companies themselves – that must change”. Through this agreement, governments and companies would work together on pre-release AI safety testing, marking a new approach to tech governance that blends regulation with voluntary corporate responsibility and international teamwork.

To support these efforts, Sunak also announced the UK would establish a permanent AI Safety Institute, evolving from the Frontier AI Taskforce. This institute would put the taskforce’s work on a long-term footing – serving as a home for ongoing testing of advanced AI, research into AI alignment (how to ensure AI goals align with human values), and advising on AI guardrails. It reflects the UK’s intention to be the global hub for frontier AI safety research. (The United States, in parallel, indicated plans for its own AI Safety Institute under NIST, and the two countries signalled they would collaborate rather than duplicate efforts.)

Another important outcome from the summit was agreement to set up an international panel on AI risks, akin to the IPCC (Intergovernmental Panel on Climate Change) but for AI. This Global AI Council (the exact name was to be determined) would gather experts and officials from many countries to continuously assess the state-of-the-art in AI and the evolving risks, publishing periodic “State of AI Science” reports. Famed AI researcher Yoshua Bengio was enlisted to help produce the first such report, to inform the next summit. The UK took on providing the secretariat for this panel, another indication of its leadership role.

By hosting the Bletchley Park summit, the UK positioned itself as a convener and bridge-builder on AI governance. Notably, it managed to get the US and China – rivals in AI development – to sit at the same table. Domestically, this boosted the UK’s profile as an AI diplomacy leader, complementing its national “pro-innovation” narrative with a global “pro-safety” agenda. This balance was very much in line with Sunak’s messaging: that Britain will embrace AI’s opportunities wholeheartedly but also lead in addressing its gravest risks. The summit was planned to be the first of a series: indeed, a follow-up AI Safety Summit was scheduled in Seoul, South Korea in May 2024 (co-hosted by the UK and Korean governments), and France was lined up to host a third in 2025 – creating an ongoing international process.

It’s worth noting the Frontier AI Taskforce continued its work into 2024, feeding into these summits. Its experts conducted evaluations of leading AI models’ behaviour, stress-testing them for failure modes (like giving dangerous advice or leaking private data). The taskforce’s findings helped shape discussions at Bletchley and beyond. For example, if a model tended to hallucinate misinformation, that underlined the need for transparency and public awareness; if it showed capability for complex biological analysis, that raised dual-use concerns (biotech misuse). This evidence-based approach strengthened the UK’s calls for shared safety standards. In fact, at Bletchley the UK advocated for developing shared technical standards and testing infrastructure for AI safety, so that all companies can test models against common benchmarks – much as is done in cybersecurity. The idea of “secure-by-design” AI found its way into international discourse, echoing principles the UK was simultaneously advancing at home.

 

AI and National Security

AI’s rapid advancement has also been scrutinised through the lens of national security and cyber defence in the UK. Government strategy in these years increasingly treated AI as both an asset and a potential threat in the security domain. The National Cyber Strategy 2022 identified AI as a transformative technology that could boost cyber defences but also empower adversaries. By 2023, the National Cyber Security Centre (NCSC) – a branch of GCHQ – was researching AI vulnerabilities (for example, publishing guidance on risks like prompt injection attacks against large language models) and advising organisations on how to secure AI systems. As more critical infrastructure and government systems incorporate AI, ensuring those AI components are secure from hacking or manipulation became a priority.

In January 2025, the UK made a notable move by publishing a Code of Practice for the Cyber Security of AI. This AI security code is essentially a set of 13 principles for developing and deploying AI systems with security in mind, covering the entire AI lifecycle from design to decommissioning. Principles range from “raise awareness of AI security threats” and “design your AI system for security” to “secure your supply chain” and “monitor your system’s behaviour”. Although compliance is voluntary, it’s a strong framework; if an organisation pledges to follow it, they are expected to meet all “required” provisions under each principle. To help implementation, a detailed guidance document was released alongside. The UK intends to contribute this work internationally – notably, it plans to submit the code to ETSI (the European telecoms standards body) to form the basis of a global standard. In effect, Britain is trying to set the benchmark for “secure by design” AI systems globally. By encouraging developers to integrate security as a core requirement (much like has been done for Internet-of-Things devices through previous UK codes of practice), the government hopes to prevent scenarios where AI models are easily hijacked or behave unpredictably under malicious inputs.

The code of practice also defines roles in the AI supply chain (developers, deployers, end-users) and tailors advice for each. For example, developers should document their training data and models, conduct rigorous testing (Principle 9), and build in human oversight (Principle 4), whereas operators should secure the infrastructure and monitor AI behaviour in real time. These practical measures link back to the theme of accountability and robustness from the AI principles. The difference is, here they are framed specifically in terms of security threats – such as protecting against model exfiltration, data poisoning, or adversarial examples that could manipulate AI outputs.
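As a purely illustrative sketch (not drawn from the code of practice itself, which is technology-neutral), the snippet below shows one way an operator might approach the “monitor your system’s behaviour” principle for a deployed model: keep an audit log of every call and flag outputs that deviate sharply from recent behaviour. The `predict` callable and the crude length-based drift check are assumptions standing in for richer checks such as data-leakage, toxicity or policy classifiers.

```python
# Illustrative sketch only: a thin monitoring wrapper around a generic model,
# gesturing at the code of practice's "monitor your system's behaviour"
# principle. The `predict` callable and the length-based drift check are
# assumptions, not requirements from the code of practice.
import logging
import statistics
from collections import deque
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

class MonitoredModel:
    """Wraps a prediction function, keeps an audit trail, and flags unusual outputs."""

    def __init__(self, predict: Callable[[str], str], window: int = 100):
        self._predict = predict
        self._recent_lengths: deque[int] = deque(maxlen=window)

    def __call__(self, prompt: str) -> str:
        output = self._predict(prompt)
        # Audit trail: record every input/output pair for later review.
        log.info("input=%r output=%r", prompt[:200], output[:200])
        # Crude behavioural check: warn if this output's length deviates sharply
        # from the recent baseline (a stand-in for richer checks such as
        # data-leakage, toxicity or policy classifiers).
        if len(self._recent_lengths) >= 10:
            mean = statistics.mean(self._recent_lengths)
            spread = statistics.pstdev(self._recent_lengths)
            if spread and abs(len(output) - mean) > 3 * spread:
                log.warning("Anomalous output length for prompt %r", prompt[:80])
        self._recent_lengths.append(len(output))
        return output
```

A real deployment would pair something like this with the code’s other provisions – documented training data, supply-chain checks and a clear human escalation route.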

Beyond cybersecurity, AI figured into broader national security discussions. The UK’s Integrated Review of Security, Defence, and Foreign Policy (the 2021 edition and a 2023 refresh) highlighted AI as a strategic capability the nation must harness for intelligence and defence, while also shaping global norms for its responsible use. The Ministry of Defence released its own Defence AI Strategy in June 2022, committing to adopt AI across defence while adhering to ethical principles (the UK is a signatory of the US-led “Responsible AI in Defense” pledge). They established a Defence AI Centre to drive military AI innovation. By 2025, the MoD was trialling AI in areas like logistics planning, drone swarms, and intelligence analysis, always paired with human oversight. In terms of security threats, officials grew wary of AI being used for disinformation (deepfake propaganda), cyber attacks (automated hacking), and even the development of novel weapons. Consequently, the UK has been active in international talks about norms for military AI and autonomous weapons – advocating for maintaining human control over lethal force and transparency about AI use in warfare.

One intersection of AI and security is critical infrastructure resilience. The government’s National Cyber Strategy called for ensuring AI systems that manage vital services (energy grids, healthcare systems, etc.) are resilient to outages and attacks. This overlaps with secure-by-design principles. For instance, if hospitals use AI for diagnostics, the NCSC advised they should have fallback plans if the AI fails or is compromised, and procurement standards to vet the security of AI solutions.

By weaving together these efforts – codes of practice, defence initiatives, and international engagement – the UK is trying to integrate AI into its security posture in a holistic way. The messaging is that security is not an afterthought in AI deployment but a prerequisite. In government speeches, one can find references to Alan Turing’s codebreaking as an early marriage of computing and security, with the UK determined to carry that legacy into the AI era, ensuring the nation’s AI systems are “secure by design and secure by default.”

 

AI in Government and Public Services

Even as the UK champions AI innovation and global safety, it has also been turning the lens inward: how can AI improve governance and public services at home, and how should the public sector adopt AI responsibly and transparently? Between 2021 and 2025, the UK civil service and local authorities started cautiously experimenting with AI tools, while putting in place mechanisms to uphold accountability.

One flagship initiative has been the development of an Algorithmic Transparency Standard for the public sector. After a pilot in 2022, the Central Digital and Data Office (CDDO) – essentially the Cabinet Office’s tech policy unit – introduced a framework requiring government departments to publish Algorithmic Transparency Records for the AI or automated decision systems they use. In February 2024, the government took the significant step of mandating this transparency standard across all central government departments. Now, any ministry or agency using an algorithmic tool that assists or makes decisions (from simple automated triage systems to complex predictive AI models) must create a public record on GOV.UK describing the tool, its purpose, how it works, the data it uses, and its impact. This requirement currently covers central government and will extend to the wider public sector (like local councils, health authorities, police, etc.) over time. Of course, provisions exist for withholding truly sensitive details (e.g., aspects of national security algorithms), but the presumption is for openness. The UK is thus one of the first countries to institute a mandatory algorithmic transparency regime for its public sector – a direct response to concerns that automated systems could be used opaquely in government, undermining accountability.
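To make the idea concrete, the sketch below shows the kind of structured, plain-language record the standard asks for. The field names are paraphrased from the description above and the example entry is hypothetical – this is not the official ATRS schema published on GOV.UK.

```python
# Illustrative only: a structured record of the sort the Algorithmic Transparency
# Recording Standard asks departments to publish. Field names paraphrase the
# description in the text; the example entry is entirely hypothetical.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class AlgorithmicTransparencyRecord:
    tool_name: str
    owning_organisation: str
    purpose: str               # what decision or process the tool supports
    how_it_works: str          # plain-language description of the technique
    data_used: list[str] = field(default_factory=list)
    human_oversight: str = ""  # how people review or override the tool's outputs
    impact: str = ""           # who is affected and how

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical example record
record = AlgorithmicTransparencyRecord(
    tool_name="Correspondence triage assistant",
    owning_organisation="Example Department",
    purpose="Routes incoming public correspondence to the right caseworker",
    how_it_works="A text classifier suggests a category; staff confirm or correct it",
    data_used=["Correspondence text", "Historical routing decisions"],
    human_oversight="Every routing suggestion is reviewed by a caseworker",
    impact="Members of the public who contact the department",
)
print(record.to_json())
```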

Complementing the UK-central government’s efforts, devolved governments have pursued their own transparency and ethics measures. Scotland, for instance, launched the Scottish AI Register in January 2025 – making it the first part of the UK to require registration of all AI systems in use across its public sector. The Scottish AI Register (a publicly accessible website) allows citizens to see what AI applications their government and local services are using or developing, along with plain-language explanations and even a feedback mechanism. This aligns with Scotland’s AI Strategy emphasis on “trustworthy, ethical and inclusive” AI. The Scottish Government has effectively mirrored and perhaps one-upped the UK government’s transparency standard by not just publishing static records but maintaining an interactive register tracking AI projects from pilot to deployment to retirement. Such transparency is intended to build public trust and enable scrutiny – for example, if Police Scotland trialled an algorithm to deploy officers, it would appear on the register for public and expert oversight.

Wales has taken a slightly different approach, focusing on the responsible use of AI in the workplace and public services. In December 2024, the Welsh Government’s Workforce Partnership Council (which brings together government, public sector employers and trade unions) issued new guidance on the ethical use of AI in public sector workplaces. This included reports like “Managing Technology that Manages People”, addressing the rise of AI in HR and workforce management, and calling for checks and balances to protect staff rights. Wales’ approach, dubbed the “Welsh way”, emphasises social partnership – any adoption of AI in a Welsh public service should involve consultation with employees and unions, ensure transparency and human oversight, and uphold fairness so that AI doesn’t become a tool for unfair worker surveillance or automated decision-making without recourse. As Jack Sargeant, Wales’ Minister for Social Partnership, put it: “Our approach ensures AI adoption in public services is transparent and underpinned by human oversight… reflecting collaborative decision-making that prioritises fairness, job security and workforce development.” This perspective complements the UK-wide initiatives by adding a worker-centric, ethical lens to public sector AI use.

Across the UK, various pilot programmes have explored AI in service delivery. The National Health Service, building on its NHS AI Lab established in 2019, expanded trials of AI diagnostics (from reading medical scans to optimising hospital workflows). The results have been promising – e.g. AI helping detect breast cancer earlier or predicting patient readmissions – but the NHS has simultaneously been careful about validation and bias, publishing guidelines for “clinician oversight” of AI recommendations. In welfare services, the Department for Work and Pensions trialled machine learning to help identify fraud and error in benefit claims, but faced scrutiny to ensure this didn’t unfairly target certain groups. Local councils have experimented with chatbots to handle routine queries (freeing up human staff for complex cases) and with predictive analytics to improve services like waste collection or identify households in need of support. Each experiment came with a public communication about what was being done, reflecting the lesson that public trust is fragile if AI is introduced opaquely. One notable misstep was a 2020s-era attempt by some councils to use algorithms for benefits risk-scoring, which drew legal challenges over transparency. These experiences informed the push for the algorithmic transparency standard.

To foster accountability, the UK also looked at impact assessment tools. By 2025, it is becoming standard practice for government projects involving AI to conduct an AI Ethics or Data Protection Impact Assessment before deployment, evaluating potential bias, privacy intrusion, or legal risk. The Centre for Data Ethics and Innovation (CDEI) – now rebranded as the Responsible Technology Adoption Unit (RTAU) in DSIT – has developed toolkits to assist departments in this process. The RTAU itself underwent an evolution: originally an advisory body with an independent board, it had its external advisory board disbanded in late 2023, and in February 2024 it was renamed to better reflect its practical mission. Now as an internal unit, it focuses on delivering tools and guidance for responsible AI adoption in both public and private sectors. For example, RTAU helped CDDO with the algorithmic transparency standard and has worked on guidance for fair AI procurement (ensuring when government buys an AI system, it asks the right ethical questions of vendors).

The impact on the civil service workforce is another aspect. AI, especially new generative AI such as ChatGPT-style systems, has raised the possibility of automating some routine bureaucratic tasks. The UK government by 2025 is exploring how civil servants might use AI copilots for drafting documents, summarising large volumes of information, or customer service – always under human supervision. In fact, the AI Opportunities Action Plan 2025 (discussed in the next section) explicitly mentions freeing up officials from administrative drudgery so they can focus on high-value work. However, there is also awareness of the need to train staff in using AI appropriately. The Civil Service is developing training modules on data science and AI literacy, so employees understand the strengths and limits of AI tools. There’s a parallel effort to recruit or upskill specialists in AI within government – data analysts, AI ethicists, and technical experts who can build and oversee public sector AI projects. This was actually one recommendation of the AI Council’s 2021 roadmap: to bolster the government’s internal capacity on AI, so it can be an intelligent customer and user of the tech.

Transparency and accountability also extend to outcomes. The government is conscious that if an AI system makes a decision affecting someone – say, prioritising them for a service or flagging them for an intervention – that person should have an avenue to question or appeal it. The combination of the principles of contestability and transparency are meant to ensure that AI doesn’t become a black box excuse for “computer says no” in public services. Departments like the Ministry of Justice have explored this in the context of algorithm-assisted decisions in policing or courts, insisting that humans remain accountable and that there’s a clear line of responsibility.

In summary, the UK’s public sector is cautiously adopting AI to improve services and efficiency, but doing so hand-in-hand with measures to shine a light on those algorithms and keep humans in charge. This dual approach aims to capture AI’s benefits (faster processes, better insights, cost savings) while mitigating the well-known risks of bias, error or loss of public trust. By making transparency mandatory and involving devolved governments and unions in the conversation, the UK is trying to set an example of “responsible public sector AI” in practice.

 

The Devolved Approach

The devolved governments of Scotland, Wales, and Northern Ireland have each engaged with AI in ways that reflect their local priorities, while broadly complementing the UK-wide strategy. Although regulation of AI-related matters like data protection or competition is mostly reserved to Westminster, areas like education, health, and economic development are devolved, giving these nations scope to craft their own AI policies in those domains.

Scotland has been particularly proactive. Its Artificial Intelligence Strategy (launched in March 2021) set a vision for Scotland to be a leader in trustworthy, ethical, and inclusive AI. Implementation is coordinated by the Scottish AI Alliance, a partnership between the Scottish Government and academia/industry (notably the Data Lab innovation centre). Scottish initiatives often dovetail with UK ones but put a local spin on inclusion – for example, using AI to support Gaelic language preservation or ensuring rural communities benefit from AI-driven digital health services. As mentioned, Scotland introduced the AI Register to transparently catalogue public sector AI systems. It has also funded programmes to encourage SMEs in Scotland to adopt AI responsibly and has hosted the Scottish AI Summit annually to bring together stakeholders. In terms of governance, Scotland adheres to the UK’s overall regulatory framework (the ICO covers Scotland too, for instance), but its government has voiced that it will strive for higher transparency and public engagement. A statement in late 2024 indicated Scotland intended to be “the first part of the UK to make it mandatory to register any use of AI within the public sector”, which it effectively did with the AI Register mandate. Culturally, there’s a strong emphasis on public dialogue – the Scottish Government and AI Alliance have run citizens’ forums and published accessible explainers on AI, reflecting an ethos of demystifying AI for the public.

Wales, while not having a standalone AI strategy, embedded AI considerations into its Digital Strategy for Wales (2021). The Welsh Government highlights using AI “ethically and with integrity” to improve public services. A distinctive element in Wales is the social partnership model, which we saw in their guidance on algorithmic workforce management. Wales is also interested in AI for the Welsh language – supporting technology development so that voice assistants and translation tools work for Welsh speakers. In healthcare, Life Sciences Hub Wales set up an AI in Health and Social Care programme to evaluate and pilot AI solutions in the NHS Wales context. While Wales generally aligns with UK policy (it participates in the UK’s AI governance consultations and is covered by UK regulator decisions), it is keen on inclusive innovation – ensuring rural and valley communities, not just Cardiff or Swansea, gain from AI-driven economic growth. The FinTech Wales report in 2023 even touted how Wales could ride the UK’s AI wave to create thousands of jobs in AI and data, provided there’s investment in skills and regional tech hubs. This shows Wales positioning itself to benefit from the UK’s national AI push, while championing worker rights and linguistic/cultural considerations.

Northern Ireland is actively shaping its approach to artificial intelligence despite recent political instability and the absence of a fully functioning Executive. While a formal AI strategy has yet to be published, recent initiatives signal growing momentum. One such milestone was the AI Castle Conversation at Hillsborough Castle, hosted by the Artificial Intelligence Collaboration Centre (AICC) with support from Invest NI and the Department for the Economy, aimed at informing future AI priorities across government, industry, and academia. The region has strong foundations in AI and cybersecurity, with hubs in Belfast and Derry supported by Queen’s University Belfast, Ulster University, and the Centre for Secure Information Technologies (CSIT). These strengths are further underpinned by the Northern Ireland Cyber Security Centre and the Northern Ireland Cyber AI Hub, whose progress was evaluated in a UK government report published in January 2025. The Department for the Economy continues to back AI and data science through innovation programmes, while Northern Irish researchers contribute to UK-wide projects funded by UKRI and local firms benefit from national AI funding. Distinct strengths in FinTech and AI-driven cybersecurity position Northern Ireland as a key regional contributor to the UK’s broader AI ecosystem.

In summary, while the central UK government sets much of the regulatory tone, devolved governments add their flavour and sometimes push further on certain values. Scotland leads in formalising transparency and public trust measures; Wales leads in worker-centric ethical AI use; Northern Ireland, despite challenges, is an important part of the UK’s AI research and industry landscape, especially in security applications. All devolved administrations share an interest in ensuring Westminster’s policies account for their needs – for instance, ensuring the “AI benefits all sectors and regions” goal of the National AI Strategy isn’t England-centric. Indeed, the National AI Strategy explicitly referenced strengthening AI outside the South East of England, and devolved engagement is key to that. A practical example: the UK government funded AI innovation clusters, including one in Scotland (the Edinburgh AI hub leveraging University of Edinburgh’s strengths) and supported AI research centres that collaborate with Welsh universities – showing an intent to spread the AI boom.

Politically, there is general alignment between UK and devolved governments on AI being an opportunity to seize. Even when different parties are in power (e.g. the SNP in Scotland or Labour in Wales), AI is seen largely as a non-partisan issue where collaboration is beneficial. That said, nuances appear: Scottish ministers have occasionally hinted they’d prefer the UK to take a slightly more precautionary approach akin to the EU’s, given Scotland’s emphasis on ethics. But they have also welcomed the UK’s global AI summit leadership, with Scottish experts participating in those efforts. Wales has perhaps been most directly supportive of the UK’s balanced approach, as long as it can incorporate Welsh perspectives.

In conclusion, the devolved nations act as both partners and laboratories in the UK’s AI journey – piloting ideas like an AI register or workforce guidelines that, if successful, could inform policy across the UK. They ensure the UK’s AI strategy isn’t one-size-fits-all, and that regional innovation and ethical priorities feed into the national narrative.

 

Partnerships with Industry

From the outset, the UK’s AI strategy has hinged on a close partnership between government, academia, and the private sector. Unlike some countries where AI policy is more state-driven, Britain’s approach has been to engage industry experts at every step – both to harness their expertise and to shape policies that industry will buy into. This collaboration can be seen in the composition of advisory bodies, co-created initiatives on standards and ethics, and in the very philosophy of “pro-innovation” regulation which largely trusts companies to act responsibly under light oversight.

One of the earliest examples is the AI Council itself: an expert committee populated by tech industry leaders (from companies like Google DeepMind, Microsoft, etc.), top academics, and other stakeholders. The AI Council’s influence on the 2021 strategy ensured that industry perspectives – on needed skills, on research investment, on not over-regulating nascent AI – were baked into the plan. After delivering the roadmap and seeing the strategy launched, the AI Council’s formal role wound down by 2023 (its fixed term ended, and the government chose not to renew the independent advisory board). But its function was somewhat replaced by a more direct line between industry and ministers: for instance, Matt Clifford, a tech entrepreneur (co-founder of Entrepreneur First) with deep ties in the startup ecosystem, became an advisor to the Prime Minister on AI in late 2024. Clifford was tasked with identifying how the UK could seize AI opportunities and was soon leading the development of the AI Opportunities Action Plan. This highlights a shift towards embedding industry experts within government decision-making rather than in arm’s-length advisory boards.

Throughout 2022–2025, the government has frequently convened roundtables and working groups with industry on AI topics – be it consulting tech firms on the AI White Paper, or bringing in AI startups to discuss barriers they face. The resulting policies – like the flexible regulatory approach – have often been warmly welcomed by companies as avoiding heavy compliance burdens. Indeed, when the 2023 White Paper was released, many UK AI companies and trade groups like techUK praised it for its light-touch nature and for “listening to industry concerns” about not stifling innovation. This buy-in is important: the government is relying on companies to voluntarily implement best practices (since enforcement is relatively hands-off). So far, many large companies have at least outwardly committed to principles of responsible AI, and the UK government has leveraged those commitments. For example, several major AI developers publicly supported the Bletchley Declaration principles of safe and trustworthy AI, and their agreement to model testing access (mentioned earlier) is a very concrete industry pledge.

On the development side, the UK has continued to invest in public-private research partnerships. Funding programmes via UK Research and Innovation (UKRI) have supported joint projects between universities and companies, such as explainable AI methods for finance or AI for sustainable agriculture. The Turing AI Fellowships (named after Alan Turing) have attracted top researchers (often in partnership with industry) to work on cutting-edge AI in the UK. The goal is twofold: advance AI capabilities and embed a culture of ethics and safety from the start. Private companies often co-sponsor these projects, contributing data or computing resources. The government has also launched several initiatives to support AI startups and integrate AI solutions into public services. For example, Innovate UK’s BridgeAI programme provides funding for AI projects addressing sector-specific challenges, and over £7 million has been allocated to trial AI tools aimed at enhancing productivity in sectors like agriculture and transport. These efforts reflect the government’s commitment to fostering AI entrepreneurship and maintaining the UK’s position as a global AI hub.

Standards and safety efforts have also been co-driven with industry. The AI Standards Hub, launched in October 2022, is a partnership: led by the Alan Turing Institute (which bridges academia and industry), with the British Standards Institution (BSI) and the National Physical Laboratory (NPL) involved, and backed by government funding. Its mission is to bring experts together to shape technical standards for AI that can be adopted internationally. Over its first year, the Hub convened workshops with industry representatives to identify priority areas for standards (like transparency in AI systems, or safety requirements for autonomous vehicles). By 2023, it had input into the development of ISO and IEEE AI standards, effectively giving UK companies a voice in those global processes. This kind of industry collaboration ensures that any standards are practical and that UK firms are well-prepared (or even advantaged) by them. An independent review on the one-year anniversary found that stakeholders saw the UK as a global leader in AI standards, thanks in part to the Hub’s work.

When it comes to AI ethics, earlier institutions like the CDEI engaged extensively with industry (hosting advisory panels on topics like bias in insurance algorithms, where insurers took part). As the CDEI transitions into the Responsible Technology Adoption Unit (RTAU), it continues to involve companies in creating tools – for example, developing an Algorithmic Bias Mitigation toolkit with input from tech firms and think tanks, so organisations can self-assess their AI for biases. This collaborative ethos extends to governance experiments: the FCA’s TechSprint events on AI and financial crime invited RegTech startups to demonstrate solutions to regulators. Similarly, the ICO’s sandbox allowed companies like Google and fintechs to test innovative AI uses under regulator guidance, giving regulators insight and companies feedback. These forums build mutual understanding – regulators learn the technology’s realities; companies learn the regulators’ expectations – ideally leading to smarter self-regulation by industry.

The UK government has also partnered with industry to build the infrastructure needed for advanced AI development. Recognising that cutting-edge models require vast computational power, the Chancellor’s 2023 Budget committed £900 million to create a new AI Research Resource (AIRR), including a next-generation exascale supercomputer. The aim is to make world-class compute accessible to both academic institutions and AI developers, reducing barriers to experimentation and de-risking private R&D investment – companies are more likely to develop breakthrough models in the UK if they have access to world-class compute and talent. This commitment was reaffirmed and expanded in the AI Opportunities Action Plan (January 2025), which described the AIRR as critical national infrastructure to support model training and evaluation across sectors, and in March 2025 DSIT issued a formal request for information from prospective partners to help scale it up, confirming plans for a federated compute model, secure access controls, and multi-stakeholder use. Implementation involves collaboration with hardware leaders such as NVIDIA (which dominates GPU provision for AI workloads) and UK-based chip firms like Graphcore. While publicly funded, the AIRR is designed as a shared public good: a platform that gives researchers and businesses access to compute at scale, boosting Britain’s competitiveness in training and testing large models within its own borders.

In a parallel move to support responsible innovation through strategic partnerships, in February 2025, the UK government signed a Memorandum of Understanding (MoU) with Anthropic, the AI safety company behind the Claude language model. Signed by Peter Kyle, Secretary of State for Science, Innovation and Technology, and Dario Amodei, Anthropic’s CEO, the agreement set out plans to explore the use of frontier AI systems to improve public service delivery and inform AI policy. It also outlined shared priorities around safe deployment, situational awareness, and support for the wider innovation ecosystem — including through Anthropic’s tools such as the Economic Index. The MoU underscores the government’s emphasis on public-private collaboration as a key pillar of the UK’s AI strategy.

The government has positioned the UK as “the best place to start and grow an AI company” by combining targeted public investment with a light-touch, pro-innovation regulatory framework. Indeed, by 2025 the UK has seen notable investments: Google’s DeepMind remains in London and is expanding, OpenAI chose to open its first international office in London in 2023, and numerous AI startups have sprung up in “AI clusters” from Cambridge to Edinburgh. The government often works in tandem with these players – for example, DeepMind’s CEO Demis Hassabis was tapped as an advisor to the Frontier AI Taskforce, bringing insider expertise. And companies like OpenAI participated in drafting safety measures (OpenAI’s CEO Sam Altman engaged with UK regulators and ministers, even as OpenAI benefited from the UK’s non-restrictive regime compared to potential EU rules).

Ethically, the presence of companies in advisory roles draws close scrutiny from independent voices: consumer groups and academics warn against “regulatory capture” – the risk that industry unduly shapes rules in its favour. The UK government has tried to balance this by also funding independent research (e.g. through academia or bodies like the Ada Lovelace Institute) and by inviting civil society into consultations. Still, there is no denying the UK’s approach is industry-friendly. The bet is that responsible innovation will be driven more effectively by cooperation than by confrontation. The Bletchley Park summit’s voluntary agreements with industry exemplify this – rather than mandating that companies hand over AI models (as the US is edging towards via its Executive Order), the UK got them to agree to do so, arguably more quickly and with less resistance.

Finally, many earlier structures have been superseded or have evolved in this partnering journey. The Office for AI, initially a small policy unit, is now part of DSIT’s much larger tech directorate – its role taken over by DSIT ministers and advisors interacting directly with CEOs and researchers. The AI Council’s legacy lives on in the ongoing dialogue between government and the AI community, even if the formal council is gone. The CDEI’s transformation into the RTAU in 2024 symbolises a shift from arm’s-length ethical reflection to hands-on work with industry to implement trustworthy AI tools. In other words, what was once an external “nag” on ethics is now an internal team embedding ethics into deployments. This could be seen as the government bringing ethics closer to the engine room, but it also means the loss of an independent watchdog function – something to watch in future.

Overall, the UK’s partnership approach has yielded a narrative in which companies are allies in delivering the national AI strategy, rather than adversaries to be regulated. Companies have helped shape policies like the pro-innovation framework, and in turn government support has helped companies – via funding, talent initiatives, and global promotion of the UK as an AI-friendly jurisdiction. This symbiosis is evident in the January 2025 announcement that leading tech firms had committed £14 billion of investment and over 13,000 jobs in the UK following the AI Action Plan’s launch. It’s a virtuous cycle the government is keen to advertise: good policy attracts industry investment, which creates jobs and advances AI, which then justifies the policy. The challenge will be ensuring that this closeness doesn’t lead to blind spots in regulating genuine harms – a balance the UK will continue to navigate as its AI sector, now booming, inevitably produces both success stories and controversies.

 

From National Strategy to AI Action Plan

By the start of 2025, the UK’s approach to AI had come full circle to a degree – back to a focus on maximising opportunities and economic growth, but now informed by the lessons and structures built over the previous four years. In January 2025, the government (now under a new Prime Minister) announced a comprehensive AI Opportunities Action Plan as part of a broader agenda to “deliver a decade of national renewal”. This action plan is essentially an ambitious blueprint to embed AI into every sector of the economy and public services, reflecting a confidence that the groundwork on safety and governance has been laid.

What changed by 2025 was partly political leadership and tone. The Action Plan, spearheaded by advisor Matt Clifford, contained 50 recommendations to turbocharge AI adoption across government and industry. The new Prime Minister embraced all 50, signalling a decisive acceleration. It was described as a “marked move from the previous government’s approach” – suggesting an intent to go further and faster than before. In practical terms, the plan includes creating “AI Growth Zones” (areas with special incentives and streamlined planning for AI companies), injecting AI into infrastructure projects, and driving use of AI in education, healthcare, and local government at scale. For example, it envisions AI helping speed up urban planning consultations, spotting potholes via computer vision to fix roads, and acting as teaching assistants to reduce teachers’ admin load. The underlying goal is to boost productivity – with references to IMF estimates that fully embracing AI could raise the UK’s productivity growth by 1.5 percentage points annually. In a country that has struggled with productivity, this is a tantalising prospect of higher growth and living standards.

The Action Plan doesn’t jettison the safety or ethical considerations but places them within a narrative of Britain going “all-in” on AI potential. It underscores that blockers to AI deployment will be removed, departments will be held accountable for actually implementing AI projects (not just talking about them), and the workforce will be supported to adapt. Notably, Matt Clifford’s appointment as the PM’s AI Opportunities Advisor and Demis Hassabis’s role as an expert advisor show the continuing reliance on tech luminaries to guide policy. Clifford himself stated the plan “puts us all-in – backing the potential of AI to grow our economy, improve lives for citizens, and make us a global hub for AI investment”. This captures the optimistic ethos: the UK, having spent years creating a permissive yet responsible environment, now wants to unleash AI fully to reap dividends.

Of course, “unleashing” comes with the caveat that it should be done right. The initiatives around regulation, safety, standards, and public sector ethics discussed above are the guardrails that make this big push palatable. The government can encourage civil servants to use AI assistants only because it has, in parallel, put transparency and accountability rules in place. It can fast-track AI infrastructure with Growth Zones because it also has an eye on security and safety testing for frontier AI. In fact, the Action Plan explicitly builds on earlier foundations: it cites the pro-innovation regulatory framework as something to be carried forward (now with an expectation that regulators report on their progress and possibly consider future statutory backing for frontier AI oversight). It also references the outcomes of the AI Safety Summit, leveraging the UK’s global leadership to signal that, even as the country pushes AI deployment, it remains cognisant of frontier risks.

Another evolution by 2025 is institutional continuity. DSIT remains the orchestrator of AI strategy, but there is stronger coordination from the centre (No.10), with the PM’s office directly driving cross-government efforts. This is necessary to overcome what was sometimes a siloed approach – the Action Plan effectively compels all departments to adopt AI where beneficial. In doing so, it perhaps marks the supersession of the 2021 AI Strategy by a more operational plan. Strategies often set vision; this 2025 plan is about execution and tangible outcomes. For example, where the 2021 strategy talked about “AI for public good” in general terms, the 2025 plan commits to specific projects like AI-powered NHS diagnostic centres and AI-assisted border security checks.

The 2025 Action Plan also has political significance. For the newly elected government, it signals a commitment to modernisation, innovation-led growth, and a future-ready public sector at a time when public interest in AI remains high.

Looking over 2021 to 2025, the story arc is clear: the UK moved from strategy formation and initial capacity-building to agile regulation and global thought-leadership, and now to a full-court press on implementation and uptake. The National AI Strategy provided the compass, the pro-innovation framework drew the map, the global safety work built international guardrails, and now the AI Opportunities Action Plan is the vehicle to drive forward. The UK’s narrative blends enthusiasm for what AI can do – economically and for society – with a recognition (sometimes hard-won) that governance, security, and ethics cannot be afterthoughts.

By April 2025, the UK stands as a country trying to lead in both innovation and regulation of AI, a delicate balancing act. It has avoided heavy-handed rules that might scare off investment, yet also avoided the laissez-faire Wild West that could undermine public trust. It has shown it can convene superpowers to talk about AI risks, while at home rolling out AI in classrooms and clinics. Challenges remain aplenty: ensuring the regulatory principles actually bite when needed, keeping public trust as AI systems become widespread, maintaining an edge in AI research and talent against fierce global competition, and reacting nimbly to whatever new breakthrough (or crisis) AI brings next. But as of 2025, the UK has crafted a distinctive approach – “pro-innovation and pro-safety” in equal measure – and a narrative that AI, done the British way, will be a boon for both the economy and society.
