UK Sets New Standards for AI Deployment

UK Government’s New AI Strategy

The UK government, led by Tech Minister Liz Kendall, has outlined a two-pronged approach to strengthen the nation’s position in artificial intelligence. The plan focuses on supporting domestic AI companies—particularly in AI hardware—and on collaborating with international partners to set global standards for AI deployment.

Key Shifts in Policy

1. Support for British AI Companies: The government will back firms that excel in areas where the UK has strong expertise, such as AI chip design and manufacturing. A new AI Hardware Plan is slated for launch at London Tech Week in June, with an ambition to capture 5% of the global AI chip market.

2. International Standards Coordination: The UK will work closely with other “middle-power” nations to develop and disseminate best-practice guidelines for AI model evaluation. In July, at the meeting of the International Network of AI Security Institutes (which the UK chairs), the government intends to publish these guidelines.

International Network of AI Security Institutes

Established in November 2024, the network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the US. The UK’s AI Security Institute, created under the previous Sunak administration, acts as the Network Coordinator and is recognized as a leader in AI safety testing.

The forthcoming guidance will aim to help member institutes conduct robust safety tests on AI models before and after deployment, promoting a high global standard for AI safety.

Domestic Context and Legislative Background

Previous government actions have emphasized AI safety:

  • At the inaugural AI Safety Summit in November 2023, the UK coordinated a voluntary commitment among major AI developers and nations to test models pre‑ and post‑deployment.
  • In July 2024, the Labour government announced plans for binding legislation that would require AI developers to share safety-testing data with the government, though these proposals were later shelved.

Despite the shelving of binding legislation, the current strategy reaffirms the UK’s focus on rigorous evaluation of AI models as a cornerstone of responsible AI deployment.

Comparison with the EU AI Act

The EU AI Act prioritises the protection of fundamental rights and categorises AI systems by risk. While the UK’s approach likewise aims to protect the public, it places greater emphasis on technical safety testing and collaboration with AI developers than on a rights‑based regulatory framework.

Implications for the Future

The upcoming AI Hardware Plan and the international best-practice publication could position the UK as a global hub for safe and secure AI innovation. Stakeholders should monitor the July meeting outcomes, as they will indicate whether the guidance will be voluntary or move toward formal compliance mechanisms.
