UK Government’s New AI Strategy
The UK government has outlined a two-pronged approach, led by Tech Minister Liz Kendall, to strengthen the nation’s position in artificial intelligence. The plan focuses on supporting domestic AI companies—particularly in AI hardware—and on collaborating with international partners to set global standards for AI deployment.
Key Shifts in Policy
1. Support for British AI Companies: The government will back firms that excel in areas where the UK has strong expertise, such as AI chip design and manufacturing. A new AI Hardware Plan is slated for launch at London Tech Week in June, with an ambition to capture 5% of the global AI chip market.
2. International Standards Coordination: The UK will work closely with other “middle-power” nations to develop and disseminate best-practice guidelines for AI model evaluation. The government intends to publish these guidelines in July at a meeting of the International Network of AI Security Institutes, which the UK chairs.
International Network of AI Security Institutes
Established in November 2024, the network includes Australia, Canada, the EU, France, Japan, Kenya, the Republic of Korea, Singapore, the UK, and the US. The UK’s AI Security Institute—created as the AI Safety Institute under the previous Sunak administration—acts as the Network Coordinator and is recognized as a leader in AI safety testing.
The forthcoming guidance will aim to help member institutes conduct robust safety tests on AI models before and after deployment, promoting a high global standard for AI safety.
Domestic Context and Legislative Background
Previous government actions have emphasized AI safety:
- At the inaugural AI Safety Summit in November 2023, the UK coordinated a voluntary commitment among major AI developers and nations to test models pre‑ and post‑deployment.
- In July 2024, the Labour government announced plans for legislation that would require AI developers to share testing data with the government, though these proposals were later shelved.
Despite the shelving of binding legislation, the current strategy reaffirms the UK’s focus on rigorous evaluation of AI models as a cornerstone of responsible AI deployment.
Comparison with the EU AI Act
The EU AI Act prioritises the protection of fundamental rights and categorises AI systems by risk. While the UK’s approach also aims to protect the public, it places greater emphasis on technical safety testing and collaboration with AI developers than on a rights‑based regulatory framework.
Implications for the Future
The upcoming AI Hardware Plan and the international best-practice publication could position the UK as a global hub for safe and secure AI innovation. Stakeholders should monitor the outcomes of the July meeting, as they will indicate whether the guidance remains voluntary or moves toward formal compliance mechanisms.