Brits Demand Stricter AI Regulations Amid Safety Concerns

Public Concerns Over Advanced AI in the UK

As the race to develop more powerful artificial intelligence systems accelerates in Silicon Valley, public opinion in the UK presents a stark contrast. A recent poll finds that most Britons are deeply skeptical of tech CEOs' influence over AI regulation and seriously concerned about the safety of emerging AI technologies.

Key Findings from the Poll

According to the poll, shared exclusively with TIME, an overwhelming 87% of British respondents support legislation requiring AI developers to prove the safety of their systems before releasing them to the public. Moreover, 60% of those surveyed advocate a ban on the development of “smarter-than-human” AI models.

Only 9% of respondents expressed trust in tech CEOs to prioritize the public interest when discussing AI regulation. This points to a growing disconnect between public anxiety and the regulatory measures currently in place.

Growing Public Anxiety

The survey results reflect escalating public fears about the consequences of developing AI systems that could outperform humans in various tasks. Although such technology does not yet exist, many prominent AI companies, including OpenAI, Google, Anthropic, and Meta, are actively pursuing this goal. Some tech leaders predict that these advanced systems could become a reality within a few years.

In light of these developments, 75% of surveyed Britons believe that laws should explicitly prohibit the creation of AI systems capable of escaping their environments. Furthermore, 63% support measures to prevent the development of AI systems that can autonomously enhance their intelligence or capabilities.

Regulatory Challenges in the UK

The findings from the UK poll resonate with similar surveys conducted in the United States, highlighting a significant gap between public sentiment and existing regulatory frameworks. The European Union’s AI Act, which is considered the most extensive AI legislation globally, does not adequately address the risks associated with AI systems that could match or exceed human capabilities.

Currently, the UK lacks a comprehensive regulatory framework for AI. Although the governing Labour Party pledged ahead of the 2024 general election to introduce new AI regulations, progress has since stalled. British Prime Minister Keir Starmer recently signaled a shift in focus towards integrating AI into the economy rather than emphasizing regulation.

Andrea Miotti, the executive director of Control AI, expressed concern over this shift, stating, “It seems like they’re sidelining their promises at the moment, for the shiny attraction of growth.” Miotti emphasizes that the public is clear about its desire for regulation and safety in AI development.

A Call for New Legislation

Accompanying the poll results was a statement signed by 16 British lawmakers from both major political parties. This statement urged the government to enact new laws specifically targeting superintelligent AI systems—those that could surpass human intelligence.

The lawmakers asserted, “Specialized AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security.” They believe that the UK can harness the benefits of AI while mitigating risks through binding regulations on the most powerful systems.

Miotti argues that the UK does not need to sacrifice economic growth by imposing broad regulations similar to those in the EU. Instead, he advocates for “narrow, targeted, surgical AI regulation” that would focus specifically on high-risk AI models.

Public Support for Regulatory Institutions

The polling data further indicates that 74% of Brits support the Labour Party’s pledge to establish the AI Safety Institute (AISI) as a regulatory authority. Although the AISI currently tests AI models from private companies prior to their release, it lacks the power to require changes or block the release of dangerous models.

As discussions around AI regulation continue, it is clear that the British public is eager for decisive action to ensure that advancements in technology do not outpace the necessary safeguards to protect society.
