Brits Demand Stricter AI Regulations Amid Safety Concerns

Public Concerns Over Advanced AI in the UK

As the race to develop more powerful artificial intelligence systems accelerates in Silicon Valley, public opinion in the UK presents a stark contrast. A recent poll reveals that the majority of the British population harbors significant skepticism regarding the influence of tech CEOs on AI regulation and expresses deep concerns about the safety of emerging AI technologies.

Key Findings from the Poll

According to the poll, shared exclusively with TIME, an overwhelming 87% of British respondents support legislation that would require AI developers to demonstrate the safety of their systems before releasing them to the public. Moreover, 60% of those surveyed favor a ban on the development of “smarter-than-human” AI models.

Only 9% of respondents said they trust tech CEOs to act in the public interest when discussing AI regulation, pointing to a widening gap between public anxiety and the regulatory actions currently in place.

Growing Public Anxiety

The survey results reflect escalating public fears about the consequences of developing AI systems that could outperform humans in various tasks. Although such technology does not yet exist, many prominent AI companies, including OpenAI, Google, Anthropic, and Meta, are actively pursuing this goal. Some tech leaders predict that these advanced systems could become a reality within a few years.

In light of these developments, 75% of surveyed Britons believe that laws should explicitly prohibit the creation of AI systems capable of escaping their environments. Furthermore, 63% support measures to prevent the development of AI systems that can autonomously enhance their intelligence or capabilities.

Regulatory Challenges in the UK

The findings from the UK poll resonate with similar surveys conducted in the United States, highlighting a significant gap between public sentiment and existing regulatory frameworks. The European Union’s AI Act, which is considered the most extensive AI legislation globally, does not adequately address the risks associated with AI systems that could match or exceed human capabilities.

The UK currently lacks a comprehensive regulatory framework for AI. Although the ruling Labour Party committed to introducing new AI regulations ahead of the 2024 general election, progress has since stalled, and Prime Minister Keir Starmer has recently signaled a shift in focus toward integrating AI into the economy rather than regulating it.

Andrea Miotti, the executive director of Control AI, expressed concern over this shift, stating, “It seems like they’re sidelining their promises at the moment, for the shiny attraction of growth.” Miotti emphasizes that the public has been clear about what it wants: regulation and safety in AI development.

A Call for New Legislation

Accompanying the poll results was a statement signed by 16 British lawmakers from both major political parties. This statement urged the government to enact new laws specifically targeting superintelligent AI systems—those that could surpass human intelligence.

The lawmakers asserted, “Specialized AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security.” They believe that the UK can harness the benefits of AI while mitigating risks through binding regulations on the most powerful systems.

Miotti argues that the UK does not need to sacrifice economic growth by imposing broad regulations similar to those in the EU. Instead, he advocates for “narrow, targeted, surgical AI regulation” that would focus specifically on high-risk AI models.

Public Support for Regulatory Institutions

The polling data further indicates that 74% of Britons support the Labour Party’s pledge to establish the AI Safety Institute (AISI) as a regulatory authority. Although the AISI currently tests private AI models prior to their release, it lacks the power to require changes or to block the release of dangerous models.

As discussions around AI regulation continue, it is clear that the British public is eager for decisive action to ensure that advancements in technology do not outpace the necessary safeguards to protect society.
