Brits Demand Stricter AI Regulations Amid Safety Concerns

Public Concerns Over Advanced AI in the UK

As the race to develop more powerful artificial intelligence systems accelerates in Silicon Valley, public opinion in the UK presents a stark contrast. A recent poll reveals that the majority of the British population harbors significant skepticism regarding the influence of tech CEOs on AI regulation and expresses deep concerns about the safety of emerging AI technologies.

Key Findings from the Poll

According to the poll, shared exclusively with TIME, an overwhelming 87% of British respondents support legislation requiring AI developers to demonstrate that their systems are safe before releasing them to the public. Moreover, 60% of those surveyed back a ban on the development of “smarter-than-human” AI models.

Only 9% of respondents said they trust tech CEOs to act in the public interest when weighing in on AI regulation, underscoring a widening gap between the public’s anxieties and the regulatory safeguards currently in place.

Growing Public Anxiety

The survey results reflect escalating public fears about the consequences of developing AI systems that could outperform humans in various tasks. Although such technology does not yet exist, many prominent AI companies, including OpenAI, Google, Anthropic, and Meta, are actively pursuing this goal. Some tech leaders predict that these advanced systems could become a reality within a few years.

In light of these developments, 75% of surveyed Britons believe that laws should explicitly prohibit the creation of AI systems capable of escaping their environments. Furthermore, 63% support measures to prevent the development of AI systems that can autonomously enhance their intelligence or capabilities.

Regulatory Challenges in the UK

The findings from the UK poll resonate with similar surveys conducted in the United States, highlighting a significant gap between public sentiment and existing regulatory frameworks. The European Union’s AI Act, which is considered the most extensive AI legislation globally, does not adequately address the risks associated with AI systems that could match or exceed human capabilities.

Currently, the UK lacks a comprehensive regulatory framework for AI. The Labour Party, now in government, committed to introducing new AI rules ahead of the 2024 general election, but progress has stalled, and British Prime Minister Keir Starmer has recently signaled a shift in focus towards integrating AI into the economy rather than regulating it.

Andrea Miotti, the executive director of Control AI, expressed concern over this shift, stating, “It seems like they’re sidelining their promises at the moment, for the shiny attraction of growth.” The public, he emphasized, has been clear about what it wants from AI development: safety and regulation.

A Call for New Legislation

Accompanying the poll results was a statement signed by 16 British lawmakers from both major political parties. This statement urged the government to enact new laws specifically targeting superintelligent AI systems—those that could surpass human intelligence.

The lawmakers asserted, “Specialized AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security.” They believe that the UK can harness the benefits of AI while mitigating risks through binding regulations on the most powerful systems.

Miotti argues that the UK does not need to sacrifice economic growth by imposing broad regulations similar to those in the EU. Instead, he advocates for “narrow, targeted, surgical AI regulation” that would focus specifically on high-risk AI models.

Public Support for Regulatory Institutions

The polling data further indicates that 74% of Brits support the Labour Party’s pledge to establish the AI Safety Institute (AISI) as a regulatory authority. The AISI already tests private AI models before their release, but it lacks the power to enforce changes or to block the release of dangerous models.

As discussions around AI regulation continue, it is clear that the British public is eager for decisive action to ensure that advancements in technology do not outpace the necessary safeguards to protect society.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...