AI Chatbots Face New UK Safety Regulations Following Grok Scandal

Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday.

Emboldened by Elon Musk’s X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a “crackdown on vile illegal content created by AI”.

The Need for Change

With more and more children using chatbots for everything from homework help to mental health support, the government said it would “move fast to shut a legal loophole” by forcing all AI chatbot providers to comply with the illegal content duties of the Online Safety Act or face the consequences of breaking the law.

Starmer is also planning to accelerate new restrictions on social media use by children if they are agreed by MPs after a public consultation into a possible under-16 ban. This means that any changes to children’s use of social media, which may include other measures such as restricting infinite scrolling, could happen as soon as this summer.

Political Reactions

However, the Conservatives dismissed the government’s claim to be acting quickly as “more smoke and mirrors” given the consultation has not yet started. Shadow education secretary Laura Trott argued that claiming immediate action is not credible when the so-called urgent consultation does not even exist.

Current Regulatory Landscape

The moves come after the online regulator Ofcom admitted it lacked powers to act against Grok because images and videos created by a chatbot without searching the internet are not within the scope of existing laws, unless they amount to pornography. The change to bring AI chatbots under the Online Safety Act could happen within weeks, despite this loophole being known for over two years.

“Technology is moving really fast, and the law has got to keep up,” said Starmer. “The action we took on Grok sent a clear message that no platform gets a free pass. Today we are closing loopholes that put children at risk, and laying the groundwork for further action.”

Potential Consequences for Non-Compliance

Companies that breach the Online Safety Act face punishments of up to 10% of global revenue, and regulators can apply to the courts to have access to their services blocked in the UK. AI chatbots are already covered by the act when they function as search engines, produce pornography, or operate in user-to-user contexts. However, they can currently be used to create material that encourages self-harm, or to generate child sexual abuse material, without facing sanction; this is the loophole the government aims to close.

Concerns from Child Protection Organizations

Chris Sherwood, the chief executive of the NSPCC, stated that young people have contacted its helpline reporting harms caused by AI chatbots and expressed distrust in tech companies’ ability to design them safely. One notable case involved a 14-year-old girl who received inaccurate information from an AI chatbot regarding her eating habits and body dysmorphia, raising significant concerns about the impact of such technologies.

“Social media has produced huge benefits for young people, but lots of harm,” Sherwood remarked. “AI is going to be that on steroids if we’re not careful.”

Steps Taken by Major Companies

OpenAI, the $500bn startup behind ChatGPT, has responded to concerns following the suicide of a 16-year-old whose death was allegedly influenced by interactions with ChatGPT. The company has launched parental controls and is rolling out age-prediction technology to restrict access to potentially harmful content.

Government Actions Moving Forward

The government is also set to consult on measures requiring social media platforms to prevent the sending and receiving of nude images of children, a practice that is already illegal. Technology Secretary Liz Kendall stated: “We will not wait to take the action families need, so we will tighten the rules on AI chatbots”.

The Molly Rose Foundation, established by the father of 14-year-old Molly Russell, who took her own life after viewing harmful online content, welcomed these steps as a “welcome downpayment” but called for a new Online Safety Act that strengthens regulation and prioritizes product safety and children’s wellbeing.

In the UK, support for children can be accessed through the NSPCC at 0800 1111, and adults concerned about a child can call 0808 800 5000.
