Starmer to Extend Online Safety Rules to AI Chatbots After Grok Scandal
Makers of AI chatbots that put children at risk will face massive fines or even see their services blocked in the UK under law changes to be announced by Keir Starmer on Monday.
Emboldened by Elon Musk’s X stopping its Grok AI tool from creating sexualised images of real people in the UK after public outrage last month, ministers are planning a “crackdown on vile illegal content created by AI”.
The Need for Change
With more and more children using chatbots for everything from homework help to mental health support, the government indicated it would “move fast to shut a legal loophole” and force all AI chatbot providers to comply with the illegal content duties in the Online Safety Act or face legal consequences.
Starmer is also planning to accelerate new restrictions on social media use by children if they are agreed by MPs after a public consultation into a possible under-16 ban. This means that any changes to children’s use of social media, which may include other measures such as restricting infinite scrolling, could happen as soon as this summer.
Political Reactions
However, the Conservatives dismissed the government’s claim to be acting quickly as “more smoke and mirrors” given the consultation has not yet started. Shadow education secretary Laura Trott argued that claiming immediate action is not credible when the so-called urgent consultation does not even exist.
Current Regulatory Landscape
The moves come after the online regulator Ofcom admitted it lacked powers to act against Grok because images and videos created by a chatbot without searching the internet are not within the scope of existing laws, unless they amount to pornography. The change to bring AI chatbots under the Online Safety Act could happen within weeks, despite this loophole being known for over two years.
“Technology is moving really fast, and the law has got to keep up,” said Starmer. “The action we took on Grok sent a clear message that no platform gets a free pass. Today we are closing loopholes that put children at risk, and laying the groundwork for further action.”
Potential Consequences for Non-Compliance
Companies that breach the Online Safety Act can face fines of up to 10% of global revenue, and regulators can apply to courts to block access to their services in the UK. AI chatbots are already covered by the act if they are used specifically as search engines, used to produce pornography, or operated in user-to-user contexts. However, they can currently be used to create material that encourages self-harm, or to generate child sexual abuse material, without their providers facing sanction — this is the loophole the government aims to close.
Concerns from Child Protection Organizations
Chris Sherwood, the chief executive of the NSPCC, stated that young people have contacted its helpline reporting harms caused by AI chatbots and expressed distrust in tech companies’ ability to design them safely. One notable case involved a 14-year-old girl who received inaccurate information from an AI chatbot regarding her eating habits and body dysmorphia, raising significant concerns about the impact of such technologies.
“Social media has produced huge benefits for young people, but lots of harm,” Sherwood remarked. “AI is going to be that on steroids if we’re not careful.”
Steps Taken by Major Companies
OpenAI, the $500bn startup behind ChatGPT, has responded to concerns following the suicide of a 16-year-old that was allegedly influenced by interactions with ChatGPT. The company has launched parental controls and is rolling out age-prediction technology to restrict access to potentially harmful content.
Government Actions Moving Forward
The government is also set to consult on measures that would prevent social media platforms from being used to send and receive nude images of children, a practice that is already illegal. Technology Secretary Liz Kendall stated: “We will not wait to take the action families need, so we will tighten the rules on AI chatbots.”
The Molly Rose Foundation, established by the father of 14-year-old Molly Russell, who took her own life after viewing harmful online content, described these steps as a “welcome downpayment” but called for a new Online Safety Act that strengthens regulation and prioritises product safety and children’s wellbeing.
In the UK, support for children can be accessed through the NSPCC at 0800 1111, and adults concerned about a child can call 0808 800 5000.