Ofcom’s AI Investigations: Addressing Online Safety Challenges

Ofcom’s Investigations into AI Platforms: The Online Safety Act’s Framework

Since the Online Safety Act (OSA) came into force less than a year ago, Ofcom has launched investigations into more than 90 platforms and issued six fines for non-compliance, including penalties against an AI nudification site for failing to have robust age checks in place. Recently, Ofcom has opened two new investigations into generative AI services: X (in relation to its Grok AI chatbot) and Novi Ltd (in relation to its Joi.com service). Both investigations concern alleged failures to comply with duties under the OSA, illustrating a notable shift in the regulator’s enforcement focus towards AI-powered platforms.

How AI Chatbots Fall Under the Online Safety Act

As the use of chatbot technology increases, so does the potential for harm. Under the OSA, a chatbot that meets the Act’s definitions of a regulated service or forms part of one is covered by the Act’s rules. Crucially, any AI-generated content shared by users on a user-to-user service is classified as user-generated content and is regulated in the same way as content created by humans. This means, for example, that a social media post containing harmful imagery produced by AI is subject to the same regulations as similar content created by a person.

However, some chatbots are not covered by the Act. Chatbots fall outside regulation if they only allow users to interact with the chatbot itself (and no other users), do not search multiple websites or databases when responding to users, and cannot generate pornographic content.
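The exemption described above is, in effect, a conjunction: a chatbot falls outside the Act only if all three conditions hold, and failing any one of them brings it within scope. A minimal illustrative sketch of that logic follows; the function and parameter names are our own shorthand, not terms defined by the OSA or by Ofcom, and this is of course no substitute for a legal scope assessment.

```python
# Illustrative sketch of the three-condition exemption test described above.
# All names are hypothetical; this is not an official or legal determination.

def chatbot_outside_osa_scope(
    users_interact_only_with_bot: bool,   # no user-to-user interaction
    searches_multiple_sources: bool,      # searches multiple websites/databases
    can_generate_pornography: bool,       # capable of pornographic output
) -> bool:
    """Return True only if ALL three exemption conditions are met."""
    return (
        users_interact_only_with_bot
        and not searches_multiple_sources
        and not can_generate_pornography
    )

# A chatbot whose output users can share with other users fails the
# first condition and so falls within the Act's scope.
print(chatbot_outside_osa_scope(True, False, False))   # exempt
print(chatbot_outside_osa_scope(False, False, False))  # regulated
```

The conjunction matters in practice: a service that blocks pornographic output but lets users share the chatbot's responses with one another still fails the first condition and remains regulated.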

The Investigation into X and Grok

On January 12, 2026, Ofcom opened a formal investigation into X Internet Unlimited Company following reports that the Grok AI chatbot was being used to generate and share deeply concerning content, including alleged non-consensual intimate images and child sexual abuse material. The investigation into X focuses on several core provisions under the OSA:

  • Illegal Content Risk Assessments (Sections 9 and 10): Regulated services must carry out a suitable and sufficient illegal content risk assessment and must conduct an updated assessment before making any significant changes to their service. Ofcom is examining whether X failed to assess the risk of users encountering illegal content before introducing or modifying the Grok feature.
  • Illegal Content Safety Duties (Section 11): Services must take proportionate measures to prevent individuals from encountering priority illegal content, including intimate image abuse and child sexual abuse material, and must implement systems designed to minimize the length of time such content is present and swiftly take it down when made aware of it. Ofcom is examining whether X's systems and processes were adequate to meet these duties in relation to content generated via Grok.
  • Protection of Children (Sections 12, 20, and 21): Where a service is likely to be accessed by children, providers must carry out a suitable and sufficient children’s risk assessment and use proportionate systems, including highly effective age assurance, to prevent children from encountering primary priority content such as pornography. Ofcom is examining whether X failed to implement adequate age assurance measures.
  • Duties about Freedom of Expression and Privacy (Section 22): When deciding on and implementing safety measures and policies, regulated services must have particular regard to protecting users from breaches of any statutory provision or rule of law concerning privacy. Ofcom is examining whether X had regard to protecting users from breaches of privacy laws, given the nature of the content allegedly generated by the Grok chatbot.

Ofcom has confirmed that X has since implemented measures to prevent the Grok account from being used to create intimate images of people. However, the investigation remains ongoing to determine what went wrong and what further remedial steps are being taken.

The Investigation into Novi Ltd

On January 15, 2026, Ofcom announced a separate investigation into Novi Ltd regarding its generative AI service, Joi.com. This investigation forms part of Ofcom’s broader enforcement program into age assurance measures across the adult content sector.

  • Children’s Access Assessments (Section 36): Providers must carry out and retain a written record of a children’s access assessment to determine whether the service is likely to be accessed by children. The investigation into Novi Ltd is examining potential failures to comply with this duty.
  • Protection of Children (Section 12): The investigation is also examining whether Novi Ltd has failed to implement highly effective age assurance measures to prevent children from encountering pornographic content on its service.

A Turning Point?

The Grok incident has prompted widespread calls for stronger legal protections, with members of the UK government describing themselves as “deeply alarmed” and victims criticizing governments for moving too slowly. Courts have begun to recognize the severity of such harms: in a landmark 2023 ruling, a judge held that the impact of image-based abuse on victims is akin to that of other kinds of abuse, leading to amendments in the Judicial College Guidelines to include image-based abuse within the definition of “abuse” for the first time.

However, many argue that a fundamental change in approach is needed: if regulation focuses only on cleaning up harm after it has occurred, it will always lag behind technology. Preventing AI-enabled abuse requires acting earlier on system design, company responsibility, and structural safeguards. This raises a critical question: is the Online Safety Act, primarily designed with traditional user-generated content in mind, truly fit for purpose in addressing the distinct challenges of AI-generated abuse, or is bespoke legislation now required?

Looking Ahead

These investigations demonstrate that, for now, Ofcom is applying the OSA to AI-powered services and traditional platforms with equal force. Ofcom has stated it will not hesitate to investigate where there is suspicion that companies are failing in their duties, especially where there is a risk of harm to children. Providers of generative AI services operating in the UK should ensure that their risk assessments, content moderation systems, and age assurance measures meet the standards required under the Act. However, as calls for systemic change grow louder, both regulators and industry should be prepared for the possibility that more targeted AI-specific legislation may follow.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...