Kendall Urges Action Against Grok’s AI-Generated Sexual Deepfakes
Technology Secretary Liz Kendall has called on Elon Musk's social media platform, X, to take decisive measures against the misuse of its artificial intelligence chatbot, Grok, to produce non-consensual sexualized images of women and girls.
This demand arises amidst growing concerns that Grok could be exploited to generate sexualized imagery without consent, intensifying scrutiny of how major platforms manage harmful and illegal content associated with generative AI tools.
The Legal Framework: UK Online Safety Act 2023
Alexander Brown, Head of Technology, Media and Telecoms at Simmons & Simmons, has highlighted the implications of the UK Online Safety Act 2023, which addresses the sharing of intimate images, including certain AI-generated deepfakes. According to the Act:
- The sharing of intimate images, including AI-generated deepfakes that “appear to show” someone in an intimate state, is deemed a criminal offence.
- Companies are mandated to implement robust measures against illegal content and activity.
- Platforms like X are required to take proactive steps to mitigate the risk of their services being used for illegal purposes.
The Act designates the sharing of intimate images without consent as a priority offence, compelling X to act swiftly to prevent such content from being hosted on its platform.
Enforcement and Penalties
Ofcom, the UK communications regulator, enforces this legislation, with potential fines reaching up to £18 million or 10 percent of qualifying worldwide revenue, whichever is greater. In extreme cases, Ofcom may pursue court orders that can disrupt business operations, such as requiring payment providers to withdraw services or ISPs to block access to the site in the UK.
Concerns Over X’s Response
Reports indicate that complainants have voiced concerns about X's handling of flagged images, alleging that the platform failed to remove content that users reported. Ofcom is expected to examine whether X took the necessary preventive measures before the harmful material appeared.
Broader Implications for Generative AI
Kendall’s intervention aligns with an ongoing policy discussion surrounding the effects of generative AI tools on online safety. Lawmakers and regulators are increasingly focusing on deepfake technology and the rapidity with which such content can be created and disseminated across vast networks.
The issues surrounding Grok also raise critical questions about how platforms manage AI features integrated within consumer services. As companies continue to launch chatbots and image generation tools at an accelerated pace, regulators have made it clear that existing safety and content laws remain applicable when these tools are employed for illegal activities.
Conclusion
The Online Safety Act sets clear expectations for platforms to act proactively and swiftly in response to priority offences. As the situation unfolds, Ofcom will scrutinize the measures that X has implemented both before and after the emergence of harmful material.