AI: ‘Schools Must Act Now on Deepfakes and Therapy Chatbots’
In 2025, AI companies such as OpenAI faced numerous legal challenges, including lawsuits filed by parents alleging that these tools contributed to the suicides of their teenage children. Despite these serious allegations, AI companies continue to advance their offerings under the banner of “giving people what they want.” This has produced features such as “Cameo” in OpenAI’s Sora 2, which lets users insert digital likenesses of real people into fabricated scenes, and xAI’s Grok, which can generate explicit and highly realistic images and videos.
As governments scramble to understand and regulate these technologies, schools find themselves in a precarious position, needing to act swiftly.
The Promises and Perils of AI in Education
While many believe AI has significant potential to enhance learning by promoting critical thinking and providing individualized support, the new safeguarding risks these technologies introduce demand equally serious attention.
Two issues demand schools’ most urgent attention:
Deepfakes: A New Form of Victimization
The capability to create realistic fake images and videos is not new, but the speed and accessibility of today’s tools have drastically changed the landscape. Children no longer need technical expertise or expensive software; a convincing deepfake video is now just a click away. Deepfakes have become a form of entertainment on social media, shared among peers who often have little understanding of the consequences.
Governments worldwide are moving to enforce existing laws and introduce new legislation to protect victims of deepfakes, yet students often remain unaware of the legal implications. In Australia, for instance, proposed legislation could impose prison sentences of up to six years even for resharing such content.
Because the prefrontal cortex is still developing in adolescence, children are biologically inclined to react quickly without rationalizing the consequences of their actions. The ability to create and disseminate harmful material from the comfort of a bedroom therefore poses a significant risk.
Chatbots as Therapists: A Double-Edged Sword
Initially, many educators found the notion of relying on AI chatbots for mental health support laughable. However, a recent Mental Health UK study reveals that one in three adults has turned to chatbots for mental health assistance, a figure that is likely even higher among teenagers.
While these tools are available 24/7 and offer a comforting simulated presence, they cannot act on disclosures of serious harm. A chatbot can simulate care and concern, but it has no duty or mechanism to report a safeguarding disclosure; it stays silent on critical issues, leaving vulnerable adolescents who rely on it without the protection a human would provide.
The Urgent Need for School Response
Educational institutions must not wait for government regulation or expect profit-driven AI companies to self-regulate. Instead, they should engage students in meaningful conversations about the implications of these technologies, updating safeguarding policies and delivering the message consistently through multiple channels.
At many schools, proactive measures include creating impactful videos, revising PSHE lessons, holding assemblies, and inviting parents for discussions. Failing to address these issues means that children may become both the experiment and the scapegoat for adult failures.
As AI technologies evolve at pace, schools must act now to ensure that students are equipped with the knowledge and skills to navigate these challenges responsibly.