Why AI Safety Takes Centre Stage on Safer Internet Day 2026
As the world marks Safer Internet Day on February 10, AI is unsurprisingly taking centre stage. In the UK, the Safer Internet Centre has chosen the theme ‘Smart tech, safe choices – Exploring the safe and responsible use of AI’ for 2026. This emphasis reflects growing concern over how AI systems are deployed, secured, and misused across digital platforms.
In the US, ConnectSafely highlights generative AI, media literacy, and critical thinking, aligning with its global theme, ‘Together for a Better Internet’. For cybersecurity professionals, these themes underline a shared challenge: AI is increasingly embedded in digital infrastructure while simultaneously expanding the attack surface.
AI Regulation and Online Harm
In the UK, new legislation has begun to address some of the most visible AI-enabled security risks. As of February 6, it is illegal to request or create an AI-generated deepfake of a person without their consent. In the US, federal policy emphasizes innovation and strategic advantage, often balancing security concerns against economic priorities. The “Take It Down” Act, passed in May 2025, became the first nationwide law to directly target the publication of AI-generated deepfakes and other non-consensual intimate imagery. In addition, 46 states have criminalized the intentional creation and distribution of such material.
For organizations operating internationally, this fragmented legal environment complicates compliance, particularly regarding identity abuse, impersonation fraud, and synthetic media.
AI as Threat Amplifier
According to BCG’s AI Radar 2026, 65% of chief executives rank accelerating AI adoption among their top three priorities for 2026. However, this rapid uptake has created new security exposure. In response, 38 US states have enacted close to 100 AI-related measures. One of the most recent, passed in Texas and effective from January, regulates certain commercial uses of AI systems, reflecting increased scrutiny of data handling, automated decision-making, and AI-driven risk.
Matt Cooke, EMEA Cybersecurity Strategist at Proofpoint, describes the tension between innovation and exposure: “While Gen AI unlocks exciting opportunities, it also presents new dangers, including deepfakes, misinformation, and data privacy vulnerabilities. That’s why a human-centric approach to online safety matters so much – because your online life is your real life. Screens don’t make things disappear; screenshots are forever, and the internet remembers.”
Kamran Ikram, Accenture’s Cybersecurity Lead in the UK & Ireland, adds another layer of concern: “Safer Internet Day is a timely reminder that cyber risk today is less about hacking systems and more about exploiting human behaviour. The growing threat comes from AI-driven social engineering, where attackers target trust instead of technical flaws.”
Research from Accenture reveals a workforce that feels cyber-confident while remaining undertrained. “Four in five employees believe they could spot a phishing attempt or AI-driven cyberattack at work, yet more than a third of UK workers have never received cybersecurity training,” Ikram adds. “Organizations need to build resilience across every part of their operations and supply chains, which means ongoing education on cyber threats and clear expectations around verification. In an AI-driven threat landscape, businesses can’t rely on patchy preparedness when attackers are advancing by the day.”
Building Security into AI Systems
Alongside legislation, both the UK Safer Internet Centre and ConnectSafely in the US have published guidance to reduce risk for users, parents, and professionals working with vulnerable groups. These resources increasingly emphasize system design, trust, and verification rather than constant monitoring.
Paul Holt, Group Vice President of EMEA at Digicert, frames online safety as a structural security issue: “As a parent, I have learned that safety in the modern world is no longer about watching everything. It is about putting the right systems in place when oversight is no longer possible. Safer Internet Day is a reminder that in a machine-led Internet, trust has to be proven every time, or it will fail at scale.”
In the UK, AI regulation remains largely sector-specific. While an AI bill announced in the King’s Speech in July 2025 is expected to regulate only the most powerful models, a broader framework is not anticipated until at least May 2026. For cybersecurity leaders, the lesson is clear: as AI becomes embedded across digital services, security, governance, and human vigilance must evolve at the same pace, or AI-driven risk will scale faster than the defenses designed to contain it.