AI Safety: Key Focus for Safer Internet Day 2026

Why AI Safety Takes Centre Stage on Safer Internet Day 2026

As the world marks Safer Internet Day on February 10, AI is unsurprisingly taking centre stage. In the UK, the Safer Internet Centre has chosen the theme ‘Smart tech, safe choices – Exploring the safe and responsible use of AI’ for 2026. This emphasis reflects growing concern over how AI systems are deployed, secured, and misused across digital platforms.

In the US, ConnectSafely highlights generative AI, media literacy, and critical thinking, aligning with its global theme, ‘Together for a Better Internet’. For cybersecurity professionals, these themes underline a shared challenge: AI is increasingly embedded in digital infrastructure while simultaneously expanding the attack surface.

AI Regulation and Online Harm

In the UK, new legislation has begun to address some of the most visible AI-enabled security risks. On February 6, it became illegal to request or create AI-generated deepfakes of a person without their consent. In the US, federal policy emphasizes innovation and strategic advantage, often balancing security concerns against economic priorities. The “Take It Down” Act, passed in May 2025, became the first nationwide law to directly target the publication of AI-generated deepfakes and other non-consensual intimate imagery. Additionally, 46 states have criminalized the intentional creation and distribution of such material.

For organizations operating internationally, this fragmented legal environment complicates compliance, particularly regarding identity abuse, impersonation fraud, and synthetic media.

AI as Threat Amplifier

According to BCG’s AI Radar 2026, 65% of chief executives rank accelerating AI adoption among their top three priorities for 2026. However, this rapid uptake has created new security exposure. In response, 38 US states have enacted close to 100 AI-related measures. One of the most recent, passed in Texas and effective from January, regulates certain commercial uses of AI systems, reflecting increased scrutiny of data handling, automated decision-making, and AI-driven risk.

Matt Cooke, EMEA Cybersecurity Strategist at Proofpoint, describes the tension between innovation and exposure: “While Gen AI unlocks exciting opportunities, it also presents new dangers, including deepfakes, misinformation, and data privacy vulnerabilities. That’s why a human-centric approach to online safety matters so much – because your online life is your real life. Screens don’t make things disappear; screenshots are forever, and the internet remembers.”

Kamran Ikram, Accenture’s Cybersecurity Lead in the UK & Ireland, adds another layer of concern: “Safer Internet Day is a timely reminder that cyber risk today is less about hacking systems and more about exploiting human behaviour. The growing threat comes from AI-driven social engineering, where attackers target trust instead of technical flaws.”

Research from Accenture reveals a workforce that feels cyber confident while being undertrained. “Four in five employees believe they could spot a phishing attempt or AI-driven cyberattack at work, yet more than a third of UK workers have never received cybersecurity training,” Ikram adds. “Organizations need to build resilience across every part of their operations and supply chains, which means ongoing education on cyber threats and clear expectations around verification. In an AI-driven threat landscape, businesses can’t rely on patchy preparedness when attackers are advancing by the day.”

Building Security into AI Systems

Alongside legislation, both the UK Safer Internet Centre and ConnectSafely in the US have published guidance to reduce risk for users, parents, and professionals working with vulnerable groups. These resources increasingly emphasize system design, trust, and verification rather than constant monitoring.

Paul Holt, Group Vice President of EMEA at Digicert, frames online safety as a structural security issue: “As a parent, I have learned that safety in the modern world is no longer about watching everything. It is about putting the right systems in place when oversight is no longer possible. Safer Internet Day is a reminder that in a machine-led Internet, trust has to be proven every time, or it will fail at scale.”

In the UK, AI regulation remains largely sector-specific. While an AI bill announced in the King’s Speech in July 2025 is expected to regulate only the most powerful models, a broader framework is not anticipated until at least May 2026. For cybersecurity leaders, the lesson is clear: as AI becomes embedded across digital services, security, governance, and human vigilance must evolve at the same pace, or AI-driven risk will scale faster than the defenses designed to contain it.

More Insights

Revolutionizing Drone Regulations: The EU AI Act Explained

The EU AI Act represents a significant regulatory framework that aims to address the challenges posed by artificial intelligence technologies in various sectors, including the burgeoning field of...

Embracing Responsible AI to Mitigate Legal Risks

Businesses must prioritize responsible AI as a frontline defense against legal, financial, and reputational risks, particularly in understanding data lineage. Ignoring these responsibilities could...

AI Governance: Addressing the Shadow IT Challenge

AI tools are rapidly transforming workplace operations, but much of their adoption is happening without proper oversight, leading to the rise of shadow AI as a security concern. Organizations need to...

EU Delays AI Act Implementation to 2027 Amid Industry Pressure

The EU plans to delay the enforcement of high-risk duties in the AI Act until late 2027, allowing companies more time to comply with the regulations. However, this move has drawn criticism from rights...

White House Challenges GAIN AI Act Amid Nvidia Export Controversy

The White House is pushing back against the bipartisan GAIN AI Act, which aims to prioritize U.S. companies in acquiring advanced AI chips. This resistance reflects a strategic decision to maintain...

Experts Warn of EU AI Act’s Impact on Medtech Innovation

Experts at the 2025 European Digital Technology and Software conference expressed concerns that the EU AI Act could hinder the launch of new medtech products in the European market. They emphasized...

Ethical AI: Transforming Compliance into Innovation

Enterprises are racing to innovate with artificial intelligence, often without the proper compliance measures in place. By embedding privacy and ethics into the development lifecycle, organizations...

AI Hiring Compliance Risks Uncovered

Artificial intelligence is reshaping recruitment, with the percentage of HR leaders using generative AI increasing from 19% to 61% between 2023 and 2025. However, this efficiency comes with legal...