
Safer Internet Day 2026: AI, Smart Choices and What Schools and Families Should Do Right Now

This year's Safer Internet Day focused on children's use of AI — and the risks it creates. Here's what the research shows, what schools should be doing, and how parents can have the conversation today.

By The Safeguard Hub Team · April 2026 · Last reviewed April 2026 · 11 min read · Part of The Safeguard Hub Articles Series

Safer Internet Day 2026 took place on 10 February 2026, with the theme: "Smart tech, safe choices — Exploring the safe and responsible use of AI."[1] Coordinated by the UK Safer Internet Centre and celebrated in approximately 170 countries worldwide, the day marked a significant shift in focus for internet safety education — from the well-trodden risks of social media and stranger contact, to the fast-emerging and poorly understood world of artificial intelligence.

For schools and families, this shift matters. AI tools — chatbots, image generators, deepfake applications, AI tutors — are now embedded in children's daily lives. The safeguarding risks are real and evolving, and many parents and educators feel underprepared to discuss them.

What the Research Shows: Children, AI and Risk

Research by Nominet and Childnet, surveying 2,000 parents and 2,000 children aged 8–17, found significant gaps between how children use AI and how adults understand that use.[2] Key findings include:

  • The majority of young people aged 12–17 are using AI tools regularly — including AI chatbots, AI image generators, and AI-powered social media recommendation algorithms.
  • Many children do not identify AI-generated content as "AI" — they encounter it without recognition in social feeds, video content and messaging apps.
  • Parents significantly underestimate how frequently their children encounter AI — and feel underprepared to discuss its risks or benefits.
  • The fastest-growing safeguarding risk area linked to AI tools is AI-generated child sexual abuse material (CSAM) — a development formally identified as a national threat by the IWF and NCA in their 2025 assessments.[3]

AI Risks in the Safeguarding Context

AI introduces several safeguarding risks that did not exist — or existed in much more limited form — five years ago:

AI-generated CSAM

Offenders are using AI tools to generate indecent images of children without involving a real child. The Internet Watch Foundation (IWF) identified 291,273 pages containing indecent images in 2024 — a 6% year-on-year increase — with AI-generated content representing a growing and increasingly indistinguishable proportion.[3] This is illegal under the Protection of Children Act 1978.

Deepfake image abuse

AI tools — so-called "nudification" apps — are being used by peers and online contacts to create fake intimate images of real children. This is a form of online sexual abuse and may constitute criminal conduct under the Online Safety Act 2023.

AI-assisted grooming

Offenders are using AI to generate more convincing, personalised grooming messages at scale — making it harder for children to recognise manipulation. AI chatbots can also be used to simulate relationships with children before requests escalate.

Misinformation and manipulation

AI-generated text and video content makes it increasingly difficult for children to distinguish authentic information from fabrication — with implications for radicalisation, health misinformation, and financial fraud targeting young people.

What Schools Should Do: An Action Checklist

  • Update your online safety policy to explicitly reference AI tools, AI-generated content, and deepfakes — your existing policy likely predates these risks.
  • Deliver a whole-school assembly or form-time session on what AI is, how it can be misused, and what to do if a student encounters AI-generated harmful content.
  • Train all staff on recognising AI-related safeguarding disclosures — students may not use the language "AI" when describing incidents.
  • Update filtering and monitoring software to include known AI tool domains, where appropriate.
  • Brief parents — parents typically know far less about AI risks than their children know about the tools themselves. A short parent newsletter or information session can be highly effective.

What Parents Can Do Today

  • Ask your child: "Have you used any AI tools — like ChatGPT, image generators or voice cloners?" Open the conversation without alarm.
  • Check the apps on your child's device for AI tools — many are embedded in existing platforms (TikTok, Snapchat, Instagram) and not immediately obvious.
  • Talk about what is real and what is generated — help your child develop the critical instinct to question online content.
  • Report any AI-generated harmful images involving your child to the IWF at report.iwf.org.uk or CEOP at ceop.police.uk.

Report Online Harm

IWF (report CSAM): report.iwf.org.uk
CEOP (child sexual exploitation): ceop.police.uk
UK Safer Internet Centre: saferinternet.org.uk
NSPCC Helpline: 0808 800 5000

Citations

[1] UK Safer Internet Centre (2026). Safer Internet Day 2026 — Smart tech, safe choices. saferinternet.org.uk.

[2] Nominet / Childnet (2026). Children, AI and internet safety: Survey of 2,000 parents and 2,000 children aged 8–17.

[3] Internet Watch Foundation / NCA (2025). Annual Report and National Strategic Assessment 2024/25.

