This year's Safer Internet Day focused on children's use of AI — and the risks it creates. Here's what the research shows, what schools should be doing, and how parents can have the conversation today.
Safer Internet Day 2026 took place on 10 February 2026, with the theme: "Smart tech, safe choices — Exploring the safe and responsible use of AI."[1] Coordinated by the UK Safer Internet Centre and celebrated in approximately 170 countries worldwide, the day marked a significant shift in focus for internet safety education — from the well-trodden risks of social media and stranger contact, to the fast-emerging and poorly understood world of artificial intelligence.
For schools and families, this shift matters. AI tools — chatbots, image generators, deepfake applications, AI tutors — are now embedded in children's daily lives. The safeguarding risks are real and evolving, and many parents and educators feel underprepared to discuss them.
Research by Nominet and Childnet, surveying 2,000 parents and 2,000 children aged 8–17, found significant gaps between how children use AI and how adults understand that use.[2]
AI introduces several safeguarding risks that did not exist — or existed in much more limited form — five years ago:
Offenders are using AI tools to generate indecent images of children without involving a real child. The Internet Watch Foundation (IWF) identified 291,273 webpages containing indecent images of children in 2024, a 6% year-on-year increase, with AI-generated content representing a growing and increasingly indistinguishable proportion.[3] Such material is illegal under the Protection of Children Act 1978.
Peers and online contacts are using so-called "nudification" apps to create fake intimate images of real children. This is a form of online sexual abuse and may constitute criminal conduct under the Online Safety Act 2023.
Offenders are using AI to generate more convincing, personalised grooming messages at scale, making manipulation harder for children to recognise. AI chatbots can also be used to simulate relationships with children, building trust before requests escalate.
AI-generated text and video content makes it increasingly difficult for children to distinguish authentic information from fabrication — with implications for radicalisation, health misinformation, and financial fraud targeting young people.
Citations
[1] UK Safer Internet Centre (2026). Safer Internet Day 2026 — Smart tech, safe choices. saferinternet.org.uk.
[2] Nominet / Childnet (2026). Children, AI and internet safety: Survey of 2,000 parents and 2,000 children aged 8–17.
[3] Internet Watch Foundation / NCA (2025). Annual Report and National Strategic Assessment 2024/25.