AI & Technology · KCSIE 2025 · MASH Compliant · Updated May 2026

Generative AI & Safeguarding: A Guide for Schools

AI-generated content presents new and rapidly evolving safeguarding risks. This guide covers what AI-generated harm looks like, what KCSIE 2025 requires of schools, and exactly what DSLs need to do — with a practical compliance checklist.

  • CSAM: AI-generated CSAM is a criminal offence under UK law
  • Deepfakes: sharing AI deepfake intimate images without consent is now illegal under the Online Safety Act 2023
  • Para 143: KCSIE 2025 explicitly addresses generative AI
  • DfE: has published Generative AI product safety expectations


Types of AI-Generated Harm

Generative AI tools are now accessible to any young person with a smartphone. Schools need to understand the specific harms that can occur, both by pupils and to pupils.


AI-Generated Intimate Images (Deepfakes)

High Risk · Peer-on-Peer Image-Based Abuse

AI "nudification" tools can generate realistic intimate images of real people — including fellow pupils and staff — from fully clothed photos. Sharing such an image without consent is now a criminal offence under the Online Safety Act 2023, and creating one is also criminal where the person depicted is a child.

Legal position: Under the Online Safety Act 2023, sharing an intimate image of a person without their consent is a criminal offence, and this applies to AI-generated and manipulated images as well as genuine photographs. Where the person depicted is under 18, the image also constitutes child sexual abuse material (CSAM) and offences under the Protection of Children Act 1978 apply. Schools must treat this as a child protection matter and make a CEOP/police referral.


AI-Generated Child Sexual Abuse Material (CSAM)

Serious Crime · Police Referral Required

AI-generated CSAM is illegal under the Protection of Children Act 1978 regardless of whether a real child was depicted. The Internet Watch Foundation reports a significant rise in AI-generated CSAM since 2023. Any discovery of such material on school devices must be immediately referred to police. Do not view, copy, or distribute.


AI Voice Cloning and Scams

Emerging Risk · Exploitation

AI tools can clone a person's voice from a short recording and generate convincing fake audio. Young people are at risk of being manipulated by AI voice clones purporting to be friends, parents, or authority figures — used in sextortion, financial scams, and coercion. Schools should ensure pupils know to verify unexpected audio requests through a separate channel.


AI Misinformation and Conspiracy Content

KCSIE 2025 · Radicalisation Risk

AI-generated news articles, fake videos, and convincing disinformation now spread rapidly on social media. KCSIE 2025 explicitly recognises misinformation and conspiracy theories (often AI-amplified) as a safeguarding harm. Schools must equip pupils with critical thinking skills to identify and challenge false content.


AI-Assisted Grooming and Exploitation

High Risk · Online Grooming

Offenders increasingly use AI chatbots to groom children at scale — generating personalised messages, maintaining fake personas, and creating emotional manipulation narratives. The National Crime Agency (NCA) has confirmed AI is lowering the barrier for online child sexual exploitation. Warning signs remain the same as traditional grooming.


KCSIE 2025 Requirements

KCSIE 2025 Para 143 is the key reference for generative AI. It adds explicit reference to the DfE's Generative AI: product safety expectations under the filtering and monitoring section. It also strengthens the requirement for the DPO to be involved in AI governance.

What Para 143 Requires

  • Schools must consider the DfE's Generative AI product safety expectations when assessing AI tools used with pupils
  • The DPO must be embedded in AI risk governance
  • Schools must conduct DPIAs for AI tools involving pupil data
  • Filtering must address AI-generated harmful content, not just traditional categories

The Online Safety Act 2023 and AI

  • Platforms hosting illegal content (including AI-generated CSAM) face criminal liability
  • Sharing AI intimate images without consent: criminal offence (Oct 2023)
  • Creating such images (if the subject is a child): CSAM offence
  • Schools must update their Online Safety Policy to reflect these legal changes

DfE Generative AI Product Safety Expectations

The DfE has published a set of capabilities and features that generative AI products should meet to be considered safe for use in schools. When reviewing any AI tool for use with pupils, check against these expectations:

Age-Appropriate Content Filtering

The AI tool must filter out harmful, sexual, violent, or age-inappropriate content when used with pupils

Data Privacy and Protection

UK GDPR compliant; does not use pupil conversations to train models without consent; Data Processing Agreement in place

Transparency

AI-generated content is clearly labelled as such; the tool does not impersonate a human without disclosure

Safeguards Against Misuse

Cannot be used to generate intimate images, CSAM, hate speech, or radicalisation content

Incident Reporting

Provider has a mechanism for schools to report safety incidents and responds to reports promptly

Admin Controls

School administrators can configure access, set restrictions, and review pupil usage logs


Warning Signs for Staff

Potential Victim of AI Harm

  • Distress, withdrawal, or anxiety after being online
  • Reports that intimate or embarrassing images of them are circulating
  • Reluctance to attend school linked to fear of online content being shared
  • Expressions of shame, guilt, or self-blame about images that have been shared
  • Being targeted by peers making comments about images or videos of them

Potential Perpetrator of AI Harm

  • Found to possess or share AI-generated images of real peers
  • Accessing AI nudification or image-manipulation tools on school devices
  • Making threats to create or share intimate images of others
  • Bragging about having or sharing manipulated images of peers

What Schools Must Do — Action Checklist

1. Audit AI Tools in Use

List every AI tool used by pupils — in lessons, for homework, and on personal devices on school networks. Include Microsoft Copilot, Google Gemini, ChatGPT, Canva AI, literacy AI tools, and any subject-specific AI platforms.

2. Assess Each Tool Against DfE Expectations

Use the DfE Generative AI product safety expectations as a checklist. Document your assessment in writing. If a tool does not meet the expectations, either restrict pupil access or require the provider to address the gaps before allowing use.

3. Complete DPIAs with the DPO

A Data Protection Impact Assessment (DPIA) is required for AI tools that process pupil personal data. Complete these with your Data Protection Officer before deployment. The DPO must sign off on the DPIA.

4. Update the Online Safety Policy

Add AI-generated harmful content (deepfakes, nudification tools, AI CSAM) to your Online Safety Policy. Include the legal position under the Online Safety Act 2023. Ensure pupils and parents are aware of the policy.

5. Deliver Pupil Education on AI Safety

Ensure pupils — particularly at KS3 and KS4 — understand: (a) AI-generated intimate images of them can be created and are harmful; (b) creating or sharing such images is illegal; (c) who to tell if they are affected. Online safety lessons must now explicitly address AI.

6. Brief All Staff

In your annual KCSIE update, brief all staff on AI-generated harm — what it is, what the warning signs are, and how to report. Include the legal position, particularly regarding AI intimate images and CSAM.


If AI CSAM or Deepfakes Are Discovered on School Devices

Do not view beyond what is necessary to establish that there is a concern. Preserve the device and any evidence. Contact police (999 or 101). Make a referral to CEOP (ceop.police.uk). Do not copy, share, or delete the content. Refer the matter to the DSL immediately. The school is not responsible for the investigation — that is a matter for police.


Filtering and Monitoring AI Tools

Traditional filtering systems block known harmful URLs. AI tools present a new challenge — the harm is generated in real time and may not be blocked by a URL filter. Schools need to address AI safety at multiple levels:

Network-Level Filtering

Block access to known AI nudification and image-generation tools at the network level. Maintain a list of prohibited AI tools and update it regularly. Your filtering provider should be able to assist.
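At a technical level, the blocklist approach described above reduces to matching a requested host against a maintained list of prohibited domains. The minimal sketch below illustrates the matching logic only — the domain names are placeholders, not real tools, and an actual school would enforce this at the DNS or proxy layer through its filtering provider rather than in application code:

```python
# Minimal sketch of a domain blocklist check. The domains below are
# illustrative placeholders, not real services; a school's filtering
# provider would maintain the real list.
from urllib.parse import urlparse

PROHIBITED_AI_DOMAINS = {
    "example-nudify.invalid",
    "example-image-gen.invalid",
}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host matches, or is a subdomain of,
    any domain on the blocklist."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d)
               for d in PROHIBITED_AI_DOMAINS)
```

Matching on the host (including subdomains) rather than the full URL is deliberate: image-generation sites frequently change paths, so path-based rules go stale far faster than domain-based ones.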

Approved Tool List

Maintain a list of DfE-assessed approved AI tools. Pupils should only use AI tools on this list. Any new AI tool must be approved by the DSL and DPO before being introduced in class.

Monitoring and Alerting

Configure monitoring tools to alert the DSL if pupils access AI image-generation sites. Ensure your monitoring covers all pupil devices on school WiFi, including BYOD. Test that alerts are being received.


Talking to Pupils About AI Safety

Pupils need age-appropriate education on AI risks. Here is a suggested framework for assembly or PSHE lessons:

KS3 (Years 7–9) — Core Messages

  • AI can create realistic fake images of real people — these can cause serious harm
  • Creating or sharing fake intimate images of anyone is illegal
  • It is not your fault if someone creates or shares AI images of you
  • Tell a trusted adult, the DSL, or CEOP if this happens to you or someone you know

KS4 (Years 10–11) — Extended Messages

  • The legal position: Online Safety Act 2023, what "sharing an intimate image" means
  • How to report to CEOP, police, and the school DSL
  • How to get content removed: IWF Report Remove tool, Revenge Porn Helpline
  • Supporting a friend who has been affected — what to say and what not to say
