AI deepfakes as a live safeguarding issue

AI-generated intimate images of students are no longer a hypothetical safeguarding risk. They are a present reality in secondary schools across England. A 2024 Internet Matters report described AI-generated sexual imagery as an emerging epidemic, with its survey finding that 13% of British teenagers had experienced nude deepfakes in school contexts. UNICEF research published in 2026, conducted across 11 countries, found that at least 1.2 million children disclosed that their images had been manipulated into sexually explicit deepfakes in the preceding year.

The proposed revisions to Keeping Children Safe in Education (KCSIE) 2026, published for consultation by the Department for Education in early 2026, explicitly include AI-generated images and deepfakes within the definition of child-on-child abuse. This is a significant development: schools that lack a response framework for AI-generated image abuse will be operating outside statutory safeguarding expectations.

For Designated Safeguarding Leads (DSLs), this requires an urgent review of current procedures, staff training, and student education.

Understanding the nature of the harm

What distinguishes AI deepfake abuse from other forms of image-based sexual abuse is not the harm it causes, which is equivalent to that of non-consensual intimate image sharing, but the ease with which it can be perpetrated. A student's ordinary social media photograph can be processed through a widely accessible "nudify" application in minutes, with no technical expertise required. The resulting image is then weaponised: shared, used as a threat, or held as leverage for coercive control.

Research published in April 2025 in the journal Behavioral Sciences, drawing on interviews with students and teachers at eight UK schools, found that teachers were "scrambling to keep up" with the rapid development of generative AI and that sexualised deepfakes were not consistently framed as a safeguarding issue. Students reported receiving no explicit education on the topic, even in schools where incidents had occurred.

This gap between the scale of the problem and the adequacy of school responses is the central challenge for DSLs.

Your step-by-step response framework

Step 1: Respond to the disclosure. When a student discloses that they or another student has been the subject of an AI-generated intimate image, treat it as you would any disclosure of sexual abuse. Take the student to a private space, listen without interrupting, reassure them that they are not to blame, and record the disclosure using your school's standard procedures. Do not ask the student to produce or show you the image.

Step 2: Preserve evidence. If the image has been shared digitally, advise the student not to delete it until evidence has been preserved through appropriate channels. If the image is circulating online, contact the Internet Watch Foundation (IWF): its Report Remove service, run jointly with Childline, allows under-18s to have intimate images of themselves assessed and removed.

Step 3: Safeguarding referral. Under the proposed KCSIE 2026 revisions, AI-generated intimate images constitute child-on-child abuse. Follow your existing referral pathway: contact your Local Authority Designated Officer (LADO) if a staff member is involved, or Children's Social Care and/or police if a student is the perpetrator. The Revenge Porn Helpline provides specialist support for victims of image-based sexual abuse, including AI-generated content, though its service is aimed at adults aged 18 and over; under-18s should be directed to Report Remove (see Step 2).

Step 4: Support the victim. Victims of deepfake abuse experience the same psychological harm as victims of other forms of image-based sexual abuse: shame, anxiety, hypervigilance, and loss of trust. Assign a trusted member of staff as a consistent point of contact. Consider referral to specialist Child and Adolescent Mental Health Services (CAMHS) provision if the student's wellbeing is significantly affected.

Step 5: Address the perpetrator. If the perpetrator is a student, the response requires both accountability and education. Research emphasises that punitive responses alone are ineffective: young people who use AI to harm others often do not understand the severity of what they have done. A restorative and educative approach, delivered alongside appropriate school disciplinary procedures, is more likely to prevent recurrence.

Building a preventative curriculum

Response frameworks address harm after it has occurred. Prevention requires a curriculum that builds digital literacy, consent education and critical AI awareness before incidents happen. The 2026 statutory guidance for Relationships, Sex and Health Education (RSHE) specifically requires schools to address image-based abuse and AI-generated content as part of their online safety provision.

Students need to understand: that nudify applications exist and are widely accessible; that creating, sharing or possessing AI-generated intimate images of another person without consent is a form of abuse; that creating such images of anyone under 18 is illegal under the Protection of Children Act 1978, which covers "pseudo-photographs" and was extended by the Sexual Offences Act 2003 to cover anyone under 18; and that the harm to the person depicted is real, regardless of whether the image is "real" or generated.

Sources & References

Internet Matters (2024). Research hub. https://www.internetmatters.org/hub/research/

UNICEF (2026). Deepfake abuse is abuse. Press release, February 2026. https://www.unicef.org/press-releases/deepfake-abuse-is-abuse

Ineqe Safeguarding Group (2026). KCSIE 2026 Consultation: A Professional's Briefing. https://ineqe.com/2026/02/20/kcsie-2026-consultation-briefing/

Frith, L., et al. (2025). Sexualized Deepfakes in UK Schools: Understanding and Preventing AI-Generated Image-Based Sexual Abuse Through Better AI Literacies. Behavioral Sciences, 16(4), 554. https://www.mdpi.com/2076-328X/16/4/554

Internet Watch Foundation (2023). How AI Is Being Abused to Create Child Sexual Abuse Imagery. https://www.iwf.org.uk/media/q4zll2ya/iwf-ai-csam-report_public-oct23v1.pdf