Preventing Technology-Facilitated Violence and Deepfake Sexual Abuse: Supporting Teens and Educators

Prof Tanya Horeck’s impact-focused research examines gender-based technology-facilitated violence and ways to prevent it. Her work seeks to empower young people to navigate digital spaces safely and to assist teachers and parents in supporting them.


With her collaborators Prof Jessica Ringrose (UCL) and Prof Kaitlynn Mendes (Western University, Canada), Prof Horeck has worked on two AHRC-funded projects: the first on young people’s experiences of online harms during Covid-19, and the second on school-based digital defence and activism lessons. The findings from these projects will be published in a forthcoming book, Teens and Tech-Facilitated Violence (Oxford University Press, 2026).

Prof Horeck’s current impact project addresses the rise of AI-facilitated harms. A 2024 Internet Matters report called AI-generated sexual imagery an emerging ‘epidemic’ after its survey showed that 13% of British teenagers have had an experience involving nude deepfakes at school. This was echoed in a US report by Thorn, which found that "1 in 10 minors said they know of cases where their friends and classmates have created synthetic non-consensual intimate images (or 'deepfake nudes') of other kids using generative AI tools".

Countering non-consensually created synthetic media is therefore an urgent challenge, and schools need guidance on how to prevent such violence and promote ethical digital behaviour. Working with educational partners, Prof Horeck is leading a study on how to strengthen prevention initiatives in schools. She is conducting focus groups with teachers and young people to find out what they know about deepfakes and what strategies, policies, and supports (i.e. from schools, parents, and wider society) they think could help mitigate these harms.

In partnership with the leading internet safety charity South West Grid for Learning, her research team will also analyse cases of synthetic image abuse reported to the Revenge Porn Helpline and the Professionals Online Safety Helpline.

This analysis will help assess the scale of deepfake sexual abuse in the UK and identify ways to better support victim-survivors. As part of the project, interviews will be conducted with helpline operators to gather first-hand accounts of the issues they are facing. The findings from this study will be published in a public report in 2026.

See also

REF 2021 case study: Enhancing policies and practice in secondary school sex education to reduce image-based sexual abuse and improve internet safety for teenagers