This programme of research explores how artificial intelligence (AI) is being used in young people’s lives and educational settings, and how it can be misused to cause harm, including the creation and sharing of deepfakes.
By working closely with young people, educators, parents, and frontline practitioners, the project aims to strengthen prevention, improve responses to AI-facilitated abuse, and support safer digital environments in schools and beyond.
The first strand works directly with young people aged 13–18 to explore how they understand and experience AI in their everyday lives, relationships, and online interactions. Using creative and participatory methods, such as scenario-based discussions and mind-mapping, the research captures young people’s perspectives on AI-facilitated harms, including deepfakes and image-based abuse. These insights ensure that young people’s voices are central to shaping effective prevention and education strategies.
The second strand brings together teachers and educators to examine their existing knowledge of AI, the challenges they face in responding to AI-related harms, and the support they need in practice. Through focus groups and follow-up roundtable discussions, educators will reflect on emerging risks, safeguarding concerns, and the impact of AI on teaching and student wellbeing. Findings will directly inform teacher training, policy guidance, and the development of practical resources for schools.
Working in partnership with the South West Grid for Learning and the Revenge Porn Helpline, the third strand analyses anonymised case data and conducts interviews with frontline practitioners supporting victims of AI-facilitated abuse. The research explores patterns of harm, help-seeking, and response pathways, with a particular focus on image-based abuse and deepfakes. This practitioner-led insight strengthens understanding of real-world impacts and informs more effective prevention, intervention, and policy responses.
For more information on any of these projects, contact [email protected].