Safer Internet Day 2026: Let's keep AI safety on the agenda
As part of Safer Internet Day 2026, IPPPRI Director Prof Sam Lundrigan discusses the institute's recent research on the topic of AI-generated child sexual abuse material, and shares her advice on talking to young people about internet safety.
The theme of this year’s Safer Internet Day is ‘Smart tech, safe choices – Exploring the safe and responsible use of AI.’ This is a topic we explore at length at IPPPRI, and one that should remain at the top of the agenda for any organisation with a responsibility for keeping young people safe online.
We know just how quickly the scale of the threat linked to AI is growing. Technology by its very nature evolves rapidly, and we must accept that tech-enabled abuse will continue to grow at pace while the regulatory landscape remains unchanged.
Tackling an increasing threat
We also know that the demand for AI-generated child sexual abuse material online is growing. Our own research published last year found evidence of growing interest in this technology, and of online offenders’ desire for others to learn more and create abuse images. A year on, we can only imagine how much further this will have grown.
Last year saw the first commitment by the UK government to introduce new laws to make it illegal to possess, create or distribute AI tools designed to generate child sexual abuse material (CSAM), punishable by up to five years in prison. The laws will also make it illegal for anyone to possess so-called “paedophile manuals” which teach people how to use AI to sexually abuse children.
We then saw additional legislation tabled, giving designated bodies like AI developers and child protection organisations the ability to scrutinise AI models, and ensure safeguards are in place to prevent them generating or proliferating child sexual abuse material, including indecent images and videos of children.
We have welcomed this progress at every step, but progress cannot remain this slow when dealing with the fastest-growing crime enabler out there – technology. There is little point designing laws to prevent tech-enabled harm if they are out of date before they are implemented. We must learn and react faster.
Bringing AI into the conversation
We must also make AI part of the social media safety conversations that I hope are now becoming more commonplace in homes around the world.
We know that the best way to protect young people from harm online is to equip them with the knowledge and understanding they need to stay safe, but also to help them feel confident speaking with their parents, guardians or any trusted adults about any concerns they have.
Our research for the Internet Watch Foundation’s ‘Think Before you Share’ campaign showed this without doubt – talking works.
Now we need to build on this and make sure that the conversations teachers, parents and caregivers hold with young people reflect the reality those young people are facing.
It's not enough to simply talk about the risks of sharing intimate images, grooming and exploitation. We must now also talk about AI.
This is the language of today’s world for young people growing up in a supremely digital age. We have to understand it, and we have to talk to them about it.
Talking to young people about internet safety
Every young person should know that not every image they see online is real, that any completely innocent image can be manipulated and, perhaps most importantly, that AI-generated images of abuse are never ‘harmless’.
The NSPCC has shared excellent safety tips on how to support young people in using AI safely, which provide a strong starting point for addressing this with young people (copied below).
Now we must build on this to adapt our legislation, our curriculum, our criminal justice system and our support for victims and survivors to reflect the reality of an AI-powered digital world.
1. Talk about where AI is being used
A good place to start is by having open conversations with your child about where they are seeing AI tools and content online. This is an opportunity to talk about the risks and benefits they are experiencing.
2. Remind young people not everything is real
You can remind them that not everything online is real and much of what we see may have been edited.
AI is continually evolving – there are common indicators that something is AI-generated, but remember they are not always obvious. These can include an overall ‘perfect’ appearance, or body parts and movements that appear distorted or do not look ‘true to life’.
3. Discuss misuse of generative AI
It’s important to address the misuse of generative AI to create harmful content in an age-appropriate way. Make sure that your child knows it’s not OK for anyone to create content to harm other people. If they ever experience this, or are worried that someone might do it, they can report it.
If you are concerned about how someone is behaving towards a child online, this can be reported to law enforcement agency CEOP. If a sexual image or video has been created, this can be reported via Report Remove.
4. Remind them to check sources
AI summaries and chatbots can be helpful tools for getting quick answers to a question, but it’s important to know that the information is coming from a reliable source.
Sources should be listed and will often include links so they can be checked. If no source is listed, or the source is not reliable, encourage them to check a trusted site for themselves.
5. Signpost to safe sources of health and wellbeing advice
We know young people will use the internet to get advice and answers to questions, which could mean they come across advice from an AI bot or summary. It’s important they access safe information from reliable sources, so it can be helpful to make sure they know of child-friendly, safe sites such as Childline.
6. Make sure they know where to go for help
Ensure your child knows they can talk to you or another safe adult like a teacher if anything worries them online or offline. They can also contact Childline 24/7 on 0800 11 11 or via email or online chat – there are lots of ways they can get support.
Prof Sam Lundrigan, Director, IPPPRI