ARU works on tech to tackle online child abuse

Published: 23 November 2021 at 14:01

PIER’s partnership with AI technology company receives Government funding

Anglia Ruskin University (ARU) has won Government funding to help develop a new way of tackling online child abuse.

ARU’s Policing Institute for the Eastern Region (PIER) is working with SafeToNet to expand the company’s SafeToWatch artificial intelligence (AI) technology.

The partnership is one of five projects from across the UK and Europe to receive backing as part of the £555,000 Safety Tech Challenge Fund, administered by the Department for Digital, Culture, Media & Sport and the Home Office.

Social media companies are increasingly using end-to-end encryption, which improves privacy for users but simultaneously makes the detection of illegal content more difficult for law enforcement agencies. The Safety Tech Challenge Fund is focusing on initiatives to tackle child sexual abuse material despite this growth in end-to-end encryption.

SafeToWatch uses AI to block specific video content from being created at source. Rather than relying on buy-in from third parties, such as social media companies, it works within a device’s camera app to identify inappropriate images and prevent them from being filmed.

The Government funding – an initial £85,000 over the next five months – will see the SafeToWatch technology developed so it can be trained to recognise child sexual abuse material in real time and prevent it from being created. In future, SafeToWatch could potentially be installed as standard on any smart device.

As part of the partnership, experts from ARU will analyse and label the data collected to improve the effectiveness of the technology’s AI algorithms.

Professor Samantha Lundrigan, Director of the Policing Institute for the Eastern Region (PIER) at ARU, said:

“PIER is delighted to be supporting SafeToNet on the development of its ground-breaking SafeToWatch technology.

“The SafeToWatch tool will involve the development of device-level artificial intelligence to prevent the uploading and sharing of indecent images. This is crucial for improving the protection of children, particularly as the use of end-to-end encryption continues to grow.”