The home secretary is calling on tech companies like Facebook to do “their moral duty” and stop rolling out end-to-end encrypted messaging services until safeguards for children are in place.

Social media companies currently use a range of tools to help detect and flag potentially abusive content, such as child abuse images or grooming in messages.

Enabling end-to-end encryption for all users, which means messages can only be seen by the sender and receiver, could render the current tools useless.
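In practice, end-to-end encryption means each user’s device holds a private key that never leaves it, and the platform only ever relays ciphertext. The sketch below uses the open-source PyNaCl library purely for illustration (real messengers such as WhatsApp run the far more elaborate Signal protocol) to show why a server in the middle has nothing readable to scan:

```python
# Minimal end-to-end encryption sketch (illustrative only; production
# messengers use the far more elaborate Signal protocol).
from nacl.public import PrivateKey, Box

# Each user generates a key pair on their own device; the private
# keys never leave the device, so the platform never holds them.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and his public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"See you at 6")

# The server relays the ciphertext unread: without a private key it
# cannot run image hashing, text scanning or any other check on it.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"See you at 6"
```

Once messages travel in this form, the server-side detection tools described below have nothing readable to work on.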

The NSPCC estimates that 70 per cent of all child abuse reports could be lost if end-to-end encryption plans are put in place without new abuse detection tools.

What did the home secretary say?

On Facebook’s encryption plans, Priti Patel said, “The offending will continue, the images of children being abused will proliferate — but the company intends to blind itself to this problem through end-to-end encryption which prevents all access to messaging content.”

“This is not acceptable. We cannot allow a situation where law enforcement’s ability to tackle abhorrent criminal acts and protect victims is severely hampered.”

What do the NSPCC say?

Alison Trew, the NSPCC’s Senior Policy and Public Affairs Officer, said, “The past year of the pandemic has shown just how dangerous the online world can be for children.

“We’ve seen the threat change: we’ve seen how children and abusers are spending more time on live streaming and video chatting, and especially on private messaging, which is where the majority of sexualised images will be sent and where a lot of grooming will happen.

“It’s really concerning to hear that companies may be rolling out end-to-end encryption without the appropriate safeguards in place.”

How does current anti-abuse technology work?

Richard Hale, a Digital Forensics Lecturer at Birmingham City University, describes the current abuse detection tools as a mixture of databases and AI.

“Images are classified by a trained analyst, who determines the legality and severity of a suspect image or video.”

“They are then given a value (called a ‘hash’), which is how these ‘known images’ are located.”

“There are also ‘smart tools’ that measure everything from nudity level to details about the image. Optical Character Recognition is used to detect ‘common words/phrases’ associated with grooming or the trade in images/videos.”
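To make that concrete, here is a rough Python sketch of the two techniques Hale describes: looking an image’s hash up in a database of known material, and flagging messages that contain watch-listed phrases. It is a toy illustration only; real systems use perceptual hashes such as Microsoft’s PhotoDNA, which survive resizing and re-compression, rather than the plain SHA-256 used here, and trained classifiers rather than a simple phrase list.

```python
# Toy sketch of hash-database matching and phrase flagging.
# Real systems use perceptual hashes (e.g. PhotoDNA) and trained
# classifiers; SHA-256 and a phrase list are simple stand-ins.
import hashlib

# Hashes of 'known images', as classified by trained analysts.
# The entry below is a made-up placeholder, not a real hash.
KNOWN_IMAGE_HASHES = {"placeholder-not-a-real-hash"}

# Placeholder stand-ins for phrases associated with grooming.
FLAGGED_PHRASES = ("example flagged phrase", "another flagged phrase")

def file_hash(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_image(path: str) -> bool:
    """True if the file's hash matches the known-image database."""
    return file_hash(path) in KNOWN_IMAGE_HASHES

def message_is_suspect(text: str) -> bool:
    """True if the message contains any watch-listed phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in FLAGGED_PHRASES)
```

Both checks run on plaintext images and messages, which is why they stop working once a service can only see ciphertext.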

What next for anti-abuse technology?

Richard Hale thinks the next step for abuse detection depends on the companies running the messaging services.

“The bigger the company, the more likely they are to want to sell their services on the benefit of privacy. This means the encryption gets better, and the tools have a mountain to climb to detect criminal activity.”

“The real truth of the matter is we only catch the stupid criminals; the clever ones often evolve quicker than those pursuing them can.”

Striking the balance between child protection and privacy is tricky, says Hale.

“On the one hand we should all want to do our part to stamp out this vile crime; on the other, we all hold elements [of our lives] as very private and would not want them easily captured.”

“Having worked in the industry, I see the arguments for both sides, which often makes me argue with myself over how much freedom [of access] to my information is too much.”

What have Facebook said?

A Facebook spokesperson said, “Child exploitation has no place on our platforms, and Facebook will continue to lead the industry in developing new ways to prevent, detect and respond to abuse.”

“[End-to-end encryption’s] full rollout on our messaging services is a long-term project and we are building strong safety measures into our plans.”
