5Rights Foundation welcomes landmark UK legislation to protect children from online predators  

The UK has just become the first country in the world to introduce laws criminalising tools used to create AI-generated child sexual abuse material (CSAM), marking a watershed moment in the fight to keep children safe in the digital age.

This announcement comes just weeks after 5Rights escalated legal action against Meta for failing to prevent AI-generated child sexual abuse material from being shared and sold on Instagram.   

The new legislation, introduced as part of the forthcoming Crime and Policing Bill, will make it illegal to create, possess or distribute AI tools designed to generate CSAM; criminalise the possession of manuals teaching offenders how to create such images; and criminalise the operation of platforms that facilitate the sharing of CSAM or grooming.

These critical measures close dangerous loopholes and ensure that AI does not become a tool for abusers but remains subject to our laws, our values, and our duty to protect children.  

The new laws follow mounting pressure from 5Rights’ campaign against AI-generated CSAM, including a formal legal complaint against Meta submitted to Ofcom and the UK Information Commissioner’s Office (ICO).  

Investigations by 5Rights and a specialist law enforcement unit uncovered:  

  • AI-generated CSAM being sold and shared on Instagram despite Meta’s legal obligations to prevent such content.  
  • Meta’s algorithm actively promoting accounts advertising AI-generated CSAM.  
  • Repeated failures by Meta to detect and remove illegal material, despite direct warnings and legal notifications.  
  • The absence of a clear reporting process, which forced police to escalate cases formally before any action was taken.  

In response, 5Rights reported the company to Ofcom and called on the ICO to take action against Meta.  

A long-fought victory for child protection  

Following months of discussions with the government, Baroness Beeban Kidron, Crossbench Peer and Chair of 5Rights Foundation, said:   

Leanda Barrington-Leach, Executive Director of 5Rights Foundation, also welcomed the announcement:  

The growing threat of AI-generated CSAM is well-documented. The Internet Watch Foundation (IWF) reports a 380% increase in cases of AI-generated CSAM over the past year, with some material so realistic that it is indistinguishable from real-life abuse. AI tools are being used to “nudify” images of real children, manipulate their voices, and even blackmail victims into further exploitation.  

The rapid evolution of AI presents enormous opportunities, but without regulation, it can be weaponised to harm children at an unprecedented scale.   

Saturday’s announcement is a victory for child protection, but it must be followed by action from the UK Government to ensure that the opportunities and risks for children are considered in its plans for the full-scale development and deployment of AI technologies across the economy and the public services that children have no choice but to use.