5Rights Foundation welcomes landmark UK legislation to protect children from online predators
The UK has just become the first country in the world to introduce laws criminalising tools used to create AI-generated child sexual abuse material (CSAM), marking a watershed moment in the fight to keep children safe in the digital age.
This announcement comes just weeks after 5Rights escalated legal action against Meta for failing to prevent AI-generated child sexual abuse material from being shared and sold on Instagram.
The new legislation, introduced as part of the forthcoming Crime and Policing Bill, will make it illegal to create, possess or distribute AI tools used to generate CSAM, criminalise the possession of manuals teaching how to create these images, and outlaw the operation of platforms that facilitate CSAM and grooming.
These critical measures close dangerous loopholes and ensure that AI does not become a tool for abusers but remains subject to our laws, our values, and our duty to protect children.
5Rights’ legal action against Meta exposed AI-fuelled exploitation
The new laws follow mounting pressure from 5Rights’ campaign against AI-generated CSAM, including a formal legal complaint against Meta submitted to Ofcom and the UK Information Commissioner’s Office (ICO).
Investigations by 5Rights and a specialist law enforcement unit uncovered:
- AI-generated CSAM being sold and shared on Instagram despite Meta’s legal obligations to prevent such content.
- Meta’s algorithm actively promoting accounts advertising AI-generated CSAM.
- Repeated failures by Meta to detect and remove illegal material, despite direct warnings and legal notifications.
- The absence of a clear reporting process, forcing police to escalate cases formally before any action was taken.
In response, 5Rights reported the company to Ofcom and called on the ICO to take action against Meta.
A long-fought victory for child protection
Following months of discussions with the government, Baroness Beeban Kidron, Crossbench Peer and Chair of 5Rights Foundation, said:
“It has been a long fight to get the AI Child Sexual Abuse Offences into law, and the Home Secretary’s announcement today that they will be included in the Crime Bill is a milestone. AI-enabled crime normalises the abuse of children and amplifies its spread. Our laws must reflect the reality of children’s experience and ensure that technology is safe by design and default.
I pay tribute to my friends and colleagues in the specialist police unit that brought this to my attention and commend them for their extraordinary efforts to keep children safe. All children whose identity has been stolen or who have suffered abuse deserve our relentless attention and unwavering support. It is they – and not politicians – who are the focus of our efforts.”
Leanda Barrington-Leach, Executive Director of 5Rights Foundation, also welcomed the announcement:
“We are delighted to see the UK government today respond to our call on behalf of children and police and announce the inclusion of AI child sexual abuse offences in the Crime Bill. Criminalising paedophile platforms, tools and handbooks makes it clear that there is no place for child sexual abuse in our society – whether generated using AI or not. It also provides police forces, already facing huge pressure from tech-enabled exploitation of children, with the means to act against these most heinous of offences.
The message today is that AI must respect our laws, our values, and our children. The Online Safety Act requires that tech be safe by design and default. With this announcement, the Government has added a clear “or else”.”
The growing threat of AI-generated CSAM is well-documented. The Internet Watch Foundation (IWF) reports a 380% increase in cases of AI-generated CSAM over the past year, with some material so realistic that it is indistinguishable from real-life abuse. AI tools are being used to “nudify” images of real children, manipulate their voices, and even blackmail victims into further exploitation.
The rapid evolution of AI presents enormous opportunities, but without regulation, it can be weaponised to harm children at an unprecedented scale.
Saturday’s announcement is a victory for child protection, but it must be followed by further action from the UK Government to ensure that the opportunities and risks for children are considered in its plans for the full-scale development and deployment of AI technologies across the economy and the public services that children have no choice but to use.