5Rights’ Coalition help secure EU ban on nudifying AI but gaps remain
Advocacy efforts led by 5Rights over the past few months helped secure key protections for children in the EU’s Digital Omnibus on AI. But significant setbacks on children’s privacy and toy safety continue to leave children exposed to harms.

Leading a coalition of children’s rights, mental health and families’ organisations, 5Rights has helped maintain transparency requirements for high-risk AI systems and secure an EU ban on nudifying and sexualising AI.
This is the culmination of months of advocacy to ensure that the Digital Omnibus on AI did not weaken key protections for children and addressed the rapidly escalating risks they face from nudifying AI tools.
In a joint letter in February, the Coalition demanded that nudifying AI systems and functionalities be prohibited under the AI Act, and that providers be required to prevent the generation of sexualised or nudified content. The EU co-legislators supported this call by banning nudifying and sexualising AI systems, as well as systems that create child sexual abuse material. The prohibition applies to both providers and deployers, covering systems built for the purpose of creating such content as well as systems that lack effective safeguards to prevent it.
“Nudifying and sexualising AI systems serve no purpose other than to exploit women and children. It was imperative that the regulation made very clear that these tools have no place on the European market.”
Manon Letouche, Head of EU Affairs, 5Rights Foundation
5Rights and the Coalition also urged the European Parliament and Council to reject the European Commission’s proposal to delete the transparency requirements and registration obligations for high-risk AI systems likely to impact children, echoing a broader call from many CSOs to restore such safeguards for all AI systems touching on fundamental rights. The co-legislators responded positively, reducing the delay proposed for the application of transparency obligations on AI-generated content, a key demand from children and their advocates.
The picture is not entirely encouraging, however, as other delays in the implementation of the AI Act will leave children exposed to risks in the sectors where it matters most, such as education, healthcare, justice, or law enforcement.
Moreover, co-legislators’ attempts to narrow the scope of the AI Act, notably by excluding toys from the high-risk classification, were strongly opposed by 5Rights, consumer rights and human rights groups. Toys were ultimately kept in scope, but the narrower definition of what qualifies as a ‘safety component’ may still leave gaps in protecting children from AI-related risks and undermine the recently adopted EU framework on toy safety.
Most worryingly, the Digital Omnibus on AI creates a loophole in children’s rights to privacy. Personal data – including children’s – can now be processed to detect and correct biases in AI systems, including high-risk ones. The Data Omnibus proposal takes this a step further, making AI training a legitimate interest for data processing. This would represent a significant rollback in the measures currently shielding personal data, including children’s, from AI systems.
“Children are entitled to specific protections from data-driven practices that undermine their rights and expose them to risks. They are direct users of AI – twice as much as adults – and are strongly impacted by it. The EU regulation must be unambiguous: children’s personal data cannot be exploited by these systems.”
Manon Letouche, Head of EU Affairs, 5Rights Foundation
Simplification cannot come at the expense of children’s rights online. The EU must stand firm in its commitment to protecting children in the digital environment and ensure that their rights and needs are upheld in the Digital Omnibuses.