AI systems that exploit the vulnerabilities of children are now illegal in the EU
On 2 February, a critical provision of EU law entered into force: Article 5 of the EU’s Artificial Intelligence Act prohibits AI systems that exploit vulnerabilities due to age. To ensure that AI systems which illegally manipulate developing minds are rooted out of the EU market, the European Commission published Guidelines that integrate a number of key provisions proposed by 5Rights.
In particular, the Guidelines:
- Explicitly recognise the risks posed by addictive and exploitative AI design features;
- Flag AI systems that mimic human interaction, such as chatbots and virtual assistants, as particularly harmful to children, given their potential to foster emotional dependencies and cause real-world harm;
- Recognise how AI-driven harm may accumulate over time, specifically acknowledging that AI risks acceptable for adults may be unacceptable for children – a necessary shift from the industry’s one-size-fits-all approach;
- Clarify that the ban on AI exploiting vulnerabilities due to age applies to all under-18s and must be interpreted in light of the UN Convention on the Rights of the Child (UNCRC).
The fight continues
While we celebrate this milestone, significant gaps remain before AI systems cater to children’s rights and needs by design and default. The AI Act does not require companies to design AI with children’s best interests in mind.
The Act also fails to recognise children’s diversity and intersecting vulnerabilities: age, cultural backgrounds, disability, evolving capacities, and socioeconomic status all shape how children experience AI.
Furthermore, the AI Act primarily considers children as direct users of AI systems, neglecting how AI-driven decisions in school, law enforcement, and healthcare can indirectly impact them. Without addressing these broader implications, key risks to children’s rights could remain overlooked.
We are particularly concerned about the exceptions for “systems that infer emotions in workplaces and schools where there’s a medical or safety justification”, like systems designed for therapeutic use. These systems can pose serious risks in educational environments, opening the door to intrusive surveillance and misuse.
Over the coming months and years, 5Rights will lead the charge for the AI Act to deliver for children: we will develop and publish practical tools for implementation and monitoring, and raise complaints in cases of non-compliance. We will also continue working to plug the remaining gaps in AI regulation, so that all children can benefit from AI systems designed to foster a better future for all.