AI systems that put children at risk to be banned under EU’s AI Act

On 21 April, the European Commission unveiled its long-awaited proposal for a Regulation on Artificial Intelligence. Responding to the March adoption of UNCRC General Comment No. 25 on children’s rights in relation to the digital environment, the draft AI Act – aka the AIA – sets the scene for a radical overhaul of digital regulation, putting children’s rights at the heart of a broader EU project to design a digital world that people can trust.

It is important, states the bill, to highlight that children have specific rights as enshrined in Article 24 of the EU Charter and in the United Nations Convention on the Rights of the Child (further elaborated in the UNCRC General Comment No. 25 as regards the digital environment), both of which require consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their well-being.

Ban on AI that manipulates children or exploits their vulnerabilities in a manner likely to cause harm

The draft law thus prohibits “practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness or exploit vulnerabilities of specific vulnerable groups such as children […] in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm.”

This could apply to a variety of persuasive design techniques that are ubiquitous in AI systems used by children and are deployed to maximise the collection of personal data, to the detriment of children’s social, mental and physical development, as well as their personal safety.

As explained in our Disrupted Childhood: The Cost of Persuasive Design report, automated technology both leverages and reinforces human instinct in order to trigger habits and behaviours. Variously called ‘reward loops’, ‘captology’, ‘sticky’, ‘dwell features’ and ‘extended use strategies’, persuasive design strategies are deliberately baked into digital services and products in order to capture and hold users’ attention and imprint habitual behaviours. The costs for children are palpable. They include personal anxiety, social aggression, denuded relationships, sleep deprivation and impacts on education, health and wellbeing.

Many of these features also promote harmful content to children and expose them to risks, including sexual exploitation and abuse.

Due diligence provisions for high-risk AI systems likely to be accessed by or to have an impact on children

The AIA also sets out due diligence requirements for AI systems deemed to be “high risk”. These include AI used in toys and in educational and vocational training – whether for access or assessment – as well as in social welfare systems. All biometric identification AI is considered high risk, as is AI used in law enforcement, justice and migration systems.

All such AI systems must be approved before they can be used in the EU. They must comply with risk assessment and management measures, and give specific consideration to whether the system is “likely to be accessed by or have an impact on children.” The identified risks should be eliminated or reduced “as far as possible through adequate design and development”.

In a number of critical areas, these provisions should prevent the use of AI systems that have not been designed around child-centred principles.

These include AI systems that restrict children’s freedom through surveillance in public spaces and in school settings (e.g. for monitoring exams). They also apply to AI-powered online tracking, monitoring and filtering software on children’s educational devices, which can restrict their freedoms (e.g. of expression), breach their right to privacy and perpetuate discrimination.

They would also prevent cases of discrimination and exclusion through automated bias, such as was recently demonstrated in the Netherlands, where an AI-powered decision-making system prompted the wrongful withdrawal and forced repayment of child benefits by 26,000 families, disproportionately impacting children from ethnic minority backgrounds, and in the UK, where AI used for exam grading discriminated against children from poorer backgrounds.

What next?

The AIA has some way to go before being signed into law. Along the way, the provisions that protect and promote children’s rights must be strengthened to ensure that children’s likely use of systems is reflected in input data sets, that their needs and interests are fully taken into account in innovation, that they are consulted and listened to, and that no AI system likely to cause them harm falls through the regulatory cracks.

The proposal nonetheless shows great promise. With the AIA, as with the Digital Services Act proposal, which requires due diligence from online platforms as regards children’s rights, the EU is setting out a regulatory framework to transform the digital sphere into a place where children can be safe and prosper, in Europe and beyond.