AI regulation must keep up with protecting children  

AI is reshaping our daily lives at an unprecedented pace, yet its impact on children remains largely overlooked. While companies race to develop and deploy AI systems, the specific needs and vulnerabilities of young users are often neglected. Encouragingly, legislative frameworks aimed at safeguarding children from increasingly powerful and unchecked AI systems are beginning to take shape.

[Image: The entrance to the Internet Governance Forum 2024 in Riyadh, with a crowd of participants entering the ornately decorated venue.]

But how can we ensure that children’s rights are central to the conversation about AI’s future? At this year’s UN Internet Governance Forum (IGF), 5Rights hosted a panel on the practical implementation of AI regulation to address emerging risks to children’s rights.

Children around the world are increasingly interacting with AI, often without realising it. Research shows that in the UK, children are twice as likely as adults to use generative AI. Nidhi, a 5Rights Youth Ambassador from India, stressed the growing ubiquity of AI in the products and services children use in their daily lives. This poses serious risks to their privacy, mental health, and education, as children interact with AI systems developed without their needs or protections in mind.

Empirical evidence supports these concerns. Dr. Jun Zhao from the University of Oxford pointed out that while AI offers exciting opportunities, it is not being designed with children’s privacy and best interests in mind. For instance, AI systems can collect a wide range of sensitive information, including behavioural data, often without children’s express consent. Companies claim that individual children are unlikely to suffer negative consequences from this data collection, but the stark reality is that it violates children’s right to privacy, with profound and far-reaching consequences that demand urgent attention and accountability.

Recognising these risks, regulatory efforts like the EU’s AI Act have introduced critical provisions for children. The AI Act is a necessary step forward, but its significance also lies in how it complements other key legislation, namely the Digital Services Act; together, as MEP Brando Benifei emphasised, they are driving the EU to build a “proper framework of protection”.

While regulatory efforts and technical standards across the world offer first steps, they often provide limited guidance and lag behind the rapid development of AI systems. Dr. Ansgar Koene further pointed out that companies must do more to consider the real-world implications of their AI systems for children and their rights.

To bridge this gap, Baroness Beeban Kidron will soon launch a Code of Conduct for AI. The Code will propose actionable measures for designing, deploying, and governing AI systems that respect children’s rights and needs. It aims to complement existing regulatory initiatives and to provide a standard for jurisdictions considering new legislation or regulation.

As Baroness Kidron said, the AI industry has “privatised the wealth and outsourced the societal cost onto the shoulders of children”. Without coherent global guidance and strong governmental action, this burden will become unbearable.  

The Code of Conduct for AI will be unveiled in early 2025.