UK’s AI Opportunities Action Plan overlooks risks and potential for children

This week, the UK Government published its response to the long-awaited AI Opportunities Action Plan, committing to implement all 50 recommendations and to drive rapid, full-scale adoption of AI products and systems across the UK economy. The recommendations include details on piloting AI technologies in public services, including education. However, as with previous plans, children’s rights, wellbeing and safety have not been considered.

With the potential to support their education, transform the services they use and offer them innovative experiences to play and create, AI technologies can greatly benefit children. 

But while the opportunities are palpable, so are the risks. Research shows that emerging frameworks for AI ethics are ignoring children, failing to take account of their best interests, their different developmental stages, backgrounds and characteristics. Every child using an AI-enabled product or service should expect to be treated fairly. Unfortunately, there is already evidence that this is not happening, with some children left feeling insecure and confused about the way they look because they do not fit the “norm”.

Despite this, children – who are already using these technologies at home and are commonly subject to them in school – are still not being considered in conversations about the use of AI.

This is deeply concerning, especially in light of the Plan’s recommendation to “move fast and learn things” when piloting these technologies in schools. With real children as the test subjects in this pilot programme, we are particularly concerned that policymakers are endorsing the use of this technology for the most vulnerable children, those with special educational needs, despite its unproven efficacy and safety.

Across the globe, regulatory efforts such as the EU’s AI Act are beginning to bring scrutiny to previously unchecked AI systems. Against the forces of a fast-moving sector and the unprecedented pace at which these technologies are developing, the protection of children’s safety, rights and wellbeing must be prioritised.

5Rights will soon launch a Code of Conduct for AI, providing a standard for the design, deployment and governance of AI systems that meets children’s rights, needs and wellbeing.

The opportunity now is for the UK Government to introduce bold regulation that protects children and their data, setting a precedent for current and future AI development to serve children’s best interests.