Inquiry finds UK Government must regulate GenAI and close online safety loopholes

The UK Parliament’s Science, Innovation and Technology Committee urges the Government to regulate generative AI tools and close critical gaps in online safety regulation, echoing calls from 5Rights and civil society.

The UK Parliament’s Science, Innovation and Technology Select Committee has called for the Government to regulate the development and use of generative artificial intelligence (GenAI) products and to close loopholes in the Online Safety Act, following calls from civil society groups including 5Rights.

The Committee’s recommendations come after its inquiry into the role of social media in the amplification of misinformation during the July 2024 riots, which escalated from online content to real-world harm. Parliamentarians questioned whether the Online Safety Act, as it currently stands, is adequate to address such risks, especially when GenAI and digital advertising can accelerate the spread of misinformation.

5Rights submitted evidence to the inquiry and welcomes the publication of the Committee’s report and its clear recognition of the urgent need to regulate GenAI.

In our submission to the inquiry, we highlighted the widespread availability of these tools, the low effort and cost required to produce harmful GenAI content, and the responsibility of social media companies and search engine operators for hosting and amplifying it. We strongly welcome the Committee’s conclusion that tech companies’ business models “incentivise the spread of content that is damaging and dangerous”.

The Committee has recommended that the Government pass legislation to protect all citizens, particularly children, from the risks posed by GenAI. It raised concerns about the “serious shortfall in transparency and oversight” of GenAI services and products and underscored the need for the Government to “confirm that services are required to act on all risks identified in risk assessments, regardless of whether they are included in Ofcom’s Codes of Practice”, joining calls from the Children’s Coalition for Online Safety.

This call for action comes as new figures published last week by the Internet Watch Foundation (IWF) revealed that 1,286 AI-generated child sexual abuse videos were discovered in the first half of this year, compared to just two during the same period in 2023. The IWF has warned that without decisive intervention, “full, feature length films” of child sexual abuse may be inevitable.

As the risks posed by GenAI grow by the day, the Government has the opportunity and the responsibility to lead by regulating the sector and closing the loopholes in the Online Safety Act, so that it can truly deliver on its promises to children. Our Children & AI Design Code offers a practical, rights-based model for identifying, understanding and mitigating risks to children throughout the AI lifecycle. We urge the Government to adopt the Code’s pragmatic approach to ensure AI systems are designed, developed and deployed with children’s rights and needs at their core.