5Rights calls out Meta for ignoring AI-generated child sexual abuse material on Instagram
Children’s safety charity 5Rights has sent a legal letter to Meta accusing the company of ignoring the spread of AI-generated sexualised images of children on its platform, Instagram.
The letter addresses the findings of a report by OCCIT, a police unit, which found evidence that Instagram is hosting a multitude of public accounts dedicated to sharing real and AI-generated sexualised images of children. These accounts are followed by hundreds of thousands of users and include images of children stripping or wearing sexualised superhero costumes. They are also being recommended to other users through in-app features, and some provide links and information for purchasing child sexual abuse material on linked services.
In the most serious cases, these links lead to groups of adult men who are controlling children and directing them to produce CSAM for sale. The accounts were first flagged to Meta in January, but nothing was done until police made a formal inquiry several months later. Shortly afterwards, the same content was reposted under slightly different account names.
The letter from Schillings, the law firm acting on behalf of 5Rights, contends that Meta has failed to proactively identify and remove content that violates both the law and Instagram’s own Community Guidelines.
In doing so, Meta is not only putting children and adults at risk of encountering illegal and distressing material on the platform, but is also facilitating the sharing of, and profiting from, illegal child sexual abuse material.
“AI Child Sexual Abuse involves images of real children; it acts as a gateway drug for child sexual abuse, and it normalises abuse that has long been a stain on the sector. Once again, Meta is failing children. It is clear that the consistent failure to moderate their products requires urgent and radical design changes.”
Baroness Beeban Kidron, Founder and Chair of 5Rights
5Rights is urging Meta to address these risks. The letter calls on the company to immediately remove the content, take robust action against users promoting it, and publicly share its plan for preventing similar episodes from recurring.
With millions of children using Instagram daily, it is imperative that Meta keeps children safe on its platforms by tackling this widespread issue.