5Rights calls out Meta for ignoring AI-generated child sexual abuse material on Instagram

Children’s safety charity 5Rights has sent a legal letter to Meta accusing the company of ignoring the spread of AI-generated sexualised images of children on its platform, Instagram.

[Image: a person holding a smartphone displaying the Instagram logo]

The letter addresses the findings of a report by OCCIT, a police unit, which found evidence that Instagram is hosting a multitude of public accounts dedicated to sharing real and AI-generated sexualised images of children. These accounts are followed by hundreds of thousands of users and include images of children stripping or wearing sexualised superhero costumes. They are also being recommended to other users through in-app features, and some provide links and information for purchasing child sexual abuse material on linked services.

In the most serious cases, these links lead to groups of adult men who are controlling children and directing them to produce CSAM for sale. The accounts were first flagged to Meta in January, but nothing was done until police made a formal inquiry several months later. Shortly after the accounts were taken down, the same content was reposted under slightly different names.

The letter from Shillings, the law firm acting on behalf of 5Rights, contends that Meta has failed to proactively identify and remove content that is in violation of both the law and Instagram’s own Community Guidelines.

In doing so, Meta is not only putting children and adults at risk of encountering illegal and distressing material on the platform, but also facilitating the sharing of, and profiting from, illegal child sexual abuse material.

5Rights is urging Meta to address these risks. The letter calls on the company to immediately remove the content, act robustly against users promoting it, and publicly share its action plan to prevent similar episodes from recurring.

With millions of children using Instagram daily, it is imperative that Meta keep children safe on its platforms by tackling this widespread issue.