5Rights calls out Meta for ignoring AI-generated child sexual abuse material on Instagram

  • Child safety charity 5Rights has sent a legal letter to Meta detailing how the company has ignored reports of illegal child sexual abuse material (CSAM) on its platform, Instagram.
  • 5Rights has reviewed reports produced by the police unit OCCIT (Online CSEA Covert Intelligence Team), which found evidence of Instagram accounts with as many as 150,000 followers dedicated to sharing both real and AI-generated child sexual abuse material.
  • The investigation also found evidence of accounts sharing links to third-party websites which allow users to create their own CSAM.
  • It is a criminal offence in England and Wales to distribute or show to another an indecent pseudo-photograph of a child. This includes AI-generated imagery.
  • 5Rights is urging Meta to remove the harmful content and to launch an investigation into its failure to routinely enforce its own Terms and Conditions.

(London, 25th July 2024) - Children’s safety charity 5Rights has sent a legal letter to Meta accusing the company of ignoring the spread of AI-generated sexualised images of children on its platform, Instagram.

The letter addresses the findings of a report by the police unit OCCIT, which found evidence that Instagram is hosting numerous public accounts dedicated to sharing real and AI-generated sexualised images of children. These accounts are followed by hundreds of thousands of users, and they include images of children stripping or wearing sexualised superhero costumes. The accounts are also being recommended to other users through in-app features, and some provide links and information for purchasing child sexual abuse material on linked services.

In the most serious cases, these links lead to groups of adult men who are controlling children and directing them to produce CSAM content for sale.

The accounts were first flagged to Meta in January, but nothing was done until police made a formal inquiry several months later. Shortly afterwards, the same content was reposted under slightly different account names.

The letter from Schillings, the law firm acting on behalf of 5Rights, contends that Meta has failed to proactively identify and remove content that violates both the law and Instagram’s own Community Guidelines.

In doing so, Meta is not only putting children and adults at risk of encountering illegal and distressing material on the platform, but also facilitating the sharing of, and profiting from, illegal child sexual abuse material.

“AI child sexual abuse involves images of real children, it acts as a gateway drug for child sexual abuse, and it normalises abuse that has long been a stain on the sector. Once again, Meta is failing children. It is clear that the consistent failure to moderate their products requires urgent and radical design changes.”

Baroness Beeban Kidron, Chair of 5Rights Foundation

“Nothing is more important than the privacy and safety of children. At Schillings we’re working tirelessly with 5Rights and the police to tighten the law and make sure platforms like Meta meet their responsibilities.”

Jenny Afia, Partner at Schillings

“Mainstream social media is enabling CSAM offenders to advertise links to child abuse content. This adds a legitimacy to this crime type, removes the barriers of entry, and increases the risk to real children.”

OCCIT Spokesperson

5Rights is urging Meta to address these risks. The letter calls on the company to immediately remove the content, act robustly against users promoting it, and publicly share its action plan for preventing similar failures from recurring.

With millions of children using Instagram daily, it is imperative that Meta keeps children safe on its platforms by tackling this widespread issue.
