
5Rights Foundation Escalates Legal Action Against Meta Over AI-Generated Child Sexual Abuse Material on Instagram

After Meta ignored our legal letter concerning the spread of AI-generated sexualised images of children on its platform, Instagram, we have escalated proceedings. Despite repeated warnings since July, Meta has continued to fail to meet its legal obligations or improve its moderation systems. As such, we have reported the company to the UK media regulator, Ofcom, and called on the Information Commissioner’s Office to take action.


An investigation by a specialist police unit has uncovered that Meta continues to fail to detect and remove child sexual abuse material (CSAM). This failure to effectively police its own platform is in violation of the law and of Meta’s own Community Guidelines.

Efforts to raise these troubling findings with Meta have been obstructed by its inadequate reporting channels. Meta’s public abuse-reporting addresses are no longer monitored, and while Instagram does offer in-app reporting tools, attempts to flag these CSAM-promoting accounts have received no response.

The app therefore continues not only to host CSAM but also, through its algorithmic recommendation system, to openly promote accounts that advertise AI-generated CSAM. This goes beyond an ethical failing.