5Rights Foundation Escalates Legal Action Against Meta Over AI-Generated Child Sexual Abuse Material on Instagram
After Meta ignored our legal letter concerning the spread of AI-generated sexualised images of children on its platform, Instagram, we have escalated proceedings. Despite repeated warnings since July, Meta has continued to fail to meet its legal obligations or to improve its moderation systems. As such, we have reported the company to the UK media regulator, Ofcom, and called on the Information Commissioner’s Office to take action.
An investigation by a specialist police unit has found that Meta continues to fail to detect and remove child sexual abuse material (CSAM). This failure to effectively police its own platform violates both the law and Meta’s own Community Guidelines.
Efforts to raise these troubling findings with Meta have been obstructed by its inadequate reporting channels. Meta’s public abuse-reporting addresses are no longer monitored, and whilst Instagram does offer in-app reporting functions, reports drawing attention to these CSAM-promoting accounts have received no response.
“It is appalling that a company of Meta’s size and resources continues to fail in its duty to protect children. AI-generated child sexual abuse material is readily available now through Instagram. Meta has the tools and means necessary to address this but has chosen not to act effectively.”
Baroness Beeban Kidron, Chair and Founder of 5Rights Foundation
The app therefore continues not only to host CSAM but, through its algorithmic recommendation system, to openly promote accounts that advertise AI-generated CSAM. This goes beyond an ethical failing.