5Rights has finalised its first position on the European Commission’s proposal for a Regulation on Artificial Intelligence.
The proposal includes an outright ban on AI systems that exploit the vulnerabilities of children. The burden of proof, however, remains on the victims, except for systems pre-determined to be “high-risk”, where children must be given special consideration.
In this paper, we argue for ex-ante risk assessments to determine which AI systems should be covered by the certification procedures for high-risk AI. We also argue that all AI systems likely to be accessed by or to impact children should be considered high-risk by default.
To make the ban on AI that exploits children operational, we call for the precautionary principle to apply and for the burden of proof to be shifted to the provider or operator of the AI system. We then set out why the Regulator must have a duty to investigate AI systems, and the four-step process that this oversight should involve.