Today, 5Rights Foundation published its report Shedding light on AI: A framework for algorithmic oversight.
Artificial intelligence (AI) is central to the digital world. It is not a standalone or fixed technology but plays a part in automated decision-making (ADM) systems and many other data-driven features common across digital services. Automated systems shape the experiences of children and young people in the digital world, both through their direct engagement and through systems they may never interact with directly. Automated decision-making can support children in navigating the online world and the mass of content available, and can help them identify activities and outcomes that are useful or beneficial to them. But there are also many situations in which automated decision-making systems undermine their rights or put them at risk.
New online safety legislation in Europe, and legislative proposals across the world, such as in the UK, Canada and Australia, offer a vision of what a responsible digital world looks like. However, to meet the objectives of online safety legislation in both spirit and letter, regulatory authorities must have not only the tools but also a duty to investigate algorithms on behalf of children, and an agreed standard by which to assess them.
Our report sets out a duty of this kind with a four-step process, also known as the 4 Is of AI oversight. This four-step process is platform neutral and can be applied across different sectors, including but not limited to social media, entertainment, health and education. It can also be applied to different parts or features of a service, including advertising, content recommendation, moderation and reporting.
Following the four steps of AI oversight (the 4 Is) will help digital service providers mitigate the harmful impacts of AI systems on children and give regulators a way to inquire, analyse and assess whether a system conforms to the requisite standards.
You can read about our four-step process here.