Australia: Children’s online safety measures must address systemic harms 

On Tuesday 10 September, the Australian Government announced plans to set a minimum age limit for children using social media. The social media ‘ban’, which will require tech companies to restrict access to services for users under a yet-to-be-specified age using age assurance technologies, is a response to growing demands for robust and immediate action to stop widespread tech practices that are having a significant and growing impact on children and young people’s safety, health and wellbeing.

In parallel, the Government also announced its intention to bring forward a Children’s Privacy Code based on the UK’s Age Appropriate Design Code (AADC). Already exported globally and now in its third year, the AADC has led to a swathe of design changes by tech companies that have strengthened protections for children’s data privacy, thereby enhancing their safety. 

“We applaud the commitment and initiative of the Government of Australia and the pioneering eSafety Commissioner to trial new measures to interrupt and reverse the harms children are facing online. Children should not be in spaces that are not safe for them, but the aim must be to make the digital environment safe so children can participate. We trust that the dual approach being taken in Australia will drive rapid change, so children are recognised online and digital services are redesigned for them to thrive,” said 5Rights Executive Director Leanda Barrington-Leach. 

Speaking to media in the wake of the social media ban announcement, 5Rights Chair and Founder Baroness Beeban Kidron considered the questions the Government must now address – starting with the age to set for access, how to define “social media” and the technical feasibility of enforcement. 

“A complete ban on access to social media, as is under consideration, raises multiple issues. Firstly, it puts pressure on parents to police it or to collude with their children to circumvent it. Secondly, it raises the difficult question of what age to determine children should be allowed to access social media – 15-, 16- and 17-year-olds are all developmentally vulnerable, and children younger than that have a right to access information and opportunities. Australia will also have to decide which services should be categorised as social media, and consider the unintended consequence that this move could have of pushing kids into using virtual private networks (VPNs) or into even less savoury parts of the digital world. The wide implementation of age assurance technology will also need to be tightly managed,” Baroness Kidron noted. 

She also called for a more holistic approach, suggesting the Government consider banning harmful design rather than focusing on age-gating social media. 

“For instance, as well as trialling the use of age assurance, the Australian Government should also trial ‘banning’ algorithmic recommendation of toxic material like self-harm and pro-suicide content to children and the use of compulsive design strategies – such as endless scroll, notifications and popularity metrics – both of which are tech sector norms and treat children’s mental and physical wellbeing as collateral damage. If it were to do this, Australia could, overnight, put itself in a position to lead on a more comprehensive approach to safety by design by widescale trialling of banning harmful design, rather than simply introducing age assurance. Dealing with the root cause would put Australia at the cutting edge of global online child safety,” said Baroness Kidron. 

The news comes in the context of an ongoing parliamentary inquiry into the negative impacts of social media, as well as a high-profile media campaign led by concerned parents.