5Rights launches new research project - watch the webinar

Earlier this week, we launched our Pathways research at a fantastic online event with comments from The Children’s Commissioner for England, Dame Rachel de Souza, and Rt Hon Maria Miller MP. Following the presentation from Ruby Wootton and Damon De Ionno at Revealing Reality (the organisation which conducted the research for us), the panel took questions from the audience. Sadly, we didn’t get to answer all of them live in the time available, so we promised to answer as many as we could here.

Before jumping into those questions, we encourage you to take a look at the full research and the detailed Avatar Annex to understand the methodology fully.

A recording of the whole event is also available below:

Note: Revealing Reality has realised that they misspoke during the presentation, specifically regarding the auto-fill of ‘proana’ to ‘proanaa’. Ruby stated that the search term ‘proanaa’ was auto-generated after the researcher typed ‘proana’. In fact, the auto-generated link to content via the hashtag appeared after the researcher typed the third ‘a’, not before. We apologise for any confusion.

The research methodology

For safeguarding reasons, I understand that none of the avatars followed other children or friends of the children in the study. Do you think this affected the results in any way? 

As you say, it’s not ethically possible to do this. Ethics aside, it is very difficult to know for sure which users on these platforms are children and which are not, as not all of them are honest about their age or who they claim to be. We did create the avatars using follow lists from the real children we researched, and we have no reason to assume that these lists were atypical. If this is the content children are receiving, it may well be the content they are sharing.

These avatars were registered as children - not as users with an unknown age. So it’s not really an issue about age verification - it is about how platforms treat those they already know to be children?

That’s correct. 

Were all your avatars created using profiles under the age of 18? Or were any created based on ‘adult’ birthdates?  

The Avatar Annex to the report has full details of the methodology. We did create ‘adult’ avatars for control purposes. The only notable difference between the content served to adult and child avatars was the paid-for advertising they received.

From a research perspective, are there any legal considerations around using avatars on social media apps (i.e. in the context of ‘fake accounts’)? How did you overcome these?  

The creation of ‘fake accounts’ is against the guidelines for most of these services; however, it is clear that many accounts are ‘fake’ and this does not appear to concern the service owners. They allow these accounts to message adults and minors and presumably sell advertising targeting them. Each of the avatars was used by a real person to browse the content displayed, so while they were using a false name and age, it is difficult to see what law has been contravened. And of course, many users are using false names and ages. It’s also worth noting that none of the services have raised concerns with us, in private or public, about the use of avatars.

The role of education and parents 

To what extent will the Relationships and sex education (RSE) and health education curriculum help to address some of the issues? What can/should be done to support children by raising awareness of these issues from an early age? 

Digital literacy and education certainly have a role to play, but they should give children the ability to understand the common uses and abuses of technology and the risks created by the design and operation of platforms and services, not only the harms created by ‘bad actors’. While we strongly support the call for more digital literacy, it will never be sufficient to rely on online safety education to keep young people safe. It is inappropriate to try to educate young people to navigate a world which systematically asks them to act beyond their maturity and puts them at risk, and it is dangerous to make them responsible for aspects of design over which they have no control.

A new RSHE curriculum was introduced in September 2020 that schools are now required to follow. The guidance does refer to online relationships and internet safety, but digital literacy makes up just 1 core module out of 8 for both primary and secondary school under the umbrella of ‘Physical health and mental wellbeing’, and 1 core module out of 5 under ‘Relationships education’. This does not constitute meaningful provision and is out of kilter with the impact of technology on young people’s lives. Empowering children to be responsible actors in the digital world is fundamental, but neither they, nor their parents, should be made responsible for mitigating risks that need to be addressed at the level of system design.

The design of the services and the role of regulation

Do the platforms do anything to protect known child users that they don’t do to protect adults accessing the same content?  

Ahead of the Age Appropriate Design Code coming into force this September, platforms and services have started to provide additional protections for users registered as children. Some are disabling or restricting riskier features, such as live streaming and direct messaging; others are introducing ‘sensitive’ content filtering or adapting their default settings to provide greater levels of privacy and safety by design. However, children are still being exposed to the kind of material we see in the Pathways report. The efficacy of these provisions is also entirely dependent on services knowing the true age of their users with a level of certainty that allows them to put the appropriate protections in place.

What category of instruments is most likely to be effective as guidance/controls on what is permissible in the design of online services, and how would we know?

The Online Safety Bill does include requirements for online services to use proportionate processes to protect children from harmful content, but Ofcom must be given the powers and resources it needs to have proper oversight of how services design their systems. 

Why can’t the programmers block the inappropriate content? 

There are no minimum standards for training moderators, and they can often be overwhelmed by the scale of the inappropriate content they are expected to identify and take down. Some platforms make use of automated moderation, but over-relying on these systems risks capturing only the most clear-cut cases. Minimum standards for training and protecting the welfare of human moderators would help. 

Why was the advertising shown to children all age-appropriate? Is that because advertising regulation is currently more advanced than that for online harms? 

The Online Safety Bill will place more responsibility on platforms to address harmful content on their services, but much of the content directed to children in the Pathways report already contravenes the platforms’ own terms of service, and so this comes back to the need for minimum standards for moderation and for addressing how content is spread and targeted to users by platforms’ algorithms.

Gambling websites once had hundreds of staff dedicated to generating business and very small teams dealing with “responsible gambling”. They have reformed by ensuring that all the marketeers were treated as part of the safer gambling team. This is eerily similar; the algorithm writers need to have child safety in their job objectives.

Interesting example! We agree that a culture of online safety for children should be embedded throughout these services, with all teams taking responsibility for spotting and addressing risks to children. 

Thank you for the presentation. What an impact… What can we do tomorrow? Changing the revenue model?

The digital world is entirely man- and woman-made, engineered and, for the most part, privately owned. We heard time and again in this research that designers could design for safety, but their companies require them to design to maximise time spent, maximise reach and maximise activity. There needs to be a sea change across the industry, led from the top and mandated through better regulation, that results in children being taken out of the business model. Better regulation must require greater transparency and accountability from companies, and named company directors must be liable for platforms and services that put children at risk.

There is no silver bullet for changing systems. What needs to change - ethical design charters, informed adults (parents, social workers, teachers) - and what else can we do to change this?

There is always a role for parents, guardians and teachers in preparing their children for the risks of the online world, but that can’t be the only line of defence. Online services likely to impact on children should be required to meet minimum standards for protecting children, standards which should be mandatory and enforceable by Ofcom as the independent regulator.