5Rights responds to the Joint Committee’s report on the Online Safety Bill

Background 

5Rights welcomes the Joint Committee’s report on the Online Safety Bill which was published in December. The Bill has the potential to be a pioneering piece of legislation which would establish new rules to create a safer online world with specific protections for children. 

The Joint Committee was appointed to consider the draft Bill and to recommend improvements before it goes to Parliament in Spring 2022. After 12 weeks of scrutiny, in which the Committee heard from no fewer than 50 witnesses and received over 200 written evidence submissions, the Committee set out its recommendations to Government in a hefty 192-page report. The Government is expected to respond to the report in February, after which a revised Bill will be brought to Parliament in what will certainly be an eventful few months of debate. 

The recommendations made in the report go a long way in addressing some of the concerning gaps in the draft Bill. From addressing cross-platform risks and debunking the idea that small services are safe services, to establishing clear enforceability, the report is extensive in its proposals. 5Rights strongly advocates that these recommendations be accepted as a comprehensive package if the Government is to deliver a Bill that is worthy of its name: taking a ‘pick and mix’ approach, adopting some but not all of the recommendations, will not be enough to deliver the digital world that young people deserve. If implemented collectively, these recommendations will simplify and strengthen the Bill, which will ultimately lead to better protection for children in the online world. Below are six issues 5Rights considers to be vital elements of the comprehensive approach needed in the Bill. 

Protecting children wherever they are online  

As 5Rights stressed in its evidence to the Committee, children need protection wherever they are online, not only in places that are directed at or designed for them. The Committee responded with a recommendation that regulated services within scope of the Bill should include all those that are ‘likely to be accessed by children’, regardless of whether or not they fall under the categories of ‘user-to-user’ services like Facebook and TikTok or ‘search services’ such as Google. This would ensure regulatory alignment with the Age Appropriate Design Code (AADC), to which all services likely to be accessed by children already have to adhere. Without this alignment, there is a danger that the Online Safety Bill will fail to protect children online and undermine the important advances made as a result of the AADC.  

Despite the Government’s promises that the Bill would fulfil the objectives of Part 3 of the Digital Economy Act, which would have required commercial pornography providers to introduce robust age verification, there is no such provision in the draft Bill. A poll of 12-13-year-olds by the NSPCC’s Childline found that one in five had seen pornographic images that had shocked or upset them.[1] The insidious effects of this content on children have far-reaching consequences, from the normalisation of sexual violence and aggression towards women to the questioning of the need for consent. This is corroborated by research commissioned by the BBFC, which found that 29% of children who intentionally viewed pornography believed that consent wasn’t needed if “you knew the person really fancies you”. By comparison, only 5% of children who unintentionally viewed pornography agreed with this view.[2]  

In recommending that all services ‘likely to be accessed by children’ are subject to the legislation, the Committee set out a way in which pornography providers would be brought into scope. 5Rights supports the Committee’s recommendation that a corresponding, binding code of practice be issued by Ofcom. This would ensure that high-risk services like adult sites operate robust and privacy-preserving age assurance, so that children cannot access content that they would not legally be able to access offline.  

The need for Age Assurance 

Without proportionate, mandatory and accountable age assurance, children online are exposed to content, behaviours and pressures that they may not have the developmental capacity to navigate. The minimum age on most social media platforms is 13, yet research shows that 42% of children aged 5 to 12 use social media.[3]  

When asked whether digital service providers could resolve this issue, Frances Haugen told the Committee:   

“Facebook could make a huge dent in this if it wanted to. It does not, because it knows that young users are the future of the platform and that the earlier it gets them, the more likely it will get them hooked.” 

Children do not self-regulate as effectively as adults. They risk having their cognitive development hampered by problematic use and addiction, behaviours exacerbated by common design features and nudge techniques such as infinite scroll and gambling-style reward mechanisms. Children are also able to easily access content, such as violent films or pornography, that they would not be able to access offline. 

We were delighted to see that, in direct alignment with 5Rights’ recommendation, the Joint Committee’s report states that Ofcom must set binding minimum standards for proportionate, privacy-preserving age assurance mechanisms in advance of the Bill, with an acknowledgement of children’s rights to freedom of association, participation and information, as well as their right to protection.  

Protections for children cannot be enforced unless children are identified on these services. Companies have so far turned a blind eye to children on their digital services. Mandated age assurance will close this loophole in the Bill.  

Shifting the focus to Safety by Design  

The risks and harms children encounter online are well documented, ranging from exposure to hateful content and disinformation to unsolicited contact by adult strangers and grooming. Irrefutable and often alarming evidence was presented to the Committee on the extent of the risks children face online and the devastating consequences they can have on a child’s psychological and physical wellbeing. Among teens who reported suicidal thoughts, 13% of British users and 6% of American users traced the desire to kill themselves to Instagram.[4]  

Concerningly, many social media services repeatedly recommend harmful content to children by means of their engagement-based ranking systems. Evidence presented to the Committee from a study that tracked 147 leading anti-vax accounts showed they had gained 10.1 million followers since the end of 2019, with the predominant growth in following happening on Instagram and YouTube.[5] The spread of anti-vax content was mirrored by that of self-harm, pro-suicide and disinformation content. This growth is a direct result of accounts being amplified by recommendation systems configured to increase the amount of time users spend on the service. Simply put, these service providers have designed systems that actively amplify content, irrespective of whether it is hateful and dangerous, in order to boost engagement. Given these systems are designed in, they can and must be designed out.  

The Committee’s recommendation that Ofcom produce a Safety by Design code of practice will help minimise the frictionless sharing of content at scale and the use of fake accounts and bots, and will require the redesign of algorithms that create risk. 

Very often the kind of harmful material that is amplified by algorithms designed to boost engagement is also in direct breach of a service’s terms and conditions. From the lack of action on discriminatory and racist posts to antisemitic content, the evidence presented to the Committee was overwhelming. Facebook, Instagram, TikTok, Twitter and YouTube failed to act on 84% of user reports of clear antisemitic content, in blatant disregard of their own terms and conditions. Despite Facebook’s ‘strong’ policies on antisemitic conspiracies and Holocaust denial, it performed the worst, removing or labelling just 18% of posts.[6] These service providers have the resources and ability to improve their content moderation. What they lack is the will or the requirement to do so.  

5Rights welcomes the Committee’s emphasis on ensuring safety by design by targeting the systems and practices, such as algorithms and recommendation systems, that amplify and expand the reach of harmful content online. 

Beyond harmful content  

The inclusion in the report of 5Rights’ recommendation to focus on harmful ‘content and activity’ recognises that harm can be caused not only by content but by inappropriate commercial pressures, friend recommendations that introduce children to adult strangers, and features such as livestreaming. We know these risks exist already, and they will only become more prevalent as digital technologies evolve beyond the screen to spaces such as the Metaverse. The full range of these risks cannot be captured under the umbrella term of ‘content’ alone, but would be covered by the addition of harmful ‘activity’.  

The Committee received striking evidence of the hypermonetisation of ‘teen apps’, which are three times more likely than general audience apps to support in-app purchases.[7] This, in combination with children’s proven inability to accurately identify targeted advertising, makes children disproportionately vulnerable to economic harms and fraud.  

The recommendation to include both harmful content and harmful activity is a clear strength of the Committee’s report which, if accepted by the Government, will enhance protections for children in the Bill. 

Ofcom setting the bar 

Enforceable minimum standards are the bedrock of effective regulation. Without them, companies will certainly opt for generous interpretations of the Bill, effectively making up their own rules. If companies do not even enforce their own terms and conditions, they cannot be expected to welcome external regulation with open arms. Without minimum standards, these companies will likely lean towards tick-box exercises or heavy-handed parental controls rather than investing in meaningful safety by design solutions.  

With the proliferation of end-to-end encryption, companies have shirked their responsibility to moderate content on their digital services. In over 9,000 instances where police in England and Wales were aware of the platform used in child sexual abuse images and offences, 22% were reported on Instagram and 19% on Facebook. Alarmingly, only 3% were reported on WhatsApp, a platform that has end-to-end encryption.[8] Companies should not be allowed to hide behind design features such as encryption to avoid adequately protecting their users. As the report stipulates, these features should inform the risk profiles that Ofcom designs for companies. 

Minimum standards are necessary to ensure the regulations are enforced effectively. In alignment with 5Rights’ recommendation, the Committee’s report states that Ofcom must develop mandatory and binding statutory codes of practice, including for child online safety and age assurance, with reference to the United Nations Convention on the Rights of the Child and General Comment No. 25 on children’s rights in relation to the digital environment. This would establish a baseline level of protection for children that companies have to meet. The report’s recommendations on transparency, criminal liability for senior managers and external redress systems will help ensure these companies are held accountable and deliver on their duties.  

Better treatment of bereaved parents 

Following the tragic deaths of Molly Russell and Frankie Thomas, 5Rights welcomes the Committee’s call to simplify and humanise the process for bereaved parents seeking access to their children’s data. Ian Russell raised apt concerns about the difficulty of granting access to digital data to coroners and relevant authorities for investigatory purposes. 5Rights agrees with the Committee that no bereaved parent should have to navigate the arduous process that Ian Russell and countless others have had to undergo in order to access their children’s social media.  

What’s next? 

When the Government responds to the Joint Committee’s report in the coming weeks, it should accept the entirety of the Committee’s recommendations, and they should be defended as the Bill makes its way through Parliament. Questions will certainly be raised about the expansion of the scope of services, but the Bill must protect children wherever they are online, from app stores to commercial pornography websites. The risks and harms to children online are not confined to large social media sites or search engines. It would be counterproductive for these regulations to be. 

The efforts to uphold the report’s recommendations must remain steadfast to ensure the Online Safety Bill meaningfully protects children in the digital world. 5Rights looks forward to the Government’s response in February.  

 

[1] “A Year in Review 2014-2015”, NSPCC Cymru/Wales, link, p. 18.

[2] “Young people, Pornography & Age-verification”, British Board of Film Classification, January 2020, link, p. 48.

[3] “Children and parents: media use and attitudes report”, Ofcom, 28 April 2021, link, p. 25.

[4] Georgia Wells, Jeff Horwitz and Deepa Seetharaman, The Wall Street Journal, 14 September 2021, link.

[5] “The Anti-Vaxx Playbook”, Center for Countering Digital Hate, 22 December 2020, link, p. 9.

[6] “Failure to Protect”, Center for Countering Digital Hate, 30 July 2021, link, pp. 8 and 23.

[7] “Risky Business: A New Study Assessing Teen Privacy in Mobile Apps”, BBB National Programs, October 2020, p. 11.

[8] “Facebook must be stopped from creating hiding places from child abuse”, NSPCC, May 2019, link.