The consultation of the Online Harms White Paper closed back in July 2019. Last week the Government published its initial consultation response, promising a fuller response this spring.
So, what did the interim response reveal about the potential direction of travel for Online Harms? And what does this mean for the children and young people eagerly awaiting this legislation?
1. Companies will need to say what they do and do what they say
The interim response makes clear that technology companies will be required to make their terms and community standards clear and accessible under the new legislation. But clarity is just one ingredient for a safer online environment.
As 5Rights Foundation has long advocated, there is little use in a service stating what it does and does not allow if those rules are not enforced. The Government has taken note. Here’s what we proposed in our report, Towards a Safer Internet Strategy, published in January 2019:
“A regulatory backstop for community rules would offer a powerful way of changing the culture of the online world. This would allow companies the freedom to set their own rules as they wish, but routine failure to adhere to their own published rules (including content or conduct standards, terms and conditions, age restrictions and privacy notices) would be subject to enforcement notices and penalties.”
And here’s how it looked in the Government announcement last week:
“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently… Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently.”
The need for this is clear. Just this week, it was reported that pro-eating disorder content continues to be recommended to users on TikTok, despite being banned under the site’s community guidelines. The Government’s response is made all the more meaningful by the specific attention it pays to children and young people:
“All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content.”
This is a significant step, defying the norms of a digital world in which children and young people typically remain an afterthought in policy, legislation, and service design. But making it onto the agenda is only the first step: coherent guidance and regulation are essential.
Existing attempts at self-regulation by digital service providers make the need for clarity plain. Typically, platforms act only after harm has occurred, and their solutions are piecemeal changes to user policies that are insufficiently enforced. As tech giants struggle to tackle inappropriate content and behaviour online, the familiar pattern of noise without action continues.
2. The wait for legislation continues…
July 2020 will mark a year since the Online Harms consultation closed and well over two years since the green paper that preceded it. This ‘initial’ response suggests further delay, despite the desperate need for action. Even the promise to publish a full response this spring has been met with scepticism.
This scepticism (and frustration) is understandable given there is still a lot to be fleshed out. What harms will be covered by the duty of care? What powers will Ofcom have to enforce this? Will specific resource and expertise be invested in focusing on children and young people’s digital lives? All questions to which we might have expected answers by now.
3. Interim voluntary codes on Child Sexual Exploitation and Abuse (CSEA)
Prior to formal legislation, the Government says it will publish an interim code of practice relating to CSEA content and activity in ‘the coming months’ (another thing we have to wait for). The focus on CSEA is welcome, but as well as being ‘interim’, the codes will be voluntary. The severity of CSEA and the increasing scale of the problem demand a far tougher and more urgent approach than this.
4. Ofcom “minded” to be the regulator
One encouraging indication that the Government is looking to move things forward is its announcement that Ofcom could be the regulator. (“Could be” because, in keeping with its reluctance to make any firm decisions, the Government teasingly said that it was ‘minded’ to appoint Ofcom). This is despite the fact that consultation responses strongly favoured (62%) a new public body to act as regulator, rather than an existing one. In any case, Ofcom’s appointment looks likely, and its “proven track record of experience, expertise and credibility” does make it a seemingly natural fit for the role.
The response also gives enough detail to allay fears that Ofcom/the Government will be ‘censoring the internet’. Placing the responsibility for the risk assessment and moderation of user generated content with the technology companies themselves moves Ofcom, and consequently Government, away from regulating individual pieces of content, and towards regulating system design and processes.
There are however some questions that remain unanswered in the interim response, not least the scope of services that will need to comply with the new regulatory framework and Ofcom’s resource and powers to enforce this.
5. What’s happening with age verification?
It is not surprising (or unwelcome) to see age verification on the list of suggested methods for protecting children online, following the Government’s abandonment of part 3 of the Digital Economy Act in October 2019. But here, as elsewhere in the response, it is far from clear how high up the agenda this sits, or whether age verification tools will be a requirement for services in scope.
The initial response states that ‘(c)ompanies would be able to use a number of methods to protect children, including possibly - but not necessarily - age assurance tools, which we expect will continue to play a key role in keeping children safe online.’ Could we ‘possibly - but not necessarily’ read this as a nod to a lack of commitment to age verification online? As with everything else, we will have to wait and see.
The digital environment was conceived as an environment for adult users. Not even its inventors imagined it might one day be a place where childhood would be spent, and they made no design concessions for child users. Each day, 170,000 children and young people go online for the first time. The technological and regulatory landscape is changing, and greater pace and urgency on Online Harms are needed if the UK is genuinely committed to being ‘the safest place in the world to be online.’
Balancing regulation alongside innovation, privacy alongside safety, and content moderation alongside freedom of expression are among some of the most difficult questions facing the digital world. But these questions are not unanswerable or impossible to resolve through technological innovation. Particularly for children and young people, rights to privacy and protection are unnecessarily positioned as an either/or proposition. The recent debates surrounding end-to-end encryption provide a clear example of this.
In this interim response, the Government appears keen, if cautious, to strike a balance that keeps business happy and users safe. Until now, children, young people and other vulnerable users have come second to the economic imperatives of technology companies. Online Harms promises to redress this balance, but whether it does so with the best interests of children and young people at its heart will only become clear as more detailed policies are announced.