Beyond content: Ofcom’s path to being an effective online harms regulator

In a comment piece written for the Times last week, Ofcom’s Kevin Bakhurst reacted to the Government’s announcement that it was “minded” to appoint Ofcom to regulate online harms in the UK. The piece, thoughtfully written, was largely intended to allay concerns about the impact of internet regulation on freedom of speech. As Bakhurst noted, “some internet users feared Ofcom would soon be ‘policing’ the web, shutting down sites and censoring content.”

It is certainly encouraging that Ofcom is alive to the controversies that its new responsibilities will bring, and not surprising given its credentials as an effective and experienced regulator. There are indications, however, that Ofcom will need to adapt its traditional approach to regulation if it is to be fit for the online world.

In setting out the principles “central to our work as the UK’s broadcasting watchdog”, Bakhurst states:

“we never censor content. Our powers to sanction broadcasters who breach our rules apply only after a programme has aired. In fact, the clear, fair and respected code that we enforce on TV and radio acts as a strong deterrent against poor behaviour.”

Though well-intentioned, this framing raises a few issues. First, it is not clear that free speech advocates would accept this definition of censorship. In the context of online platforms, removing content after it has been posted seems no less censorious than preventing it from being posted in the first place.

Second, TV and radio stations tend to be directly responsible for the content they air. The same cannot be said of the platforms in scope of the online harms legislation, which will cover any service “with the functionality to enable sharing of user-generated content, or user interactions.” Sanctioning internet companies for the mere appearance of user-generated content on their platforms would be a lot less reasonable than sanctioning traditional broadcasters for content that they have had a hand in producing or commissioning.

Third, and as a result of the above, online harms regulation should focus less on the post-hoc removal of individual pieces of content and more on the systems and processes that companies have in place to mitigate risk before harms arise. This is important because for regulation to be reasonable and proportionate, it must focus on the behaviour of companies, not the behaviour of users. As Dr Mathias Vermeulen noted in a paper published by the Association for Progressive Communications last year:

“This approach steers us away from the disproportionate attention that is given to the removal of individual pieces of content. Rather than trying to regulate the impossible, i.e. the removal of individual pieces of content that are illegal or cause undefined harm, we need to focus on regulating the behaviour of platform-specific architectural amplifiers of such content: recommendation engines, search engine features such as auto-complete, features such as ‘trending’, and other mechanisms that predict what we want to see next. There are active design choices over which platforms have direct control, and for which they could ultimately be held liable.”

The Government appears to recognise this in its initial consultation response, which states that: “the new regulatory framework will not require the removal of specific pieces of legal content, instead it will focus on the wider systems and process that platforms have in place.” This is encouraging, but what we now need to see from both the Government and Ofcom is greater acknowledgement that online harm is not just about content. The impression so far, and certainly the impression given by Bakhurst’s piece (which can perhaps be forgiven, since he is the ‘content group director’, after all), is that content is the whole story.

In any case, the Government and Ofcom would do well to note the four broad categories of online risk that 5Rights outlined in our 2019 report Towards a Safer Internet Strategy. Content is just one of them:

  1. Content risks: a child or young person is exposed to harmful material (e.g. pornography, pro-ana material, disinformation, graphic violence etc).
  2. Contact risks: a child or young person participates in an interaction with a malign actor, often, but not always, an adult (e.g. child sexual exploitation, harassment, bullying, phishing etc).
  3. Conduct risks: a child or young person’s own behaviour or activity puts them at risk (e.g. sexting, loss of control of personal data, impersonation etc).
  4. Contract/commercial risks: a child or young person is exposed to inappropriate commercial contractual relationships or pressures (e.g. gambling features, behavioural advertising, persuasive design, misuse of personal data etc).

Regulation must mitigate risks across all four of these categories, not just the first, if it is to reflect the lived experiences of children and young people online. If the Government and Ofcom recognise this, and insist on the safety of design ‘upstream’ rather than just the adequacy of response ‘downstream’, the UK has a fighting chance of leading the world in its regulation of digital technology.