
Online Safety Bill: Aims and Limitations


A lot has been said about the scope and likely impact of the Online Safety Bill, and it is often hyped as some sort of silver bullet that will enable children and other vulnerable people to be kept safe online.  The Bill is very complex and, whilst it will hopefully have the effect of, for example, largely preventing children from accessing vast amounts of pornography, it is also limited, and a lot will turn on the resources and teeth of the regulator, Ofcom.  This article gives an overview of the Bill and looks behind the hype.

Background

Whilst criminal law applies to online activity in the same way as offline activity, there is no overall regulator of online activity in the UK, and for content that is harmful, but not illegal, social media platforms self-regulate through “community standards” and “terms of use” that users agree to when joining.   Even though many platforms have a minimum age for use (often set at 13), there is no age verification system, and previous legislative provisions, which imposed age verification requirements, were never brought into force.

As is well known, the self-regulation model can and does leave in circulation material which promotes violence, self-harm or cyberbullying, as well as indecent, disturbing or misleading content.  There is an increasing body of evidence of harm being caused, to children in particular, by exposure to indecent content (whether legal or not), as well as by cyberbullying, revenge porn and sites promoting eating disorders (so-called ‘pro-ana’ sites) and suicide.  Platforms used by millions for posting social content can and do become echo chambers, where filter bubbles driven by algorithms result in the user being repeatedly exposed to one side of an argument rather than seeing a range of opinion, leading to widespread disinformation.

The basis of the current system, in which platforms are largely immune from liability for content posted by others, is rooted in section 230 of the Communications Decency Act 1996 – US legislation which provides immunity to owners of any ‘interactive computer service’ for anything posted online by third parties:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

This creates the conundrum: even assuming there are statutory measures to prevent or remove content that is illegal (though query their effectiveness), what about content that is legal but potentially harmful?

For example, 88% of all pornographic material reviewed on a “mainstream” pornographic website – Pornhub – involved physical aggression by men towards women; commercial pornography has coalesced around a “homogenous script involving violence and female degradation”.  This is not necessarily illegal, but repeated exposure of curious children and teens to it is believed to have long-term adverse effects on their psyche, their views of what is ‘normal’ and what is acceptable behaviour, and their ability to form healthy sexual relationships.  The outcry following the Everyone’s Invited website and Ofsted’s Rapid Review provides a snapshot of how widespread and detrimental these behaviour patterns are, and the young age at which they start being formed.

The Online Safety Bill

The ongoing calls for statutory regulation have been met with the Online Safety Bill, published in May 2021 following the White Paper published in 2019.  The Bill is currently at pre-legislative scrutiny: a Joint Committee is looking at the Bill and has taken evidence from tech companies, children’s charities and Ofcom.  It was due to report by 10 December 2021 but, at the time of writing, it does not appear to have done so, and indeed was still taking written evidence on that date.

The Bill has the following core aims: to address illegal and harmful content online (especially terrorist content, racist abuse and fraud) by imposing a duty of care in respect of such content; to protect children from child sexual exploitation and abuse (CSEA) content; to protect users’ rights to freedom of expression and privacy; and to promote media literacy. The Bill designates the Office of Communications (Ofcom), the UK’s existing telecommunications and broadcast regulator, to oversee and enforce the new regime.

The Bill is in 7 parts as follows:

  1. Overview and key definitions (note overall regulator is proposed to be Ofcom)
  2. Imposition of a duty of care on providers of regulated services
  3. Imposition of other duties on service providers
  4. Ofcom’s powers and duties in relation to regulated services
  5. Provision for appeals and complaints – appeals will be to the Upper Tribunal applying judicial review principles
  6. Secretary of State’s functions in relation to regulated services
  7. General and final provisions (including an index of terms defined in the Bill – currently clause 138)

The new duties contained in Parts 2 and 3 of the Bill apply to providers of “regulated services”, which are of two types:

  1. user-to-user services (“U2U”): services such as Snapchat, Facebook, Twitter and other social media platforms where content that is generated by a user of the service, or uploaded to or shared on the service, may be encountered by another user of the service.
  2. “regulated search services”: search engines, e.g. Google, Bing, DuckDuckGo.

Providers must have links with the UK, which is defined in clause 3 as having a significant number of UK users, or UK users forming one of the target markets for the service.

The Bill imposes a “duty of care” to protect users from illegal and harmful content.  A cornerstone of the approach in the Bill is risk assessment, followed by duties to mitigate and effectively manage those risks.  All providers of regulated U2U services must comply with the following duties:

  • Illegal content risk assessment: see cl 7(1)
  • Each of the illegal content duties: see cl 9 (illegal content: defined in cl 41)

Where regulated U2U services are likely to be accessed by children, the providers must also comply with the following duties:

  • Risk assessment duties: cl 7(3) and (4)
  • Each of the duties to protect children’s online safety: cl 10, which include:
    • the duty to take “proportionate steps to… mitigate and effectively manage the risks of harm to children in different age groups, as identified in the most recent children’s risk assessment of the service”;
    • the duty to “prevent children… from encountering, by means of the service, primary priority content that is harmful to children”;
    • the duty to protect children in age groups judged to be at risk of harm from other content that is harmful to children.

“Content that is harmful to children” is defined in clause 45 as meaning:

  • Regulated content which the Secretary of State has designated in regulations as either primary priority or priority content which is harmful to children;
  • Content which the provider has reasonable grounds to believe risks, directly or indirectly, having a significant adverse physical or psychological impact on a child of ordinary sensibilities – the explanatory notes say this includes feelings such as serious anxiety and fear, as well as medically recognised conditions;
  • Content that may become harmful by its repetition.
  • The test is by reference to a child of ‘ordinary sensibilities’ in the relevant age range.

Clause 15 is potentially an important provision forming part of the safeguarding system: it requires the service provider to operate a system that allows users and affected persons to “easily report content” that:

  1. is illegal;
  2. is accessible by children and considered harmful to children; or
  3. is believed to be harmful to adults.

A similar duty of care / risk assessment structure applies to search services, with the relevant duties contained in clauses 17, 19 and 21-24.

The legislation sets out a general definition of harmful content and activity.  The general definition will apply to content and activity where there is a ‘reasonable, foreseeable risk of significant adverse physical or psychological impact on individuals’.  A limited number of priority categories will be set out in secondary legislation, which will cover:

  • criminal offences, including child sexual exploitation and abuse, terrorism, hate crime and sale of drugs and weapons
  • harmful content and activity affecting children, e.g. pornography and violent content
  • content and activity that is legal when accessed by adults, but which may be harmful to them, e.g. content about eating disorders, self-harm, suicide

The detail is then to be found in the relevant Code of Practice, which is to be issued by Ofcom as the regulator. This will set out the actions that companies must undertake to comply with their safety duties in respect of illegal content.  There is an interim voluntary code of practice on child sexual exploitation and abuse (CSEA)[1], which was published in December 2020, but a more detailed Code can be expected once the Bill passes into law.

Exemptions/outside scope

There is an extensive list of matters which either fall outside the scope of the Bill or are exempt from it.  For example, ‘regulated content’, in relation to U2U services, means user-generated content, except (clause 39):

  1. Emails
  2. SMS messages
  3. MMS messages
  4. Comments and reviews on provider content
  5. One to one live aural communications
  6. News publisher content (indeed there are specific duties for the purposes of respecting freedom of expression)

Also, it should be noted that internal business services are exempt – this includes services such as business intranets, content management systems, database management software and collaboration tools.

Impact

It seems reasonably clear from the Bill and the surrounding material that it will not, for example:

  • Prevent sexting – the sending of nude images by SMS/MMS – or the associated pressure placed on even primary school children to provide such images (see e.g. the Ofsted Rapid Review of May 2021, following the Everyone’s Invited website publicity).
  • Prevent trolling, ‘pile-ons’ and abusive emails (see e.g. the type of misogynist abuse received by female MPs of all political persuasions, alongside many other women in the public eye)
  • Protect people from financial scams

However, there is reason to be optimistic that the Bill will:

  • Reduce the amount of accessible illegal content; and
  • Force some types of providers (e.g. porn sites) to establish much improved age verification systems.

Nonetheless, serious concerns have been expressed about the Bill itself: that it “will be catastrophic for freedom of speech of British citizens online in its current form” by forcing tech platforms to delete “harmful” content or face big fines, and that this will lead to many lawful posts being deleted without actually making people safer online.[2]

The task of the Bill is a considerable one, but its aims are laudable.

Samantha Broadfoot QC is a specialist practitioner in public law and human rights, with an interest in data protection and regulation.  She is an Assistant Coroner and a Recorder.

[1] https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/944034/1704__HO__INTERIM_CODE_OF_PRACTICE_CSEA_v.2.1_14-12-2020.pdf

[2] https://committees.parliament.uk/writtenevidence/41410/default/

