
UK OSA: Codes and Categorisation

Just in time for Christmas (hurrah), Ofcom has published its first-edition codes of practice on tackling illegal harms, officially starting the countdown before the first set of substantial duties kicks in. See our previous materials for more details on who the rules apply to, the key obligations, and the consequences of non-compliance.

We summarise below the key points, changes, next steps, and dates to earmark in your diaries. 

What are the confirmed categorisations?

In the first of two significant milestones under the OSA, the threshold conditions for the categorised services have now been approved:

  • Category 1: EITHER (i) services with content recommender systems and more than 34 million UK users; OR (ii) services with content recommender systems, the ability for users to forward or re-share existing content on the service, and more than 7 million UK users.
  • Category 2A: Non-vertical search services with more than 7 million UK users on the search engine part of the service.
  • Category 2B: Services that allow users to send direct messages and have more than 3 million UK users on the user-to-user part of the service.

Services will be subject to a spectrum of additional duties depending on which category they fall within, including providing more transparency, enhanced requirements on risk assessments and record keeping, and disclosure of information about use of the service by a deceased child user. Ofcom aims to publish its register of categorised services in June-July 2025, and the draft proposal regarding the additional duties is expected in early 2026.

Remember: Not being categorised does not, of course, mean that there are no obligations, just that the more onerous additional obligations will not bite on these services.

Has anything changed on the Illegal Harms duties?

Aside from the huge number of formatting and style changes, a few material updates have been made since the draft versions, to both the risk assessment guidance and the recommended measures. The good news is that nothing would necessitate a complete overhaul of any work done up to this point. However, the changes are a little fiddly.

Key changes include the following: 

1. Assessments and risk profiles  

  • Changes to the types of priority illegal content that need to be assessed:
    • References to ‘serious self-harm’ have been removed, leaving ‘encouraging or assisting suicide’ as a standalone type of priority illegal content.
    • ‘Unlawful immigration’ and ‘human trafficking’ have been divided to create two separate types of priority illegal content.
    • ‘Firearms and other weapons’ has been expanded to include ‘knives’ specifically.
    • ‘Animal cruelty’ has been formally added as an additional type of priority illegal content, bringing the new total to 17 illegal harms.
  • There is now more clarity on how to make judgments on the likelihood, nature and severity of harm, with emphasis that enhanced inputs should be considered by large service providers and by those that have identified multiple specific risk factors for a kind of illegal content.
  • More risk factors are now associated with each kind of illegal harm. To name just a few examples, Ofcom considers that social media services are likely to have an increased risk of all kinds of illegal harm (removing the word “nearly”); services with user profiles now also have fraud and financial services, proceeds of crime, foreign interference and hate identified as key kinds of illegal harm; and services where users can form user groups or send group messages are now also matched with more harms, including CSAM, hate, fraud and financial services, controlling or coercive behaviour, foreign interference and intimate image abuse.
  • To help with the point above, however, Ofcom has included a handy table that matches the specific risk factors in the risk profile to each harm, so that they are easier to map.

2. Measures

  • There are now over 40 types of measures to consider. 
  • The de minimis threshold has been removed for file-storage and file-sharing services on the list of services that will need to use hash matching for CSAM.
  • Clarification that it is an “individual” who is accountable for illegal content safety duties and reporting and complaints duties. 
  • Introduction of a new measure requiring all services to have a content moderation function “to review and assess suspected illegal content” (as opposed to one which allows for the swift takedown of it).
  • The removal of the measure to use keyword detection technology to analyse whether content is fraudulent.
  • Certain services will now need to send further information about how complaints will be handled and may also need to offer an opt-out from communications following a complaint. There is now, however, a new exception for manifestly unfounded complaints. 

See also the table setting out the simplified final version of the recommended measures.

What are the next steps and the timeframes?

Lots of paperwork and most likely some technical changes will be required depending on the results of the risk assessment.

The publication has triggered the start of the three-month period for services to complete their illegal harms risk assessments (i.e., by 16 March 2025), and services must start implementing measures to mitigate these risks from 17 March 2025. Ahead of these deadlines, Ofcom has pre-emptively warned of its readiness to act against non-compliance.

See also our dedicated web page to keep on top of the latest announcements and developments.

Tags

entertainment & media