Online harms regulator Ofcom can begin enforcing its Illegal Content codes under the Online Safety Act (OSA) after giving firms three months to prepare.
Published 16 December 2024, Ofcom’s Illegal Harms Codes and guidance went into effect on 17 March 2025, meaning online service providers will now have to comply with their safety measures or face enforcement by the regulator.
These safety measures include nominating a senior executive to be accountable for OSA compliance; properly funding and staffing content moderation teams; improving algorithmic testing to limit the spread of illegal content; and removing accounts run by, or on behalf of, terrorist organisations.
Companies at risk of hosting such content must also proactively detect child sexual exploitation and abuse (CSEA) material using advanced tools, such as automated hash-matching.
In the three months since the codes were originally published, firms are expected to have conducted risk assessments of the harms occurring on their platforms, and will now be required to demonstrate to Ofcom how they are tackling illegal harms and proactively working to find and remove such content.
“These changes represent a major step forward in creating a safer online world. For too long, illegal content including child abuse material, terrorist content and intimate image abuse has been easy to find online,” said technology secretary Peter Kyle. “But from today, social media platforms and others have a legal duty to prevent and remove that content. Last year alone, the Internet Watch Foundation removed over 290,000 instances of child abuse content.
“In recent years, tech companies have treated safety as an afterthought. That changes today. This is just the beginning. I’ve made it clear that where new threats emerge, we will act decisively. The Online Safety Act is not the end of the conversation; it’s the foundation. We will keep listening and we will not hesitate to strengthen the law further to ensure the safety of our children and the British public.”
Kyle previously set out his draft Statement of Strategic Priorities (SSP) to the regulator in November 2024. While the SSP is set to be finalised in early 2025, the current version contains five focus areas: safety by design, transparency and accountability, agile regulation, inclusivity and resilience, and innovation in online safety technologies.
Covering more than 100,000 online services, the OSA applies to search engines and firms that host user-generated content, and contains 130 “priority offences” covering a variety of content types – including child sexual abuse, terrorism and fraud – that firms will need to proactively tackle through their content moderation systems.
Ofcom previously said that it is ready to take enforcement action if providers do not act promptly to address the risks on their services. Under the OSA, failure to comply with its measures – including a failure to complete the risk assessment process within the three-month timeframe – could see firms fined up to 10% of their global revenue or £18m (whichever is greater).
Ofcom has also said it will be holding a further consultation in spring 2025 to expand the codes, which will include looking at proposals on banning accounts that share child sexual abuse material, crisis response protocols for emergency events such as the August 2024 riots in England, and the use of “hash matching” to prevent the sharing of non-consensual intimate imagery and terrorist content.
Under Clause 122 of the OSA, Ofcom has the power to require messaging service providers to develop and deploy software that scans phones for illegal material. Known as client-side scanning, this method runs on the user’s device, comparing hash values of message content – before it is encrypted and sent – against a database of hash values of known illegal content held on the device.
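As a rough illustration of the hash-matching approach described above, the sketch below checks a message attachment against a local database of known-content hashes before the message would be encrypted and sent. It is a minimal, hypothetical example: real deployments typically use perceptual rather than cryptographic hashes so that resized or re-encoded copies still match, and the database contents, function names and match-handling policy here are assumptions, not a description of any provider’s actual system.

```python
import hashlib

# Hypothetical local database of hashes of known illegal content.
# In practice this would be compiled and distributed by a trusted body,
# and would usually hold perceptual hashes rather than SHA-256 digests.
KNOWN_CONTENT_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder entry
}

def hash_content(data: bytes) -> str:
    """Return a hex digest of the content to be checked."""
    return hashlib.sha256(data).hexdigest()

def matches_known_content(data: bytes) -> bool:
    """On-device check run before the message is encrypted and sent."""
    return hash_content(data) in KNOWN_CONTENT_HASHES

def send_message(attachment: bytes) -> None:
    if matches_known_content(attachment):
        # What happens on a match (blocking, reporting) is a policy decision
        # outside the scope of this sketch.
        raise ValueError("Attachment matches a known-content hash")
    # ... encrypt and transmit the message as normal ...
```

Because the check necessarily inspects plaintext on the device, critics argue it sits inside the boundary that end-to-end encryption is meant to protect – which is the basis of the objections described below.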
Encrypted communication providers have said Ofcom’s power to require blanket surveillance in private messaging apps in this fashion would “catastrophically reduce safety and privacy for everyone”.
Mark Jones, a partner at Payne Hicks Beach, stressed the codes mean the onus is now on firms to demonstrate they are being proactive and accountable in their approaches to illegal harms: “This marks a considerable sea change from only reacting when notified about illegal or harmful content. An appropriate measure needs to be proportionate to the tech company concerned.
“Matters such as the type of service provided, features and functionalities of the service, the number of users and the results of the illegal harms risk assessment are all factors to be taken into account. Some measures apply to all services regardless, such as naming an individual accountable for online safety compliance and ensuring that terms of service and/or publicly available statements are clear and accessible.”
According to Iona Silverman, a partner at London law firm Freeths, while firms were given three months to prepare for Ofcom’s enforcement of the codes, there is “no evidence” to suggest platforms have taken any real steps to comply with the regulations.
“On the contrary, Meta announced in January that it was removing its third-party fact-checking, to move to a community notes-style model. Mark Zuckerberg openly admitted that changes to the way Meta filters content will mean, ‘We’re going to catch less bad stuff’,” she said.
“The changes were justified by Meta on the basis that they are required to allow free speech. JD Vance’s statement last month that free speech in the UK was in retreat is a nonsense predicated on a personal, political agenda.
“I agree with the British government’s view: that the Online Safety Act is about tackling criminality, not censoring debate. Given the behaviour of online platforms to date, to enable the Online Safety Act to have the intended effect, Ofcom will need to take a robust stance. I would like to see it critically review content and issue substantial fines to any platforms that aren’t taking the steps that are needed to keep people safe online.”