The UK government and online harms regulator Ofcom disagree about whether misinformation is covered by the UK’s Online Safety Act (OSA).
On 29 April 2025, the Commons Science, Innovation and Technology Committee (SITC) questioned the UK’s online harms and data regulators about whether the UK’s Online Safety Act (OSA) is fit for purpose, as part of its inquiry into online misinformation and harmful algorithms.
As with previous sessions, much of the discussion focused on the spread of disinformation during the Southport Riots in 2024. During the session, the SITC also grilled government minister Baroness Jones about the implementation of the OSA, which went into effect on 17 March 2025. However, the regulators and the government took different views about the applicability of the legislation to online misinformation and disinformation.
Mark Bunting, the director of online safety strategy delivery at Ofcom, for example, said that while the OSA contains provisions to set up an advisory committee on disinformation to inform the regulator's ongoing work, the OSA itself contains no provisions to deal with disinformation.
During the previous SITC session, in which the committee grilled X (formerly Twitter), TikTok and Meta, each of the firms contended that they already had processes and systems in place to deal with disinformation crises, and that the OSA would therefore not have made a notable difference.
Bunting added that while the OSA does not cover misinformation directly, it did “introduce the new offence of false communications with an intent to cause harm, and where companies have reasonable grounds to infer that there is intent to cause harm”.
Committee chair Chi Onwurah, however, said it would be difficult to prove this intent, and highlighted that there are no duties on Ofcom to take action over misinformation, even if there are codes about misinformation risks.
Jones, however, contended that misinformation and disinformation are both covered by the OSA, and that it would have made a “material difference” if its provisions around illegal harms were in force at the time of the Southport Riots.
“Our interpretation of the act is misinformation and disinformation are covered under the illegal harms code and the children’s code,” she told MPs.
Talitha Rowland, the Department for Science, Innovation and Technology’s (DSIT) director for security and online harm, added that it can be challenging to determine the threshold for illegal misinformation, because it can be so broadly defined: “It can sometimes be illegal, it can be foreign interference, it can be content that incites hate or violence that’s clearly illegal. It can also be below the illegal threshold, but nevertheless be harmful to children – that is captured.”
In the wake of the riots, Ofcom did warn that social media firms would be obliged by the OSA to deal with disinformation and content that is hateful or provokes violence, noting that the act “will put new duties on tech firms to protect their users from illegal content, which under the act can include content involving hatred, disorder, provoking violence or certain instances of disinformation”.
Bunting concluded that platforms themselves want clarity over how to deal with disinformation within their services, and that Ofcom will continue to monitor case law developments around how the OSA can be interpreted in the context of misinformation, and update future guidance accordingly.
Updating the SITC on the progress made since the act went into force on 17 March, Bunting said that Ofcom has received around 60 risk assessments from platforms covering the various harms that could occur on their services. These assessments are required to demonstrate to Ofcom how platforms are tackling illegal harms and proactively working to find and remove such content.
Completing a risk assessment is the first step to compliance with Ofcom’s Illegal Harms Codes and guidance, which were first published on 16 December 2024.
The codes outline various safety measures providers must put in place, which include nominating a senior executive to be accountable for OSA compliance; properly funding and staffing content moderation teams; improving algorithmic testing to limit the spread of illegal content; and removing accounts that are either run by or on behalf of terrorist organisations.
Companies at risk of hosting such content must also proactively detect child sexual exploitation and abuse (CSEA) material using advanced tools, such as automated hash-matching.
Ofcom previously said it would hold a further consultation in spring 2025 to expand the codes, including proposals on banning accounts that share child sexual abuse material, crisis response protocols for emergency events such as the August 2024 riots in England, and the use of “hash matching” to prevent the sharing of non-consensual intimate imagery and terrorist content.