Letting social media firms assess their own harms went ‘staggeringly’ badly
Campaigners warned that allowing social media firms to assess their own harms is unreliable and highlighted recent research showing kids are being bombarded with harmful content online
Not a single tech platform believes it is at high risk for suicide or self-harm content, according to a bombshell Ofcom report.
Campaigners branded the finding “abysmal” and called for urgent action to reassure parents.
Earlier this year, Ofcom ordered tech firms to assess the risks their platforms pose to children, as new online safety rules came into force in the summer.
In a report published today, the media watchdog found platforms “inconsistently assessed illegal and harmful content, with common gaps around child sexual abuse and exploitation, and certain kinds of content harmful to children”.
At times, it had to force firms to revisit their risk assessments after having “substantive concerns” about the assessment approach and “concerns about risk level conclusions”. It admitted it still has “outstanding concerns”.
Alarmingly, Ofcom admitted that few online providers separately assessed specific kinds of harmful content, including material relating to suicide, self-harm and hate. And it said many platforms did not “thoroughly investigate” how encrypted messaging might increase risks to users, such as grooming.
Concerns have previously been raised that social media giants have been left to mark their own homework when it comes to sticking to the UK’s Online Safety Act. The Act, which became law in 2023, is enforced through Ofcom’s guidance, including its protection of children codes.
But Ofcom admits providers can use “their own methodologies to come to risk level conclusions”. Its report, which covers the first year of online safety risk assessments, said: “We found that many illegal content and children’s risk assessment records provided weak justifications for low or negligible risk level assignments…
“This was most often the case in records where providers assigned negligible or low risk levels to all kinds of illegal or harmful content, using frameworks that appeared designed to come to these conclusions.”
Looking ahead, Ofcom said the services that are most used by children – including Facebook, Instagram, TikTok, Pinterest and YouTube – must provide it “with comprehensive information about their child safety measures, and make timely improvements where needed”.
It added that it will be issuing formal and enforceable information requests early next year for more information on the steps online firms are taking to keep children safe. It will provide an update in May, including whether any enforcement action or investigations are necessary.
One in two girls (49%) were exposed to high-risk suicide, self-harm, depression or eating disorder content on major social media platforms in a single week, research by suicide-prevention charity the Molly Rose Foundation (MRF) found in October.
The MRF was set up after the death of Molly Russell, a 14-year-old schoolgirl who took her own life in 2017 after being bombarded with harmful content on social media. An inquest found social media content “more than minimally” contributed to her death.
And polling by Internet Matters, carried out last month, found more than 70% of parents are concerned about their children coming across self-harm or suicide content online.
An Internet Matters spokeswoman said: “Platforms will obviously not want to admit that this kind of content is prevalent on their platforms, but our research shows children are being exposed to it. This suggests that self-assessment by platforms is not reliable in reporting where harms are taking place.”
Andy Burrows, chief executive of the MRF, said: “Ofcom’s abysmal report card will do nothing to satisfy parents that companies and the regulator are taking the threat of suicide and self-harm material with the seriousness it demands. It’s staggering that not one single platform believes they are high risk for suicide or self harm content that is harmful to children, with nothing from the regulator to dispute this absurd claim.
“Ofcom’s enforcement against the largest and the highest risk sites has been woeful and companies will see today’s report as a signal they can do the bare minimum and get approval from Ofcom they are complying with regulation. Children deserve much better from a regulator that is acting with unfathomable timidity.
“We now need Government to act with strengthened legislation that holds companies to account for dangerous products and judges the regulator by how it effectively reduces harm to children.”
Kerry Smith, chief executive of the Internet Watch Foundation, said: “As Ofcom enforces the Online Safety Act, we are starting to see the UK living up to its ambition to be the safest place in the world to be online.
“We strongly believe there should be no safe spaces online for predators to hide. So while we welcome the steps taken by Ofcom so far, there is much more to do, and we look forward to working together to ensure children are given the safety they need.”
An Ofcom spokesman said: “For more than two decades, online platforms have been unregulated and unaccountable. The foundations have been laid this year, and change is happening, but tech firms need to go much further.
“We’ve reviewed more than 100 risk assessments, spanning over 10,000 pages, and told providers to do more work. One major social media company is going through compliance remediation, which may result in formal action if we don’t see sufficient improvement soon.
“We’ve set out the key areas where we expect platforms to step up to make users safer, and we’ll be using our full range of powers if they fall short. So far, we’ve opened investigations into more than 90 platforms overall and fined three providers, and expect to announce more enforcement in the coming months.”
