‘I’m an online safety expert – social media needs an overhaul but a ban is a step too far’
The Government could easily go too far in trying to protect children online with changes to the Online Safety Act, argues Rebecca Whittington, The Daily Mirror’s Online Safety Editor
Social media should not be banned for children. Instead, we need a system overhaul.
The government’s announcements that it will tighten up parts of the Online Safety Act are a positive step forward. Finally, we are seeing speedier action to challenge fast-changing developments in technology. But I do not think a full ban on social media for children under 16 is a good idea.
As a parent of teens and tweens, I know how tempting it would be to have someone other than myself forcibly remove access to parts of the internet which could otherwise cause damage and distress to my children and their peers.
As an online safety expert, I know from the waters I swim in every day how absolutely rotten to the core some parts of the internet can be. I know how the tendrils of harm can wrap their way around the hearts, minds and souls of some of the best people, making them believe things that are not true or making them fearful, despairing and desperate.
Believe me, there have been more times than I can count when I have wished we could put the genie back in the bottle and return to simpler times, when phones lived in the kitchen, attached to the wall by a cable, and everyone in the house could overhear the chat.
But, even with all this knowledge, perhaps partly because of it, I don’t think we should ban social media for children under 16. There are several reasons for this. Let me explain.
I’ll start with the platforms. Big tech is more powerful than any government. The wealth and size of Meta and Google, which collectively own more than 80 per cent of the global market share of social media platforms, are comparable to the GDP of nations such as Australia, France and Mexico.
Now, this doesn’t mean they should not be governed by the rules of the nations whose playgrounds they choose to frequent. But it does mean that, if they choose to, they can shrug off almost any punishment for bad behaviour.
This, therefore, is a problem that cannot be solved through legislation alone. The Online Safety Act demonstrates that by creating rules for big tech to follow, we essentially create frameworks of ‘right and wrong’ which the platforms can then use to reframe responsibility.
The platforms find ways to slide around the lines drawn by legislation, and they point to the rules as justification for whatever behaviour those rules do not expressly forbid.
A real example: when it was found that Grok, the AI tool available to users on X (previously Twitter), was being used to create deepfake sexualised images of women and children, the company took a while to respond. After removing some capabilities for non-subscribing users, X only finally bit the bullet when the UK threatened to block the platform entirely.
To announce the change, X said in a post that it had stopped Grok from altering pictures to remove clothing, in “jurisdictions where such content is illegal”. Reading between the lines, that clearly suggests that if someone in a different jurisdiction decides to make deepfake sexualised content based on an image of a UK resident, there is nothing to stop them from using Grok to produce that image.
It also suggests that the resulting image will still be available to all users to see. There is also nothing to stop UK users from producing other kinds of deepfake images of real women and children in degrading, threatening or humiliating situations, so long as they have not had clothing removed.
Essentially, the law has created a line and X is colouring right up to its edges. By determining what counts as ‘right and wrong’, our laws are writing a playbook which absolves the platforms of the responsibility to regulate themselves.
In terms of how this applies to under-16s, we need only look at how age verification under the new(ish) Online Safety Act is being applied. Now that children have to be protected from certain types of content, platforms have to verify the age of users. They do this in a number of ways: through facial recognition, document scanning, credit card details or database checks.
It’s not difficult to circumvent these checks – one quick Google (ironically) tells us all we need to know. But now the platforms can point to the checks and disavow any responsibility for underage users who have gamed the system.
I realise that, in a fair society and for any system to work, we all need to play by the rules. If a child is going to play with matches, they might find they get burned. As adults and parents, we have to make sure our children are not exposed to harm and that the matches are placed out of reach. Our government is also right to produce legislation that sets guardrails and creates templates for regulation.
But if the Online Safety Act were working, why are we talking about banning children from social media entirely? And if we over-legislate, does this not further remove the platforms’ own responsibility to determine what is and is not acceptable?
This takes me to my second point. What is social media? In Australia, the new ban does not include WhatsApp, for example. If I think about my own teenager’s use of smart technology, WhatsApp was actually the worst: at the start of secondary school, whole classes created groups in which hundreds of messages pinged each child all day.
Aside from the bullying which quickly started to happen (children uploaded ‘funny’ pictures of one another, taken without consent during the school day – of course, some children were featured more than others and quickly became the butt of the joke), the deluge of content was overwhelming. Then there were the huge unregulated WhatsApp groups which children added one another to, in which strangers would send messages to members, some of which turned out to be explicit sexual content. As parents we had to quickly reassess our own rules and have talks with our child about the need for personal boundaries and the risks of being in large groups online.
Would the new ban include chatbots, for example? Would it include YouTube? Or Reddit? How about the websites and forums set up to amplify harmful behaviours such as disordered eating, self-harm and suicide? How would this then work for AI functions? How would it work for new developments? And how would it help our kids become digitally literate, critically thinking adults?
And what about trust? By banning social media, are we not at risk of pushing our kids into a situation where young people use platforms secretly, meaning they are less likely to talk to adults in their lives about harmful content or events because of the risk of being caught?
By bringing in a ban, I believe we would be creating an unregulated black market in which our children face the same risks as they do now, but in which platforms take none of the responsibility for the harm suffered or its consequences. Finally, whatever age we choose for a ban, it still creates many of the problems kids and parents face now; it simply pushes back the timing.
To be fair, I get that at 16 children are more capable and mature than at 11 or 12, when many get access to their own smartphones. Children aged 16 will soon be able to vote and they can sign up to fight for our country. But 16 is also the age when most teens will be taking their GCSE or equivalent exams. By withholding the addictive power of social media until this crucial age, are we not doing our kids a disservice? By waiting until a time when they need to be able to focus on the big choices facing them and then unleashing addictive tools which are proven to create distraction and damage attention span, are we not setting them up to fail?
To be clear, I don’t feel at all comfortable with the social media and smartphone world we live in now. I think it is damaging, dangerous and harmful to people of all ages. But I can’t help feeling that by introducing a ban, we fail to protect our children properly and we shoulder all the responsibility for the harm being done. As the Grok episode proved, if changes to remove nudification capability can be made within a couple of days, then platforms could choose more widely to be safer and better and to protect women, children and all users from harmful content and events. It also proves that the only way to make platforms take responsibility is to genuinely threaten an entire ban – not just a ban for a certain demographic or age group.
If our government is to make change, it should join forces with other large nations and agree to block platforms for all users unless they clean up their act across their global services. Until we push back on these powerful companies and hit them where it really hurts – in the wallet – they cannot be left to mark their own homework.
