
This week, it was revealed that despite the Australian government’s world-first teen social media ban, around seven in 10 children remain on major platforms. What’s more, the eSafety report also shows that there has been no notable change in cyberbullying or image-based abuse reported by children.

For a policy that was touted as the solution to keeping kids safe from harm online, this is a damning indictment of the ban’s effectiveness.

Who could possibly have predicted that this wasn’t going to work? Well, lots of people.

Countless experts were ignored, including those in the fields of digital wellbeing, digital rights advocacy, youth mental health and more than 140 academics and 20 Australian civil society organisations. Even the eSafety commissioner herself had doubts, and internally the government was aware of a lack of evidence to support the ban, before they passed the legislation anyway.

I’ve written in the past about some of the ban’s troubles, including the problem of ignoring many young people’s experiences of being online, and the poor policymaking process behind the ban. But there is very little joy in saying “I told you so” when the outcome leaves children - and indeed everyone - worse off.

The fallback argument for the social media ban is that it’s better than nothing. But with results like these, it may be worse than nothing, given it potentially creates new problems. Children will remain online with arguably less supervision and support; new privacy and digital security vulnerabilities seem to have appeared; and the worst aspects of social media remain largely unaddressed.

In response to the eSafety commissioner’s report, the Australian government has accused tech firms of not following the rules and investigated them for non-compliance.

But let’s be real: this approach was never likely to work.

At first glance, this can be seen as simply a matter of compliance and enforcement: tech companies do appear to be skirting their new responsibilities (and really, can we be surprised?). But we should also remember that this approach has always been fraught with problems. The worst part of the ban has always been that it would not only be ineffective, but would actually make people less safe online. Facial age estimation software is woefully inaccurate, while more watertight approaches to age-gating create new opportunities for privacy and digital security vulnerabilities – take for example the exposure of approximately 70,000 government ID photos when Discord’s age-verification provider was hacked last year.

Ultimately, the fundamental problem with age-gating is that it fails to address any of the root problems with our current online landscape – that is, the extractive business models and pernicious design features of mainstream tech companies. We all exist in a highly commercialised information ecosystem, rife with algorithmically amplified misinformation, scams, harmful content and AI slop. Children are particularly vulnerable to these issues, but the reality is that they impact everyone, even if you’re blissfully absent from Facebook or Instagram.

The social media ban is working just as predicted (that is to say, it’s not). So what other, more effective alternatives might the Australian government have pursued instead of spending the better part of two years chasing this red herring? What if, instead of trying and failing to kick kids off social media, we focused our attention on the reasons why being online is so often detrimental in the first place?

If policymakers wish to genuinely reduce harm to young people online, they must take seriously the task of challenging models based on behavioural advertising, profiling, and problematic algorithm-driven feeds. The digital duty of care that is being considered by the government may be one avenue to pursue this more meaningfully. And to the other countries looking to follow in Australia’s troubled footsteps: heed our experience as a warning. Banning children from social media is a blunt instrument that risks undermining the very goal of harm minimisation to which the policy purports to aspire.

It’s undeniable that the Australian government wanted to do something. But among many options, they chose to prioritise a course of action that ignored experts, created new risks and, predictably, isn’t working. This approach was doomed to fail from the outset. Bold regulatory intervention is necessary to challenge the power of big tech companies and take seriously the harm caused to children (and, again, all of us). The only question remaining is: is the Australian government humble enough to admit they got it wrong, and brave enough to try something else instead?

  • Samantha Floreani is a digital rights advocate and writer based in Melbourne