Australia has just enacted one of the most important reforms in the international movement to protect kids from the industrial-scale harms caused by social media companies. Today, Australia’s Online Safety Amendment, which raises the age – from 13 to 16 – at which children can sign a contract with these companies (the terms of service agreement), officially came into effect. Now, children will still be able to view content on YouTube, TikTok and most other platforms without an account – just as adults can. However, they will have to wait to agree to give their data away and expose themselves to manipulative design.
This policy has the support of parents in Australia and around the world. It’s popular because most parents don’t want their children using social media, yet many feel that they have no choice: if they hold the line and keep their kids off while everyone else’s kids are on, then their kids will be isolated.
A common criticism of the policy has been that it is a ban so severe that it will block children from watching videos on YouTube and teachers from using YouTube videos in their classes. This is false: the law does not block kids from accessing content. In the words of the eSafety Commissioner, “It’s not a ban, it’s a delay to having accounts.” This distinction matters. When a user creates an account, they enter into a contractual relationship with a platform and authorise a company to collect their data, personalise an infinite feed around their behaviour, push notifications designed to capture their attention, expose them to direct messages from strangers, and incentivise them to stay online far longer than they intend.
Much as a 13-year-old child cannot sign up for a credit card, this policy change clarifies that children should not be locked into digital contracts. Developmental science – and common sense – tells us that children struggle to weigh short-term rewards against long-term costs. A design that exploits this imbalance should be off-limits. Legislators around the world are cheering on Australia as they consider similar policies. Social media companies will be eager to seize on any issues that arise during the rollout. Given this predictable tactic on their part, a few points are worth keeping in mind as this monumental legislation comes into effect.
Another common critique of this sort of legislative move is that determined kids will find workarounds. Some will. That has always been true. But the goal here is not perfect enforcement; it is shifting the default environment so that children are not pressured into digital spaces they don’t actually want to be in for fear of being left out.
In one study from the University of Chicago, teens valued social media only because others were using it; they preferred – and were willing to pay for – a world in which nobody uses social media. We work with numerous youth groups that recognise the perils of the phone-based childhood and want solutions. Many describe regret, anxiety or the sense that they would quit if everyone else would quit too. Behavioural economists studying teenage platform use reach the same conclusion: many adolescents are stuck in a collective-action trap – a situation where individuals keep doing something primarily because they fear being left out if they stop.
This is why policy matters. Children and families should not have to fight this battle alone. Responsibility lies with platforms, whose design and data practices currently shape children’s daily experiences far more than any individual family can counteract.
Children will still have wide access to information online; this law protects childhood without restricting that access. Requiring children to use platforms logged out dramatically reduces the most manipulative and developmentally risky design features, including direct messages, personalisation, notifications, behavioural profiling, targeted advertising and unwanted contact.