The Ban That Moves Children Underground
What happened
A survey by the Molly Rose Foundation found that two-thirds of Australian 12-to-15-year-olds are still using platforms including Instagram, TikTok, YouTube, X, and Reddit, despite Australia's law mandating age verification and imposing fines of up to A$49.5 million on platforms that fail to enforce it. Platforms must verify user ages, but the methods are easily circumvented. Despite this evidence, the UK is now actively considering an identical under-16 ban, with Prime Minister Keir Starmer indicating the government is not afraid of confronting tech companies. Massachusetts legislators are simultaneously moving toward what they call the most restrictive social media age law in the United States.
Countries are adopting a policy that the first adopter's own data shows does not work, because the political value of performing concern about children outweighs the policy value of actually protecting them.
The Hidden Bet
The ban's failure is a technical enforcement problem that better age verification would fix
Teenagers are using VPNs, borrowed accounts, and parent credentials. The enforcement problem is not technical but social: teenagers with phones and motivated peers will find a workaround regardless of the verification method. No technical barrier survives adolescent social motivation.
Removing teenagers from major platforms reduces the harms those platforms cause
Bans push use underground, away from algorithms and into unmonitored Discord servers, private group chats, and less-regulated platforms. The harmful content moves with the user; the transparency does not.
Parents support the ban because it is working for them
Australian parent surveys show majority support for the ban, but parents also report that their own children are still on the platforms. The support reflects a desire for something to be done, not evidence that this particular thing is working.
The Real Disagreement
The genuine tension is between two honest accounts of government capacity to regulate tech platforms. The first says: government has the power and responsibility to force platforms to change their products, and age verification is the minimum entry point. The second says: platforms are genuinely unregulatable at the enforcement level government would need, and the bans drain political energy from interventions that could actually work, like algorithmic transparency requirements and liability for design choices. The first position has moral force and votes. The second has the evidence. The right policy is probably to regulate the product design, not the user population, but that approach is less legible as protection for children and therefore harder to pass.
What No One Is Saying
Meta and TikTok have not seriously opposed these bans. A law that creates the appearance of regulation without requiring them to change their products or algorithms is the best possible outcome for them. The platforms are not fighting the bans because the bans protect the platforms.
Who Pays
Teenagers in low-income households
Immediately upon enforcement
Age verification systems work better for families with government-issued ID documents and credit histories. Children without those anchors are more likely to be denied access to legitimate channels and driven to unmonitored alternatives, while wealthier teenagers use parent credentials or VPN services.
Children already experiencing online harassment
Within 6 months of enforcement in any new jurisdiction
Bans reduce the platforms' visibility into patterns of coordinated harassment. If children move to private channels to evade bans, their harassment also moves out of reach of the platform moderation systems that are, imperfectly, already watching.
Tech platforms in the medium run
2 to 5 years as regulatory fragmentation compounds
If bans proliferate, platforms face a patchwork of incompatible age verification requirements across jurisdictions, compliance costs that favor incumbents, and potential exclusion from markets where they have strong user bases. The short-term benefit of weak regulation becomes a medium-term regulatory nightmare.
Scenarios
Copy-paste failure
UK and Massachusetts pass bans modeled on Australia's law. Three years later, compliance surveys show similar 60-70% failure rates. The policy debate restarts with the same evidence and the same conclusion.
Signal: UK or Massachusetts age verification systems launch with the same technical architecture as Australia's
Product liability pivot
A major court ruling or congressional testimony shifts the regulatory frame from age verification to design liability. Platforms face lawsuits over algorithmic choices that cause harm, and settle by implementing feed chronology requirements, social graph limits for minors, and content filters. Teens stay on platforms but the product is structurally different.
Signal: A US federal court allows a product liability case against a social platform to proceed past a motion to dismiss
VPN and workaround normalization
Bans proliferate globally, VPN use by teenagers becomes standard, and the bans effectively stratify access by technical sophistication. Governments declare victory while harm data remains flat or worsens among high-risk groups.
Signal: VPN downloads by under-18 users in Australia or the UK spike within 60 days of enforcement
What Would Change This
If a jurisdiction implemented age verification combined with algorithmic transparency requirements that made the feed experience for minors genuinely different, and a randomized study showed reduced harm, the evidence would shift. The current bans test age gatekeeping in isolation; no one is testing whether product design changes are the variable that matters.