April 26, 2026
society decision

The World Is Copying Australia's Teen Social Media Ban. Australia's Is Already Failing.

Yahoo / Fortune

What happened

Australia's social media ban for users under 16, enacted in December 2025 and the world's first such law, has produced widespread non-compliance. A survey of 1,050 Australian teens aged 12-15 by the Molly Rose Foundation found that 61% who had accounts before the ban still have access, using VPNs, parental accounts, and spoofed biometrics. Australia's eSafety Commissioner has opened investigations into Meta, Google, TikTok, and Snap for suspected enforcement failures. Meanwhile, Turkey's parliament passed an equivalent ban on April 22, Norway announced its own bill on April 24, Manitoba announced Canada's first ban on April 26, and multiple US states are advancing similar legislation.

Governments are enacting a policy whose first real-world test has already failed, and they are calling it a success because they passed the law.

The Hidden Bet

1. Enforcement will improve as platforms face fines and regulatory pressure

Platform non-compliance is rational profit maximization. The under-16 market is valuable, and fines under Australia's law max out at AUD $50 million per breach, small relative to Meta's daily revenue. Platforms will invest in enforcement theater (visible compliance measures that produce headlines without materially reducing underage access) until the penalty structure changes. The Age Verification Providers Association's own statement that the issue is application, not capability, confirms the platforms are choosing not to enforce.
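The scale of that mismatch is easy to check with a back-of-envelope calculation. The revenue figure and exchange rate below are illustrative assumptions, not from the sources; only the AUD $50 million maximum penalty comes from the article:

```python
# Back-of-envelope: how large is Australia's maximum fine relative to
# Meta-scale revenue? The first two figures are assumptions for illustration.
ANNUAL_REVENUE_USD = 160e9   # assumed order of magnitude for Meta's annual revenue
AUD_TO_USD = 0.65            # assumed exchange rate
MAX_FINE_AUD = 50e6          # maximum penalty per breach under the Australian law

daily_revenue_usd = ANNUAL_REVENUE_USD / 365
max_fine_usd = MAX_FINE_AUD * AUD_TO_USD

# Express one maximum fine as hours of company-wide revenue
hours_of_revenue = max_fine_usd / (daily_revenue_usd / 24)
print(f"One maximum fine is roughly {hours_of_revenue:.1f} hours of revenue")
# under these assumptions, about 1.8 hours
```

Under any plausible choice of inputs, a single maximum fine is a rounding error against the recurring value of keeping the under-16 audience, which is the point of the bet.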

2. Age verification technology is the bottleneck and will be solved soon

The circumvention methods (VPNs, shared accounts, facial spoofing, printed masks) are available to any motivated 13-year-old for under $10. Age verification is an asymmetric arms race: every verification system requires a corresponding identity infrastructure that either creates mass surveillance or can be trivially defeated. Australia is discovering that there is no age gate that is simultaneously non-invasive, non-spoofable, and politically acceptable.

3. The policy's goal is to protect children from social media harm

Turkey's Erdogan, who is censoring political content on the same platforms, passed the ban the same week. Norway's announcement leads with 'children should be children,' not with evidence about harm mechanisms. Manitoba's announcement came on a Saturday at a fundraiser. Researchers who study the actual evidence base for social media harm find it contested and heavily dependent on which platform, which use pattern, and which demographic is involved. The political popularity of these bans may reflect parental anxiety and electoral calculation more than evidence-based child protection policy.

The Real Disagreement

The real fork is between two models of how online harm works. The access model says: if you remove the child from the platform, the harm stops; therefore age gates are the right lever. The design model says: the harm is produced by specific algorithmic choices (recommendation loops, engagement maximization, and data collection practices) that would harm any user regardless of age; age gates do nothing about those mechanisms and send the wrong signal to regulators about where the problem actually is. The access model is easier to legislate and politically popular. The design model is harder to legislate and opposed by every major platform's business model. I lean toward the design model: the evidence that platform design features cause harm is stronger and more consistent than the evidence that access per se causes harm. But I would also grant that removing the most vulnerable users from the worst environments has some direct protective value even if it does not fix the underlying problem.

What No One Is Saying

The politicians announcing these bans are not asking whether the platforms they are restricting for children are also harming adults. The algorithmic manipulation, engagement maximization, and data extraction that are framed as especially dangerous for teenagers do not become benign when the user turns 16.

Who Pays

Teenagers without resources for circumvention

Immediate and ongoing; the disparity is already visible in Australian access patterns

Wealthy or tech-savvy teens will circumvent the ban trivially. Teens from lower-income households or with less digital literacy will lose access to the social infrastructure of peer connection, not because the ban protected them but because they lack the tools to get around it. The ban is functionally a digital literacy tax.

Platform users who generate data for age verification systems

As bans are enforced more seriously; the data collection infrastructure will outlast the bans themselves

Age verification requires identity documentation or biometric data. Any government that mandates age verification creates a data infrastructure that can be repurposed for surveillance or that becomes a target for data breaches. The privacy cost of a working age verification system may exceed the harm it prevents.

Researchers and public health advocates who want evidence-based policy

Already happening as legislators cite Australia's 'success' while looking at early non-compliance data

When an unenforced ban is declared a success, it exhausts the political will for harder, more evidence-based interventions like algorithmic liability, mandatory design audits, or data collection restrictions. The ban becomes the policy and the policy does not address the mechanism of harm.

Scenarios

Enforcement theater normalizes non-compliance

Platforms pay token fines and release compliance reports describing improved age verification infrastructure. The non-compliance rate does not meaningfully change. Governments declare success and move on. The ban exists, the teens are on the platforms, and nobody has to acknowledge the contradiction.

Signal: Australia's eSafety Commissioner accepts a settlement or compliance plan from Meta without requiring a measurable reduction in underage access rates

Invasive age verification creates backlash

A government mandates biometric age verification. A data breach or surveillance disclosure involving the verification system triggers a political backlash. Several countries repeal or pause their bans. The industry pivots to 'parental tools' as a substitute.

Signal: Any major data breach from an age verification provider, or a government's biometric database linked to a foreign breach

Design liability becomes the alternative

Courts in the US hold Meta and Google liable for specific algorithmic features that produced documented harm to minors. The financial exposure from liability judgments exceeds the cost of platform redesign. Companies voluntarily redesign recommendation algorithms for the entire platform, not just for under-16 accounts.

Signal: A jury verdict awarding over $1 billion in damages tied to a specific algorithmic feature rather than general access

What Would Change This

If six-month data shows that a country with full biometric age verification and serious platform fines achieves over 85% compliance among former underage users without a measurable increase in data breaches or surveillance incidents, the access model becomes much stronger. No country has that data yet.

Sources

Yahoo / Fortune — Survey data: Molly Rose Foundation survey of 1,050 Australians aged 12-15; 61% of pre-ban account holders still have access; methods include VPNs, shared parental accounts, printed mesh face masks from Temu to fool facial recognition; only 34% of platforms terminated pre-existing accounts
The Straits Times / Reuters — Industry angle: Age Verification Providers Association says the problem is platform application, not technological capability; regulators are investigating Meta, Google, TikTok, and Snap for suspected enforcement breaches; the companies have the tools but are not deploying them
Arkansas Digital News / AP — Global spread: Turkey's parliament passed a ban for under-15s on April 22, with Erdogan expected to sign; Australia's ban is cited as the model; the spread is happening as the first enforcement data from Australia is arriving and showing widespread non-compliance
CBC News — Canadian version: Manitoba Premier Wab Kinew announced the first Canadian ban on April 26, covering social media and AI chatbots; the announcement came without reference to Australia's enforcement data; presents this as a child protection win
The Stack Stories — Researcher critique: argues age limits are a dead end and governments should instead target algorithmic design and data collection practices; the evidence base for harm from social media access per se is contested; the policy addresses access rather than the mechanisms of harm
