The Social Media Ban Experiment Is Failing in Real Time
What happened
Australia enacted the world's first social media ban for children under 16 in December 2025. A survey of 1,050 Australian teens conducted by the Molly Rose Foundation found that more than 60% of teens who had accounts before the ban continue accessing at least one platform, with TikTok, YouTube, and Instagram retaining over half their under-16 user base. Workarounds include using parents' facial recognition credentials, VPNs, and mesh face masks to defeat age verification. In the UK, the government tabled an amendment allowing ministers to wait up to three years before introducing restrictions, triggering a Lords revolt led by Conservative peer Lord Nash; the Lords vote today on a competing amendment that would force action within 12 months. Manitoba Premier Wab Kinew announced plans to follow Australia's example and ban social media for under-16s, despite the Australian evidence. New Mexico juries recently held YouTube and Meta liable for addictive design features harming minors.
Governments around the world are copying Australia's ban while Australia's own data shows the ban does not work. The political calculation is to do something visible regardless of whether that something is effective.
The Hidden Bet
The 60% noncompliance rate means the ban is failing
A 40% reduction in access may still represent a meaningful harm-reduction outcome, even if it is not the clean result policymakers claimed. The question is not whether the ban is perfect but whether 40% fewer teen users translates into measurable mental health improvements in a population-level study two or three years from now. The failure framing may be premature.
Age verification technology is the bottleneck
The real bottleneck is enforcement will, not technology. Platforms have not removed pre-existing accounts, and Australia's internet regulator only recently began investigating them for violations. The ban's ineffectiveness is partly a political choice: governments impose bans they are not prepared to enforce, because enforcement would require confronting platforms on which they are politically dependent.
Social media causes teen mental health problems
The causal link is contested in the research literature. Correlations between heavy social media use and anxiety or depression exist, but the direction of causation is unclear. Teens who are already struggling may seek out social media more, rather than social media causing the struggle. Bans that target the platform rather than the underlying isolation or gaps in mental health infrastructure may miss the actual cause.
The Real Disagreement
The real fork is not whether to ban social media for teens. It is whether government has the regulatory capacity to enforce what it bans, and whether the ban actually addresses the harm. One side: even imperfect enforcement sends a norm signal that changes behavior at the margins, and the potential upside of protecting some fraction of children from genuinely addictive and harmful design justifies the cost. Other side: bans drive behavior underground, remove parental visibility, create a two-tier system by economic class (richer kids get better VPNs), and substitute the appearance of action for the harder work of improving digital literacy, mental health services, and platform accountability. The Australian data supports the second position more than the first, but the political incentives strongly favor bans because they are visible and easy to announce. I lean toward targeted liability over blunt age-gates: the New Mexico verdict against Meta and YouTube for addictive design is more likely to change platform behavior than any ban.
What No One Is Saying
Platform executives know their products are addictive to some teens. Their internal research has shown this for years. They are not saying so publicly because it creates liability. Governments are not saying they lack enforcement capacity because it looks weak. Parents are not saying they cannot monitor their children's online activity effectively because it feels like failure. Everyone has a reason to support bans they privately suspect will not work.
Who Pays
Teens from lower-income families
Immediate upon enforcement
Enforcement of age-gating through expensive identity verification creates a two-tier internet. Kids with tech-savvy parents or access to paid VPN services circumvent bans easily; kids without those resources may actually lose access, including to beneficial uses like sports team coordination, job searching, and connecting with distant family.
Meta, TikTok, YouTube
Q3-Q4 2026 as enforcement catches up
Legal liability exposure is expanding. The New Mexico verdict establishing liability for addictive design is the larger threat. If Australia's enforcement investigation finds platforms in violation, fines under the ban framework could be substantial. The global wave of copycat legislation creates a patchwork of compliance requirements.
Scenarios
Lords back Nash, UK acts within 12 months
Today's vote passes Nash's amendment, Parliament enshrines a 12-month deadline before prorogation, and the UK government is forced to implement age restrictions for under-16s by early 2027. Australia's enforcement experience serves as a negative model, and the UK adds stronger platform liability provisions.
Signal: Lords vote today passes Nash's amendment by a margin similar to the previous three votes
Governments quietly abandon enforcement
Australia's enforcement investigation produces modest fines but no meaningful platform change. The compliance rate stays around 40%. Governments keep the ban on paper to satisfy parents but do not escalate penalties. The ban becomes symbolic legislation rather than operational policy.
Signal: Australia's internet regulator closes its investigation without substantial platform penalties by Q3 2026
Liability model replaces ban model
New Mexico-style verdicts multiply across US states. European courts apply similar logic under the Digital Services Act. Platforms face cumulative liability exposure that forces redesign of recommendation algorithms for minors, parental controls, and screen time features, without the enforcement problems of outright bans.
Signal: More than three US states pass legislation creating a private right of action for minors harmed by platform design within 2026
What Would Change This
If longitudinal studies in Australia show significant mental health improvements in the 40% of teens who actually stopped using social media as a result of the ban, the bottom line shifts: even partial enforcement generates real benefit. The current data is too early and too compliance-focused to answer that question.