Five Countries, One Law They Cannot Write
What happened
On April 15, Canada's Culture Minister announced the government is "very seriously" considering banning social media access for children under 16, days after Liberal party convention members passed a non-binding resolution calling for the ban. Simultaneously, Australia's world-first legislative ban on under-16 social media access is encountering significant practical and legal difficulties as it moves toward implementation. The EU Commission is deploying an age verification app and seeking to prevent a fragmented landscape of national bans across its 27 member states. In the US, Massachusetts Governor Maura Healey proposed state-level restrictions on youth social media use, adding to similar actions in Florida, Utah, and other states. Multiple governments are reaching the same legislative destination through different political routes, and none has a clear answer to how it would actually verify a child's age without invasive surveillance of all users.
Every government promising to ban social media for minors is actually promising to build a national age verification infrastructure that will also monitor every adult who uses those platforms, and they are hoping no one notices that the two things are the same.
The Hidden Bet
Age verification is a technical problem with a technical solution
Every proposal to verify that a user is over 16 requires collecting identity information about every user, including adults. Governments and platforms have not explained how they will prevent that data from being repurposed, breached, or subpoenaed by law enforcement. The EU's age verification app is being positioned as a privacy-preserving solution, but the architecture required to verify age at scale is incompatible with anonymous internet access. The privacy cost is being deferred, not avoided.
Banning minors from social media will reduce harm to children
The evidence that social media causes mental health harm to minors is contested among researchers. Australia's legislative push began after years of advocacy claiming clear causal links, but systematic reviews have found the evidence to be largely correlational and weaker than advocates claim. A ban that imposes large surveillance costs on all users to address a harm that may not be caused by the mechanism being targeted is a policy that serves the political need to act more than the empirical need to reduce harm.
Platforms will comply seriously
Meta, TikTok, and Snapchat have lobbied intensively against every version of these laws. When laws pass, platforms implement compliance that satisfies the letter while undermining the spirit: they verify ages in ways that are trivially bypassed by any teenager who knows their parents' birth year. The enforcement burden falls on regulators, not platforms, and regulators have not demonstrated the capacity to audit platform-level age verification at scale.
The Real Disagreement
The real fork is whether children's online safety requires restricting minors' access or restricting the platforms' design choices that harm everyone, including adults. Banning under-16s from social media leaves adults exposed to the same algorithmic manipulation that drives engagement at the cost of wellbeing, the same recommendation systems that amplify outrage, and the same data collection that funds the business model. Restricting those design choices would require confronting the platforms' revenue model directly; restricting minors' access lets governments look decisive while leaving the underlying system intact. Both positions cost something. The minor-ban path builds surveillance infrastructure; the design-restriction path confronts extremely powerful lobbying interests and requires remaking the business model of trillion-dollar companies. Most governments are choosing the minor-ban path because the surveillance cost is invisible at legislation time while the lobby fight is immediate. That is the honest description of what is happening, and I lean toward design restriction as the more honest policy, though it is politically much harder.
What No One Is Saying
Every government proposing a social media ban for minors is also implicitly proposing that everyone else prove their age to use the same platforms. No minister is saying that out loud. They are all describing the system as a check on children. It is a registry of adults who use social media.
Who Pays
Teenagers who use social media to access peer support, LGBTQ+ community, or news
Immediately upon implementation
A ban removes access to social connection that, for some adolescents, is the primary source of community identity unavailable in their physical environment. The harms of social media are not evenly distributed, and neither are the harms of exclusion.
Adults in countries with national age verification systems
Medium-term; 3-10 years post-implementation
Once an age verification database exists, it becomes a target for hacks, a tool for law enforcement requests, and a mechanism for future expansion. The privacy cost is not paid at the time of legislation; it is paid in the decade after, when the infrastructure is repurposed.
Smaller social platforms and new entrants
From the day the law takes effect
Age verification compliance costs are a fixed overhead that large platforms can absorb and small competitors cannot. Every government mandate on age verification is also a competitive moat for Meta, TikTok, and Snapchat.
Scenarios
Patchwork enforcement
Laws pass in Canada, Australia, and multiple US states. Platforms implement nominal age gates. Teenagers route around them in days. Governments declare partial success, pass minor amendments, and move on. No fundamental change to platform behavior.
Signal: Six months post-implementation, no platform faces a significant fine for age verification failure
EU sets the standard
The EU's age verification app becomes the de facto global standard because platforms must comply with it to access European markets. Australia and Canada adopt compatible frameworks. The surveillance infrastructure is technically EU-controlled but practically global.
Signal: A major US platform announces EU age verification app integration as its global compliance solution
Legal challenge succeeds
Australia or a US state law is struck down on First Amendment or equivalent grounds. Courts rule that banning minors from social media is a content restriction that cannot survive constitutional scrutiny. Legislative momentum stalls for 3-5 years.
Signal: A federal court grants a preliminary injunction against a US state social media minor ban within 60 days of enactment
What Would Change This
If a peer-reviewed meta-analysis publishes clear causal evidence that reduced social media use improves adolescent mental health outcomes under controlled conditions, the ban's premise is validated and the design-restriction argument loses ground. If Australia's implementation produces measurable reductions in teen mental health crisis rates with no demonstrable surveillance harms, the case for bans becomes much stronger.