April 28, 2026

The Enforcement Problem No One Wants to Name


What happened

In the same week, the UK government committed to imposing age or functionality restrictions on social media for under-16s regardless of its ongoing consultation, Norway announced plans for under-16 legislation requiring tech companies to enforce age verification, and Manitoba Premier Wab Kinew announced Canada's first provincial ban on social media and AI chatbots for minors. All three announcements followed Australia's December 2025 Social Media Minimum Age law, the first of its kind globally. An Australian government report published this month found that approximately 70% of Australian children who had social media accounts before the ban still have access, and that some platforms have been encouraging children to retry verification until they are approved. A 2025 hack of an age-verification firm serving the UK and Australia exposed the government IDs of 70,000 Discord users.

Every government announcing a youth social media ban this week already knows bans do not work, because Australia ran the experiment first. The political decision to announce bans is not driven by evidence of effectiveness. It is driven by parental anxiety, tragic case visibility, and a calculation that announcing action costs less than defending inaction, even when the action fails.

The Hidden Bet

1. Age verification technology will improve enough to make bans enforceable

Age verification at scale requires one of three things: government ID upload, facial age estimation, or a third-party verification service. All three have significant privacy and accuracy problems. Facial age estimation is notoriously unreliable at distinguishing 15-year-olds from 16-year-olds (a rough numerical sketch follows this list). Third-party services are already getting hacked. There is no technical trajectory that makes this problem easy.

2. Banning platforms for under-16s reduces social media harms to that age group

Australia's eSafety Commissioner found no observable change in cyberbullying or image-based abuse complaints in the months following the ban. VPN downloads tripled as children circumvented it. Children who circumvent bans do so without the parental oversight the ban was designed to restore. Circumvention may produce worse outcomes than access.

3. Social media companies will enforce bans more effectively than they have historically enforced age policies

Meta, TikTok, and others have had minimum age requirements for years. Those requirements have never been enforced at meaningful scale. The incentive to grow monthly active users conflicts with genuine age enforcement. Fines are small relative to revenue. Some platforms, according to Australian reports, are actively undermining their own verification systems.
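
To put a number on the first bet: the sketch below assumes a facial age estimator whose error is roughly normal with a 1.5-year standard deviation. That figure is an assumption for illustration, not a measured property of any deployed system, but under anything like it, a hard cutoff at 16 behaves close to a coin flip for exactly the ages the law cares about.

```python
# Back-of-envelope sketch, NOT any vendor's real error model: assume a
# facial age estimator whose output is normally distributed around the
# true age with a 1.5-year standard deviation (assumed, illustrative).
from statistics import NormalDist

SIGMA = 1.5    # assumed estimator error in years (illustrative only)
CUTOFF = 16.0  # the legal threshold the ban asks platforms to apply

for true_age in (14.0, 15.0, 15.5, 16.0, 17.0):
    # P(estimated age >= CUTOFF) when the estimate is N(true_age, SIGMA)
    p_pass = 1 - NormalDist(mu=true_age, sigma=SIGMA).cdf(CUTOFF)
    print(f"true age {true_age:4.1f}: passes the 16+ check {p_pass:6.1%} of the time")
```

Under these assumptions a 15-year-old passes roughly a quarter of the time and a 16-year-old fails roughly half the time. Raising the cutoff to create a buffer reduces false passes but locks out legitimate older teens; the tradeoff does not go away.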

The Real Disagreement

The fundamental fork is between two interventions that are not equally politically available. Evidence-based approaches to reducing social media harm in children include algorithmic curation restrictions, default privacy settings, parental control tools, and time limits, all of which already exist in some form and none of which require age verification infrastructure. Bans are politically more visible and feel more decisive, but the evidence says they do not work. The real disagreement is not about whether children should be protected from social media harm. It is about whether the political benefit of announcing a ban justifies imposing a surveillance infrastructure on all adults who must prove they are not minors to access the internet. The lean here is that the ban approach is the wrong intervention chosen for the right reason, and the people most harmed by its failure are the children it was meant to help.

What No One Is Saying

Every government that announces a social media ban is effectively outsourcing child protection to the platforms it is trying to regulate. Age verification requires trusting the companies whose design choices caused the problem in the first place to police their own user base honestly. The governments that are serious about this problem should be funding independent age-verification infrastructure, which the EU is moving toward, rather than issuing mandates and hoping platforms comply.
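
For the curious, here is a minimal sketch of what that independent infrastructure looks like in principle: a government-funded verifier checks the ID once and signs a minimal claim, and the platform verifies only the signature. The claim format and flow below are hypothetical simplifications, not the EU specification; a real system would add revocation, freshness, and unlinkability.

```python
# Hypothetical signed-claim flow, a simplification rather than the EU
# specification. Requires the "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: the independent verifier sees the ID once and keeps it.
issuer_key = Ed25519PrivateKey.generate()
claim = b"over16=true"            # the only fact the platform ever learns
token = issuer_key.sign(claim)    # handed to the user, not stored by the platform

# Platform side: verifies the claim against the issuer's public key.
# No government ID or biometric data ever reaches the platform.
try:
    issuer_key.public_key().verify(token, claim)
    print("age claim accepted; no identity data transferred")
except InvalidSignature:
    print("age claim rejected")
```

The design point is that the breach surface moves: the platform holds nothing worth stealing, and the verifier holds IDs but learns nothing about what anyone does with them.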

Who Pays

All adult internet users

When legislation takes effect; UK consultation deadline is May 26

Any system that verifies children are absent requires adults to prove they are not children. That means uploading government identification or biometric data, or going through a third-party verification service, to access social media and, in some proposals, messaging apps and gaming platforms.

Children who circumvent bans without parental knowledge

Immediately after any ban takes effect

Children who use VPNs to access banned platforms do so outside normal app stores and parental visibility. They may access more harmful content and less moderated environments than the mainstream platforms the ban targeted.

Age verification services

Ongoing, with breach risk growing as mandates spread

Centralized repositories of government IDs and biometric data for millions of users create high-value hacking targets. The 2025 breach that exposed 70,000 Discord users' government IDs is an early indicator of the systemic risk this infrastructure creates.

Scenarios

EU verification passkey becomes global standard

The EU's independent age-verification system, which works like a passkey without requiring platforms to hold ID data, proves effective in early deployment. UK, Norway, and Manitoba align their systems with it. The debate shifts from bans to verified-access infrastructure.

Signal: EU announces measurable enforcement success rates above 85% in the first six months of the passkey system.

Bans fail visibly, political backlash builds

UK and Norwegian bans go into effect and produce the same 70% circumvention rate Australia found. Parliamentary scrutiny intensifies. A data breach at a UK age-verification provider exposes millions of adults' IDs and triggers a class-action lawsuit.

Signal: Ofcom, the UK's online safety regulator, publishes a compliance report showing no material reduction in under-16 platform access six months post-implementation.

US federal action on algorithmic restrictions instead

Bipartisan US federal legislation focuses on algorithmic recommendation bans and privacy protections rather than age-based bans, in line with the First Amendment concerns raised in the Stars and Stripes analysis. The EU passkey model and the US algorithmic model diverge, creating two competing global frameworks.

Signal: The US federal bill passes committee with the age-ban provision removed and the algorithmic restriction provisions strengthened.

What Would Change This

If controlled studies in Australia or another jurisdiction demonstrated that bans reduced rates of depression, anxiety, or online abuse in the affected age group compared to control groups, the case for bans as effective policy would strengthen materially. The current evidence shows no such effect.

Sources

MarketScreener / Alliance News — UK government announcement: restrictions will be imposed regardless of consultation outcome. Education Minister Bailey confirmed the decision is 'how we act, not if.'
The Peak — Manitoba context: Canada's first provincial ban on social media and AI chatbots for minors, without specified age, enforcement mechanism, or timeline. Federal government also 'very seriously' considering similar measures.
The Cyber Express — Norway's proposal: under-16 limit with responsibility placed on tech companies for age verification, aligning with EU Digital Services Act trends.
Stars and Stripes — Critical analysis: age verification for children requires age verification for everyone. Australia's enforcement has failed. VPN downloads tripled. First Amendment concerns in the US context.
Deseret News — Utah enforcement model: law requires private lawsuits rather than government enforcement, and Big Tech dropped its challenge after Utah strengthened rather than weakened the law.
