The EU Just Built the Infrastructure to Ban Children from Social Media. The Harder Question Is Who Decides What a Child Is.
What happened
European Commission President Ursula von der Leyen announced on April 15 that the EU's age verification application is now technically ready for deployment. The app allows users to verify their age using a passport or national ID card, generating anonymous age tokens that can be shared with online platforms without transmitting the underlying identity documents. Multiple EU member states had been independently building conflicting national systems; the Commission's app is designed to prevent fragmentation into incompatible rules. Separately, the Canadian government said it is 'very seriously' considering minimum age requirements not only for social media but also for AI chatbots, citing comparable developmental risks to children.
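The announcement does not specify the app's cryptographic protocol, but the core idea of an anonymous age token can be sketched. The following is a minimal illustrative sketch, not the EU app's actual design: a real deployment would use asymmetric signatures or zero-knowledge proofs rather than a shared HMAC key, and all function names here are invented.

```python
# Hypothetical sketch of an anonymous age-token flow. NOT the EU app's
# actual protocol; HMAC with a shared key stands in for the issuer's
# signature so the example stays self-contained.
import hmac
import hashlib
import json
import secrets

ISSUER_KEY = secrets.token_bytes(32)  # held by the state-run issuer

def issue_age_token(birth_year: int, current_year: int = 2025) -> dict:
    """Issuer inspects the ID document, then emits a token carrying only
    an over/under-18 claim and a random nonce -- no name, no ID number."""
    claim = {"over_18": current_year - birth_year >= 18,
             "nonce": secrets.token_hex(16)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verify(token: dict) -> bool:
    """Platform checks the claim's integrity without ever seeing the
    underlying passport or national ID card."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]
```

The design point the Commission is relying on is visible even in this toy version: the platform learns a single boolean, not an identity.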
The EU has solved the technical problem of age verification; it has not touched the political problem, which is that the same app that locks children out of TikTok can also lock them out of any content a future government decides is age-inappropriate.
The Hidden Bet
The age verification app is a child protection tool and nothing more
A functioning state-issued digital identity infrastructure that gates access to online content is also, structurally, a censorship infrastructure. The question is not whether the app will be abused today but whether it creates the conditions for abuse by a future government. Hungary vetoes EU sanctions. Poland's rule-of-law crisis ran for years. The member states building this system today are not necessarily the same member states that will be operating it in five years.
Platforms cannot route around age verification at acceptable cost
The EU app requires platforms to integrate its API. Platforms operating from outside the EU, or using non-EU servers, face limited enforcement. The history of platform compliance with the Digital Services Act suggests companies will comply minimally in ways that satisfy regulators without actually blocking access. Age verification in practice may work on teenagers who respect the rules and fail on the teenagers the legislation is designed to protect.
Canada extending the logic to AI chatbots is a natural extension of child safety policy
AI chatbots and social media are structurally different: social media is a networked identity space where peer pressure and exposure to peers are the mechanisms of harm. AI chatbots are conversational tools where the harm model is about content and dependency, not social identity. Treating them identically produces a policy that looks consistent but governs two different problems with the same instrument.
The Real Disagreement
The real fork is not between 'protect children' and 'protect free speech.' Both sides agree children should be protected and adults should be free. The actual disagreement is about what the default should be: should online spaces be open by default with parental override, or restricted by default with verified age override? The current EU approach is moving toward restriction by default, which means the state decides what is age-appropriate and the burden of proof falls on the person trying to access content. That is a fundamentally different relationship between citizen and state than the one the internet was built on. I lean toward the parental-override model rather than state-default restriction, but what you give up is the political cleanness of a uniform rule: parental override requires parents to engage, which protects kids whose parents engage and fails kids whose parents don't.
What No One Is Saying
The platforms that built the mental health crisis in the first place are now being handed a regulatory framework that makes them the primary enforcement mechanism for state-issued age verification. The app von der Leyen announced requires Meta, TikTok, and Google to accept EU-issued age tokens as gatekeeping credentials. This gives the platforms a new role as licensed verifiers of state identity decisions, which is a form of privatized border control that did not exist before. The platforms are not objecting loudly because this is a regulatory burden that large incumbents can absorb and small competitors cannot.
Who Pays
Teenagers in countries with poor digital infrastructure
Immediate upon enforcement rollout
Age verification requiring a passport or national ID presupposes that every child has one. In EU member states with lower administrative capacity, undocumented children or children without valid ID will be locked out of the legal internet and driven toward unregulated alternatives.
Small social platforms and niche online communities
6-12 months after mandatory enforcement begins
Integration with the EU age verification API has implementation and compliance costs that large platforms can absorb but that will close off the long tail of smaller communities. The regulation will consolidate traffic toward the incumbents it was designed to constrain.
Privacy advocates and civil liberties organizations
Slow-burn over years, as the infrastructure matures and scope creep begins
Even if the current system is designed with privacy safeguards, establishing the precedent of state-issued online identity tokens changes the legal baseline for every future argument about surveillance infrastructure. The 'anonymous' token design can be reversed by legislative amendment.
Scenarios
Harmonized Compliance
Major platforms integrate the EU app; most EU member states adopt it as their national standard; enforcement is patchy but the system becomes the assumed baseline for online age policy globally. Canada and Australia adopt compatible frameworks. The US continues to litigate state-level age laws without federal coordination.
Signal: Meta and TikTok both announce integration timelines within 90 days of the app's official launch date.
Fragmented Failure
Several member states refuse to adopt the EU app and maintain their own incompatible systems; platforms negotiate separate compliance regimes with each country; teenagers route around enforcement via VPNs and age-misrepresentation. The regulation produces compliance theater but not actual protection.
Signal: France or Germany announces a parallel national system within six months of the EU launch.
AI Extension Backfires
Canada moves forward on AI chatbot age restrictions; other jurisdictions follow; age verification becomes required for GPT-type interfaces. This creates a two-tier AI ecosystem: verified adults with full access and minors with sanitized AI responses. The educational use case for AI collapses as teachers cannot deploy tools that require student identity verification.
Signal: A major educational technology provider publicly withdraws from Canada or an EU market citing AI age verification compliance costs.
What Would Change This
If an independent technical audit of the EU app shows that the anonymous token system can actually be de-anonymized through metadata correlation, the privacy-preserving case for the infrastructure collapses and the civil liberties objections become the dominant narrative.
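The correlation risk such an audit would test is easy to illustrate. Even if the token itself carries no identity, an issuer that logs who requested a token and when, combined with a platform that logs when each token was redeemed, can re-link tokens to people by timestamp alone. The names, timings, and matching window below are all invented for the sketch.

```python
# Illustrative sketch of metadata-correlation de-anonymization.
# The token is anonymous, but issuance and redemption timestamps
# are not. All data here is hypothetical.

issuer_log = [          # (citizen, token issued at, seconds since epoch)
    ("alice", 1000.0),
    ("bob",   1450.0),
    ("carol", 2300.0),
]
platform_log = [        # (anonymous token id, redeemed at)
    ("tok_x", 1002.5),
    ("tok_y", 1451.1),
    ("tok_z", 2301.9),
]

def correlate(issuer_log, platform_log, window=5.0):
    """Match each redeemed token to the issuance event closest in time.

    If a token is typically used within seconds of being issued, the
    issuer's 'anonymous' log and the platform's log jointly identify
    the user with high probability."""
    links = {}
    for token, used_at in platform_log:
        gap, who = min((abs(used_at - t), who) for who, t in issuer_log)
        if gap <= window:
            links[token] = who
    return links
```

Defenses exist (batched issuance, mandatory delays, unlinkable credentials), but they have to be designed in from the start; a legislative amendment requiring retention of issuance logs would quietly undo them.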