The EU Charged Meta With Letting Children Onto Instagram. Meta Said It Uses Self-Declared Birthdays.
What happened
European Union regulators charged Meta on April 29 with failing to keep children off Instagram and Facebook in violation of the Digital Services Act, passed in 2022 to force platforms to police themselves more aggressively. The specific allegation is that Meta's age verification system relies on users entering their own date of birth without any effective mechanism to check whether the date is accurate. The EU said Meta appears to be in violation and opened formal proceedings. Meta did not dispute the factual basis of the complaint, instead stating it had 'a zero-tolerance policy' for underage use and had 'strengthened safeguards.'
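The mechanism at issue is simple to state: the platform derives an age from whatever birthdate the user types, with nothing cross-checking the claim. A minimal sketch of such a self-declaration gate (hypothetical code for illustration, not Meta's actual implementation):

```python
from datetime import date

def age_from_claimed_birthdate(claimed: date, today: date) -> int:
    """Compute age in whole years from a user-supplied birthdate.
    Nothing here verifies the claim; the date is whatever was typed."""
    years = today.year - claimed.year
    # Subtract one year if the birthday hasn't occurred yet this year.
    if (today.month, today.day) < (claimed.month, claimed.day):
        years -= 1
    return years

def passes_age_gate(claimed: date, today: date, minimum: int = 13) -> bool:
    # The entire "verification": trust the self-declared date.
    return age_from_claimed_birthdate(claimed, today) >= minimum

# A ten-year-old who types 1990 as a birth year passes the gate.
print(passes_age_gate(date(1990, 1, 1), date(2024, 4, 29)))  # True
```

The gate only filters honest answers: any child who enters a false year clears it, which is the gap the EU's charge targets.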
Meta built its entire youth engagement strategy on the fiction that telling people not to lie about their age is sufficient age verification. The EU just said that fiction is now illegal.
The Hidden Bet
Effective age verification is technically feasible at scale
Every proposed alternative to self-declaration has significant costs: ID verification requires collecting government documents, which creates privacy risks and excludes users without IDs; device-based checks can be circumvented; parental consent systems can be faked by children or gamed by inattentive parents. The EU is demanding an outcome without specifying a method that doesn't create new harms.
Meta will comply with the spirit of any order rather than its letter
Meta has a documented history of compliance theater: implementing the minimum required by law while designing interfaces that maximize engagement with teenage users. The company's revenue model depends on young users, and any age verification system that actually works would shrink that user base, so Meta has no internal incentive to build one.
This enforcement action will meaningfully change outcomes for children
Children who want to be on Instagram already know how to use their parent's email address, lie about their birthdate, and navigate around restrictions. Age verification reduces casual underage access, not determined underage access. The marginal child being protected by stronger verification is not the child being harmed by Instagram's recommendation algorithms.
The Real Disagreement
The real disagreement is not about age verification mechanics. It is about whether governments should regulate social media platforms like infrastructure, with mandatory safety standards, or like publishers, where editorial discretion governs what appears. The DSA tries to split the difference by requiring platforms to police their own rules. The problem is that the rules Meta is violating are Meta's own terms of service, which exist partly to satisfy regulators and partly to provide legal cover. Enforcing Meta's stated rules against Meta is a more achievable goal than making social media safe for children, but it is not the same thing as making social media safe for children.
What No One Is Saying
Instagram's teen engagement metrics are among the highest in its history. If Meta's current age verification is effectively zero, and teens are heavily on the platform, then stronger verification will reduce revenue. The company has every financial incentive to continue claiming compliance while making the minimum possible change. The EU knows this. The question is whether it is willing to impose fines large enough to change the math.
Who Pays
Teenagers who are actually harmed by Instagram
When: depends entirely on what Meta is actually required to do.
Stronger age verification, if it works, removes access but does not address the algorithmic systems that drive the documented harms (eating disorder content, self-harm promotion, social comparison). The DSA focuses on access, not content mechanics.
Meta shareholders
When: fines upon a formal finding of violation, likely 12-24 months out; user impact depends on the remediation timeline.
Potential fines under the DSA can reach 6% of global annual revenue, which for Meta would be approximately $7 billion. More practically, effective age restrictions would reduce the teen user base that drives long-term engagement.
Smaller European platforms
When: ongoing compliance burden from 2026 forward.
Compliance costs for age verification systems fall proportionally harder on smaller companies; large platforms can build the infrastructure, small competitors cannot.
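The shareholder exposure above is straightforward arithmetic under the DSA's 6% fine cap. A quick check, using Meta's reported 2022 revenue of roughly $116.6 billion as an illustrative input:

```python
def dsa_max_fine(global_annual_revenue: float, cap: float = 0.06) -> float:
    """The DSA caps fines at 6% of worldwide annual turnover."""
    return global_annual_revenue * cap

# Illustrative input: Meta reported roughly $116.6B in revenue for 2022.
meta_revenue_2022 = 116.6e9
print(f"${dsa_max_fine(meta_revenue_2022) / 1e9:.1f}B")  # prints "$7.0B"
```

That ceiling is the source of the "approximately $7 billion" figure; actual fines, if any, would be set well below the cap in most enforcement outcomes.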
Scenarios
Compliance Theater
Meta introduces a more visible age verification prompt that still relies on self-declaration but adds a checkbox confirming the user is over 13. EU accepts this as initial compliance. The actual behavior of underage users on the platform does not change.
Signal: a Meta press release announcing 'enhanced age verification' within 90 days, with no third-party audit component.
Hard Enforcement
EU issues a formal finding, imposes a substantial fine, and mandates a specific technical standard (such as device-level age estimation or third-party ID verification) with a deadline. Meta lobbies aggressively, loses, and implements the required system. Teen users migrate to TikTok and other platforms outside EU jurisdiction.
Signal: the European Commission proceeding to a formal decision rather than accepting Meta's remediation plan.
Negotiated Standard
Meta and the EU co-design a standards-based approach that satisfies the letter of the DSA while preserving most existing functionality. Other major platforms adopt the same standard. The EU claims victory. Academic researchers later find that underage access continued at 70-80% of prior levels.
Signal: Meta inviting EU officials to participate in a 'collaborative working group' on age verification design.
What Would Change This
If a technical standard for age verification existed that was both effective and privacy-preserving, the regulatory argument would become much cleaner. No such standard currently exists. If the EU specifies a requirement that Meta cannot meet without genuine harm to its user base, the pressure is real. If the requirement can be satisfied with a checkbox, the bottom line is wrong.
Related
Norway, Indonesia, and Australia Are Banning Social Media for Under-16s. Meta Is Threatening to Leave States That Try It.
New Mexico Wants to Redesign Instagram. Meta Wants to Leave the State Instead.
Meta Threatens to Shut Down in New Mexico Rather Than Protect Kids
The Social Media Ban That Already Failed