Turkey Bans Social Media for Under-15s. The Global Wave Just Got Bigger.
What happened
Turkey's parliament passed legislation on April 22 banning social media use by children under 15. The law requires platforms to implement age verification, provide parental control tools, respond more quickly to harmful content, and appoint a local representative in Turkey. Game software companies are also covered and must classify games by age. The law came one week after a 14-year-old student killed nine classmates and a teacher at a school in Kahramanmaras; police are examining the shooter's online activity. President Erdogan, who must sign the bill within 15 days, publicly called social media platforms 'cesspools.' Turkey's main opposition, the Republican People's Party, criticized the ban as rights-violating. Turkey joins Australia, Indonesia, Malaysia, and Greece in passing similar restrictions, with the UK and France signaling similar intentions.
The school shooting gave Erdogan a politically unassailable justification to pass a law he wanted for different reasons, and the same dynamic is playing out in every country where child safety has become the crowbar that opens the door to platform regulation and the surveillance infrastructure it requires.
The Hidden Bet
Age verification protects children from social media harm
Age verification has never worked at scale. Australia's ban took effect in December 2025, and regulators are already documenting VPN workarounds by the cohort it was designed to protect. The teenagers most at risk from social media harm are also the most technically capable of circumventing age gates. The law changes who is nominally on a platform, not who is actually on it.
This is primarily a child welfare policy
Turkey has spent a decade using legal mechanisms to restrict social media access for political reasons: demanding local representatives who can be pressured to remove content, imposing bandwidth restrictions, and blocking platforms that don't comply. The new law's requirement for local representatives and rapid content removal applies to every user, not just children. The child welfare framing is politically bulletproof and provides the institutional infrastructure that broader censorship has always required.
Western democracies passing similar laws face the same tradeoffs as Turkey
The UK and France have stronger judicial oversight, independent regulators, and different relationships between state power and platform compliance than Turkey does. But the surveillance infrastructure required for age verification, specifically the collection of identity documents and biometric data to authenticate users, is identical regardless of the government asking for it. The risk is not that the UK becomes Turkey, but that the infrastructure built for child safety can be repurposed by any future government.
The Real Disagreement
The fork is between two positions that both have genuine support among people who care about children. The first: unlimited platform access at any age causes demonstrable harm, including increased rates of depression, anxiety, and self-harm in adolescents, and the parental consent model is failing because parents cannot practically monitor their children's digital lives. The opposing view: prohibition and surveillance produce children who are either excluded from the digital literacy their peers are developing, or who learn to operate underground in ways that are less visible and potentially more dangerous. The harm data favors the first position for very young children. For 14-year-olds the evidence is more contested, and the Kahramanmaras shooter was 14. What you give up by accepting the ban is the ability to teach children to navigate harmful content rather than hiding it from them.
What No One Is Saying
The Kahramanmaras shooter killed because he had access to a firearm, not because he had access to TikTok. Turkey's parliament passed a social media law and not a gun law. The online activity investigation is ongoing, which means the law was passed before anyone established what role, if any, online content played in the attack. The speed of the legislative response suggests the shooting was an occasion rather than a cause.
Who Pays
Turkish teenagers between 13 and 15
Immediately upon the law taking effect
They are excluded from platforms where their peers organize socially and access information, with no practical mechanism for age-appropriate access to digital social life; the more technically capable will circumvent the ban and face no consequences, while the less technically capable will be genuinely excluded
Turkish political dissidents and journalists who rely on younger users amplifying content
Ongoing, as enforcement ramps up in the 12 months following the law's enactment
The local representative requirement gives the Turkish government direct leverage to demand content removal under penalty of bandwidth throttling; the child safety framing makes it politically impossible to resist compliance without appearing to endanger children
Global users of age verification-mandated platforms
Slow-burn, years out, but permanent
Every country that requires platforms to build age verification infrastructure creates databases of identity documents linked to social media activity that are targets for data breaches and that can be subpoenaed or hacked. The risk is not Turkish; it is wherever the data is stored and whoever the next government is.
Scenarios
Compliance Theater, Circumvention Normalized
Platforms implement nominal age verification, Turkish teenagers use VPNs and parents' accounts at rates comparable to Australia, the government claims success, enforcement focuses on platforms that don't cooperate rather than actual underage users. The ban becomes a compliance cost rather than an effective restriction.
Signal: Downloads of VPN apps in Turkey spike 300% or more within 60 days of enforcement beginning, as documented in app store data
Law Weaponized for Content Removal
The local representative requirement enables a series of takedown demands framed as protecting children but targeting political content. Platforms that comply face accusations of censorship; platforms that resist face bandwidth throttling. The child safety law becomes the de facto content moderation law.
Signal: A documented request from Turkish authorities to remove political content, citing the child safety law's rapid-response requirement
Western Cascade
UK legislation passes within six months, citing the global wave and Turkey's law as a model. France follows. The EU begins considering a harmonized standard. Age verification surveillance infrastructure becomes a global norm rather than an outlier.
Signal: The UK parliament passes an age verification bill on second reading before the end of 2026
What Would Change This
If the Kahramanmaras investigation concludes that online content played no documented role in the attack, the entire rationale for the law collapses in legal terms, though politically it may not matter. The analysis would also shift if age verification technology advances to the point where it can be implemented without creating identity databases, a capability that does not currently exist.
Related
The Kids App Ban That Big Tech Will Actually Win
The Social Media Ban That Already Failed
The EU Just Built the Infrastructure to Ban Children from Social Media. The Harder Question Is Who Decides What a Child Is.
The World Is Copying Australia's Teen Social Media Ban. Australia's Is Already Failing.