April 30, 2026

Instagram Cracks Down on Reposts the Same Week ADL Finds It Fails to Remove 93% of Hate Content

Ismail Kaplan / Anadolu via Getty Images

What happened

Instagram announced on April 30 that accounts primarily sharing content they did not create will no longer be eligible for algorithmic recommendations across the app. The change extends rules already in place for Reels to photos and carousels, targeting content aggregator accounts. The same week, the Anti-Defamation League published a study finding that Instagram failed to remove 93% of hateful and extremist content reported by ADL researchers, including accounts linked to white supremacist networks, foreign terrorist organization supporters, and Nazi merchandise vendors. The ADL's findings come 16 months after Meta CEO Mark Zuckerberg announced the company would end its fact-check program and stop using automation to detect and remove hate speech.

Meta enforces the rules that protect its creator economy and ignores the rules that would cost it engagement: the aggregator crackdown protects the creators Meta monetizes; the extremism rollback protects the outrage content that drives time-on-app.

The Hidden Bet

1. Meta's moderation decisions are made on the basis of safety and community standards

The two decisions announced this week cut in opposite directions: restrict low-harm aggregation (which competes with creators who pay Meta), leave high-harm extremism (which generates engagement Meta monetizes). The pattern suggests moderation is downstream of business model, not values.

2. The ADL's 93% failure rate reflects resource constraints that Meta could fix with more investment

Meta explicitly chose to reduce automated hate speech detection and ended its fact-check program. This is not a capacity problem; it is a policy choice made by Zuckerberg publicly in January 2025. The 93% failure rate is the designed outcome, not an operational shortfall.

3. The aggregator crackdown will meaningfully reduce low-quality content and improve creator earnings

Algorithmic enforcement of 'originality' is notoriously gameable. Low-effort aggregators will add a voiceover, change the crop, or add a caption and pass the filter. The crackdown will mostly affect unsophisticated accounts while sophisticated aggregators route around it within weeks.

The Real Disagreement

The core tension is whether platforms should be neutral infrastructure enforcing only clear legal violations, or active curators responsible for the health of public discourse. Meta under Zuckerberg's current posture is explicitly in the first camp: its rollback of automated moderation was framed as free speech. The ADL's findings are the empirical consequence of that posture. Both positions are internally coherent. The neutral infrastructure argument says Meta shouldn't be the arbiter of what counts as hate; the curation argument says 93% failure to remove white supremacist content is not a neutral outcome. The ADL's shareholder proposal, which got 46% of independent votes in 2025, suggests a significant minority of investors think the neutral infrastructure argument is also a liability argument. Which view wins depends on whether regulatory action, advertiser pressure, or user behavior shifts first.

What No One Is Saying

The aggregator crackdown and the extremism rollback are the same policy at different scales: Meta is asserting the right to determine what content thrives on its platform and what doesn't, but applying that right in ways that serve its revenue. The framing that one is 'protection of creators' and the other is 'free speech' is convenient branding for decisions that both serve the same business interest.

Who Pays

Meme and fan communities

Immediate; the policy is active

Accounts that have built audiences on shared culture, commentary and reaction content face recommendation suppression even when their activity is not extractive. The originality standard is blunt and will catch legitimate cultural commentary alongside pure theft.

Targeted communities on Instagram

Ongoing since January 2025

The ADL study focuses primarily on content targeting Jewish communities, but white supremacist networks do not limit their targets. Black, Muslim, LGBTQ and other communities face the same 93% non-removal rate for content aimed at them. The failure is systematic, not group-specific.

Meta's EU business

Medium-term; DSA enforcement ramp is underway

The EU's Digital Services Act requires platforms to take down illegal content expeditiously. A 93% failure rate on reported extremist content is potential DSA liability. The ADL's proxy filing explicitly cites EU regulatory risk and the possibility of fines up to 6% of global annual revenue.

Scenarios

Regulatory enforcement forces moderation restoration

The EU's Digital Services Act enforcement, combined with the ADL proxy campaign reaching majority shareholder support, forces Meta to restore automated hate speech detection. The company frames it as compliance, not reversal. Extremist content removal rates recover but the political relationship with the Trump administration that Zuckerberg cultivated becomes strained.

Signal: EU DSA formal investigation announcement, or Meta proxy vote on ADL proposal exceeds 50% of independent shareholder votes

Status quo holds

Advertisers don't flee, EU enforcement moves slowly, and the ADL proxy campaign falls short again. Meta's 'free speech' positioning continues to serve its political relationship with the US government, which provides regulatory cover. The 93% failure rate persists as the operational baseline.

Signal: No major advertiser boycott materializes in Q2, and Meta's Q2 ad revenue continues growing above 15% year-over-year

Aggregator crackdown triggers creator backlash

The originality enforcement is gamed by sophisticated aggregators while being aggressively applied to legitimate meme and fan accounts. A high-profile false positive, such as a major fan account being suppressed, generates enough creator anger that Meta quietly softens the enforcement standard.

Signal: A creator with over 1 million followers publicly documents recommendation suppression despite creating original content

What Would Change This

If a major advertiser pauses Instagram buys specifically citing the ADL report, the calculus changes: Meta moved toward safety when advertiser pressure was the mechanism in 2020, and it will again. Without advertiser pressure or regulatory enforcement, the current equilibrium holds.

Sources

TechCrunch — Instagram is expanding its anti-aggregation rules to photos and carousels; accounts that primarily repost others' content will no longer be eligible for recommendations
Pittsburgh Jewish Chronicle / JTA — ADL researchers reported 93% of hateful and extremist content on Instagram goes unremoved, calling it a 'systemic failure' after Meta rolled back its fact-check program and hate speech automation over a year ago
The Verge — The aggregator crackdown affects meme accounts, fan accounts and any account whose primary activity is resharing. Meme culture and commentary accounts are explicitly named as facing uncertainty under the new rules.
Engadget — Technical detail: low-effort edits like watermarks or speed changes don't count as 'original.' The rule is algorithm-based, not human-moderated, meaning it will be gamed and will produce false positives.
