Meta just deleted roughly 550,000 accounts in Australia. Not because users violated community guidelines. Not because of spam or bots. Because they were too young to exist under the country’s new social media law, and Meta had no choice but to comply.

The Numbers Tell the Story
Meta began proactively enforcing the rules a week before the law took effect, removing approximately 330,000 under-16 users from Instagram, 173,000 from Facebook, and 39,000 from Threads. The company disclosed the figures in a blog post one month after the ban went into effect.
Australia became the first country in the world to introduce such strict age restrictions at the legislative level. Violations carry sanctions of up to 49.5 million Australian dollars, roughly $33 million USD. For Meta, with its billions in quarterly revenue, that’s not exactly existential. But the compliance burden is real, and the company is not staying quiet about its objections.
Meta’s Compliance, With Complaints
In its blog post, Meta said it wanted to provide an update on compliance while also sharing “some of the initial impacts we have seen as a result of the law that suggest it is not meeting its objectives of increasing the safety and well-being of young Australians.”
Translation: we did what you told us to do, but we think your law is stupid.
Meta insists that age verification should be carried out at the operating system level through the App Store or Google Play, not by individual applications. The company argues that the current law creates an inconsistent patchwork where different platforms verify ages differently, and logged-out users on some platforms may still be exposed to algorithmic content anyway.
“The current law provokes a mass migration of teenagers to alternative, less controlled platforms or the use of VPNs to circumvent the ban,” Meta representatives said. The company highlighted that many Australian teens are now using platforms like Yope and Lemon8, services that may have less robust safety infrastructure than Meta’s own products.
The Global Precedent Question
What happens in Australia rarely stays in Australia when it comes to tech regulation. The country has repeatedly served as a testing ground for policies that eventually spread to other democracies. Its news media bargaining code forced Google and Facebook to pay publishers for content. Its online safety laws created frameworks that other governments have studied closely.
Now legislators in the United States, United Kingdom, and across Europe are watching how the under-16 ban plays out. If Australia can demonstrate improved mental health outcomes for young people, reduced screen time, or lower rates of cyberbullying, expect copycat legislation to proliferate. If the law creates a whack-a-mole situation where kids simply migrate to less regulated apps, critics will have ammunition for years.
The Albanese government is expected to release nationwide data on underage account removals across all platforms in the coming days. That data will matter enormously for the political narrative around whether this experiment is working.
What the Law Actually Does
The Australian legislation aims to shield minors from algorithm-driven content and other online risks. The theory is straightforward: teenage brains are still developing, social media algorithms are designed to maximize engagement rather than wellbeing, and the combination has contributed to rising rates of anxiety, depression, and body image issues among young people.
Parents have been demanding action for years. Surveys consistently show majorities of Australian parents support restrictions on social media access for children. The government decided to give them what they asked for, even knowing tech companies would push back.
But here’s where it gets complicated. Meta’s criticism isn’t entirely self-serving. The company points out that isolating teenagers from mainstream platforms doesn’t necessarily mean they stop using the internet. It may just mean they use different parts of it, parts with fewer safety features, less moderation, and less accountability. And how young people find and interact with social media content keeps evolving regardless of platform restrictions.
The Enforcement Problem
Age verification on the internet has always been a joke, and everyone knows it. Type in a birthdate that makes you 18 and proceed. The Australian law attempts to change that calculus by putting the burden on platforms rather than users, but the fundamental problem remains: short of demanding government ID for every account creation, there’s no foolproof way to verify someone’s age online.
Meta removed accounts belonging to people it “understands to be under 16 years old.” That understanding comes from birth dates users provided when signing up, behavioral signals, and other indicators. It’s not perfect. Some legitimate adult accounts probably got swept up. Some determined teenagers almost certainly found ways to stay.
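Meta hasn’t published its methodology, but the kind of multi-signal check described above can be sketched in a few lines. The following is purely illustrative, not Meta’s actual code: the signal names, weights, and threshold are invented for the sake of the example.

```python
from datetime import date

# Hypothetical multi-signal age check: a self-reported birthdate plus
# behavioral hints. Signal names and weights are invented for illustration.
def likely_under_16(birthdate: date, signals: dict, today: date) -> bool:
    # Compute age in whole years, accounting for whether the
    # birthday has occurred yet this year.
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age < 16:
        return True  # the self-reported age alone is disqualifying
    # Behavioral signals can override a claimed adult birthdate.
    score = 0
    if signals.get("follows_mostly_teen_accounts"):
        score += 1
    if signals.get("flagged_by_age_reports"):
        score += 2
    return score >= 2

# A claimed 2012 birthdate fails outright; a claimed adult account
# can still be flagged if enough behavioral signals accumulate.
print(likely_under_16(date(2012, 5, 1), {}, date(2026, 1, 10)))  # True
```

The point of the sketch is the asymmetry it encodes: a birthdate can only rule someone out, never reliably rule them in, which is exactly why determined teenagers with fake birthdates slip through.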
The VPN workaround is real. Any tech-savvy teenager can download a VPN app, set their location to another country, and create a new account with a fake birthdate. Meta acknowledges this is happening, according to CNBC reporting.
What Comes Next
Meta wants a policy dialogue. The company is urging Australia to rethink its approach, arguing that app-by-app age verification is inefficient and lacks industry-wide consistency. Whether the government listens depends largely on what the early data shows.
Meanwhile, TikTok and other platforms are also removing underage Australian accounts, though they haven’t been as vocal about the numbers. The industry is united in its skepticism about the law but divided in how loudly to say so.
For parents, the ban offers a legal backstop they didn’t have before. For teenagers, it’s either a frustrating obstacle or a relief, depending on their relationship with social media. For tech companies, it’s a preview of a regulatory future that may spread far beyond Australia’s borders.
The great experiment in protecting kids by kicking them off social media has begun. The results will shape technology policy for a generation. And right now, roughly 550,000 former accounts across Instagram, Facebook, and Threads are sitting in digital limbo, waiting to see what happens next.
