Australia’s online watchdog has accused the world’s biggest social platforms of failing to properly enforce the country’s prohibition preventing under-16s from accessing their platforms, despite laws that took effect in December. The eSafety Commissioner, Julie Inman Grant, has expressed “significant concerns” about adherence by Facebook, Instagram, Snapchat, TikTok and YouTube, highlighting inadequate practices including permitting prohibited users to make repeated attempts at age verification and insufficient measures to stop new account creation. In its initial compliance assessment since the prohibition came into force, the regulator identified multiple shortcomings and has now moved from monitoring to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Non-compliance Issues Exposed in First Large-scale Review
Australia’s eSafety Commissioner has detailed a concerning pattern of non-compliance amongst the world’s largest social media platforms in her first formal review since the ban took effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have collectively failed to establish adequate safeguards to prevent minors from using their services. Julie Inman Grant raised significant concerns about structural gaps in age verification processes, highlighting that some platforms have allowed children who originally declared themselves under 16 to subsequently claim they were older, effectively circumventing the law’s intent.
The findings represent a significant escalation in the regulatory response, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has emphasised that simply showing some children still maintain accounts is inadequate; platforms must instead provide concrete evidence that they have established robust systems and processes designed to prevent under-16s from creating accounts in the first place. This shift signals the government’s commitment to holding tech giants responsible, with potential penalties looming for companies that do not meet the legal requirements.
- Allowing previously banned users to re-verify their age and restore account access
- Enabling repeated attempts at the same verification process without consequences
- Insufficient systems to prevent new under-16 accounts from being opened
- Inadequate complaint mechanisms for parents and the general public
- Lack of clear information about compliance actions and account removals
The Extent of the Problem
The considerable scale of social media activity amongst young Australians highlights the regulatory challenge facing both the government and the platforms in question. With millions of accounts already restricted or removed since the implementation of the ban, the figures provide evidence of widespread initial non-compliance. The eSafety Commissioner’s conclusions indicate that the operational and technical barriers to implementing age restrictions have proven far more complex than expected, with platforms struggling to differentiate authentic age confirmations from false claims. This complexity has left enforcement authorities wrestling with the fundamental question of whether existing age verification systems are adequate to the task.
Beyond the technical obstacles lies a broader concern about the willingness of platforms to prioritise compliance over user growth. Social media companies have consistently opposed strict identity verification requirements, citing privacy concerns and the genuine difficulty of confirming age online. However, the Commissioner’s report suggests that some platforms may not be demonstrating adequate commitment to implementing the systems required by law. The move to active enforcement represents a pivotal moment: either platforms will significantly enhance their compliance infrastructure, or they risk facing substantial fines that could transform their operations in Australia and potentially influence regulatory approaches internationally.
What the Data Shows
In the first month following the ban’s launch, Australian authorities stated that 4.7 million accounts had been suspended or deleted. Whilst this statistic initially appeared to demonstrate regulatory success, closer review reveals a more nuanced picture. The considerable quantity of account removals suggests that many under-16s had been able to set up accounts in the first place, revealing that protective safeguards were insufficient. Moreover, the data raises questions about whether suspended accounts reflect genuine enforcement or simply users voluntarily deleting their accounts in response to the new restrictions.
The limited transparency surrounding these figures has disappointed independent observers seeking to assess the ban’s genuine effectiveness. Platforms have provided minimal information about their implementation approaches, performance indicators, or the characteristics of removed accounts. This lack of clarity makes it hard for regulators and the public to determine whether the ban is operating as planned or whether young people are simply finding other methods to use social media. The Commissioner’s push for detailed evidence of systematic compliance measures reflects mounting dissatisfaction with platforms’ unwillingness to share comprehensive data.
Sector Reaction and Pushback
The social media giants have responded to the regulatory enforcement measures with a mixture of assurances of compliance and scepticism about the practical feasibility of the ban. Meta, which operates Facebook and Instagram, emphasised its dedication to adhering to Australian law whilst at the same time contending that precise age verification remains a major challenge across the industry. The company has advocated for an alternative strategy, suggesting that robust age verification and parental approval mechanisms put in place at the app store level would be more efficient than platform-level enforcement. This stance reflects wider concerns across the industry that the current regulatory framework places an impractical burden on individual platforms.
Snap, the developer of Snapchat, has adopted a more assertive public position, announcing that it had locked 450,000 accounts since the ban took effect and asserting it continues to suspend additional accounts each day. However, industry observers question whether such figures demonstrate genuine compliance or merely reactive account management. The core conflict between platforms’ commercial structures—which historically relied on maximising user engagement and growth—and the statutory obligation to actively exclude an entire age group remains unresolved. Companies have consistently opposed stringent age verification, pointing to privacy concerns and technical limitations, establishing an impasse between regulators and platforms over who bears responsibility for execution.
- Meta contends age verification should occur at app store level rather than on individual platforms
- Snap says it has locked 450,000 user accounts following the ban’s implementation in December
- Industry groups point to privacy concerns and technical challenges as impediments to effective age verification
- Platforms contend they are making their best effort whilst questioning the ban’s overall effectiveness
More Extensive Inquiries About the Ban’s Efficacy
As Australia’s under-16 social media ban moves into its enforcement phase, key concerns persist about whether the law will accomplish its intended goals or merely push young users towards unregulated platforms. The regulatory authority’s first compliance report reveals that despite months of implementation, substantial gaps remain—children keep discovering ways to circumvent age verification systems, and platforms have struggled to prevent new underage accounts from being established. Critics argue that the ban’s effectiveness depends not merely on regulatory vigilance but on whether young people will truly leave mainstream platforms or simply shift towards alternative services, secure messaging apps, or VPNs designed to conceal their age and location.
The ban’s worldwide effects add another layer of complexity to assessments of its impact. Countries such as the United Kingdom, Canada, and several European nations are watching Australia’s initiative closely, considering similar legislation for their own citizens. If the ban does not successfully reduce children’s online activity or fails to protect them from dangerous online content, it could damage the case for similar measures elsewhere. Conversely, if implementation proves sufficiently strict to genuinely restrict underage access, it may embolden other nations to adopt comparable measures. The outcome will likely influence worldwide regulatory patterns for years to come, ensuring Australia’s implementation efforts are analysed far beyond its borders.
Who Benefits and Who Loses
Mental health campaigners and organisations focused on child safety have backed the ban as a necessary intervention against algorithmic manipulation and exposure to harmful content. Parents and educators contend that removing young Australians from platforms designed to maximise engagement could lower anxiety levels, enhance sleep quality, and decrease exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks linked to social media use amongst adolescents, adding weight to these concerns. However, the ban also eliminates legitimate uses of social media for young people—keeping friendships alive, accessing educational content, and participating in online communities around common interests. The regulatory framework assumes harm outweighs benefit, a calculation that some young people and their families question.
The ban’s concrete implications extend beyond individual users to impact content creators, small businesses, and community organisations dependent on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now confront legal barriers to participation. Small Australian businesses that rely on social media marketing can no longer reach younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously employed effectively. Meanwhile, the ban unintentionally favours large technology companies with resources to develop age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend far beyond the simple goal of child protection.
What Follows for Regulatory Action
Australia’s eSafety Commissioner has indicated a notable transition from passive monitoring to active enforcement, marking a critical turning point in the implementation of the under-16 ban. The authority will now gather evidence to determine whether companies have failed to take “reasonable steps” to prevent underage access, a legal standard that goes beyond simply recording that children remain on these systems. This approach demands tangible verification that organisations have implemented proper safeguards and protocols designed to exclude minors. The Commissioner’s office has signalled it will launch probes methodically, building cases that could lead to considerable sanctions for failure to comply. This move from observation to enforcement reflects growing frustration with the companies’ current approach and signals that voluntary engagement alone will no longer suffice.
The rollout phase highlights important questions about the appropriateness of fines and the concrete procedures for ensuring platform accountability. Australia’s statutory provisions supply enforcement instruments, but their efficacy depends on the eSafety Commissioner’s commitment to initiate official proceedings and the platforms’ ability to adapt meaningfully. Global regulators, particularly those in Britain and Europe, will carefully track Australia’s regulatory approach and outcomes. An effective regulatory push could establish a model for other countries evaluating equivalent prohibitions, whilst failure might undermine the entire regulatory framework. The coming months will be critical in determining whether Australia’s innovative statutory framework translates into real safeguards for young people or becomes largely performative in its impact.
