Australia is considering banning children from social media to address mental health concerns, but questions remain about whether the measure can be enforced and whether it will work.
“I was genuinely terrified,” James admits, recounting an incident on Snapchat that made him doubt the safety of going to school.
The 12-year-old Australian boy had a disagreement with a friend who, one night before bed, added him to a group chat that included two older teenagers.
Almost instantly, his phone began receiving a barrage of violent messages.
“One of them sounded like he was probably 17,” James shares with the BBC. “He sent me videos showing himself brandishing a machete… waving it around menacingly. Then there were voice messages threatening to find and stab me.”
James, not his real name, first signed up for Snapchat at the age of 10, after a classmate suggested that all their friends should get the app. However, after talking to his parents about the cyberbullying, which was eventually resolved through his school's intervention, James decided to delete his account.
According to his mother Emma, who is also using a pseudonym, her son’s experience serves as a cautionary tale that highlights the necessity of the Australian government’s proposed social media ban for children under 16.
The legislation, introduced in the lower house of parliament on Thursday, has been described by Prime Minister Anthony Albanese as “world-leading.”
While many parents have praised the decision, some experts question whether it is feasible or appropriate to bar children from social media, and what unintended harms such restrictions might cause.
What is Australia proposing?
Albanese says the ban, which would apply to platforms including X, TikTok, Facebook, and Instagram, is intended to protect children from the “harms” of social media.
“This is an international issue, and our aim is for young Australians to truly experience a childhood. We want parents to feel at ease,” he said on Thursday.
The new legislation establishes a “framework” for the ban, but the 17-page document is light on detail and is expected to reach the Senate next week.
Instead, the nation’s internet regulator, the eSafety Commissioner, will be responsible for developing and enforcing the rules, which will not take effect until at least 12 months after the legislation is passed.
The bill states that the ban will affect all children under 16, with no exceptions for existing users or those who have parental consent.
Tech companies may incur penalties of up to A$50 million (approximately $32.5 million or £25.7 million) if they fail to comply; however, exceptions will be made for platforms that can develop “low-risk services” considered appropriate for children. The criteria defining this threshold have not yet been established.
Messaging services and gaming sites, however, will not face restrictions, and platforms that can be accessed without an account, such as YouTube, are also exempt. This has raised questions about how regulators will decide what counts as a social media platform in a rapidly evolving digital landscape.
A group advocating for the interests of tech companies like Meta, Snapchat, and X in Australia has dismissed the ban as “a 20th-century response to 21st-century challenges.”
Digital Industry Group Inc warns that such legislation might drive children towards “dangerous, unregulated parts of the internet,” a concern echoed by several experts.