Should children and teens have access to social media?
This is, perhaps, one of society’s biggest current debates. Proponents argue that social media offers youth the opportunity to connect with friends and peers to collaborate, learn, and exchange ideas. Opponents, on the other hand, argue that social media has the potential to expose children to harmful content, hate speech, and abuse — never mind worries about screen addiction.
With an eye toward protecting the youth, governments around the world are increasingly seeking to regulate social media platforms by passing laws and regulations that aim to keep children and teens away from harmful content.
One recent example? Australia's Online Safety Amendment (Social Media Minimum Age) Bill, which was passed by both Houses of Parliament at the end of November 2024.
While details are still a little murky about what exactly this bill means for social media platforms, we wanted to outline the facts we do know so you’ll be in a better position to comply with the bill’s requirements.
What is the Australian social media ban?
Officially titled the Online Safety Amendment (Social Media Minimum Age) Bill 2024, the Australian social media ban refers to a bill that requires some social media platforms operating in Australia to take reasonable steps to keep Australians younger than 16 off their platforms. It is an amendment to the country’s Online Safety Act 2021.
When does the Australian social media ban go into effect?
The Australian social media ban was passed by both Houses of Parliament on November 29, 2024. The government is expected to begin enforcing the bill’s age restriction requirements by the end of 2025. This gives social media platforms operating in Australia roughly 12 months to design and implement a compliant age restriction strategy.
What platforms does it affect?
While the bill is being touted as a social media ban, the text of the bill specifically notes that its age restriction requirements do not apply to all social media platforms. Instead, it defines an age-restricted social media platform as any platform that:
- Has the sole purpose, or a significant purpose, of enabling online social interaction between two or more end-users
- Allows end-users to link to, or interact with, some or all other end-users
- Allows end-users to post material on the service
- Is an electronic service specified in the legislative rules
Although the bill does not specifically call out any platforms by name, many popular social media platforms, such as Facebook, Instagram, and Snapchat, could be considered age-restricted platforms. Ultimately, Australia’s communications minister will make the determination with input from the country’s eSafety Commissioner.
The bill specifically notes that online business interactions are not included. This could mean online marketplaces and other ecommerce businesses that allow users to post content (such as reviews, photos, etc.) are exempt from the bill’s requirements.
Likewise, the bill exempts gaming and messaging platforms, as well as services and apps “that are primarily for the purposes of education and health support — like Headspace, Kids Helpline, Google Classroom, and YouTube.”
What it requires
As noted above, the Australian government has not yet outlined which age assurance methods will be acceptable under the bill. That said, Michelle Rowland, Australia’s Minister for Communications, has stated that the government will spend the next 12 months working with industry and experts to “ensure the minimum age is effectively implemented, informed by the findings of the Age Assurance Technology Trial currently underway.”
Beyond that, the bill's requirements include:
Age assurance
The bill states that age-restricted social media platforms must take “reasonable steps” to ensure that Australians younger than 16 cannot create an account on their platforms. In addition to preventing new accounts from being created by Australians under 16, the bill also requires social media companies to identify and remove accounts created by Australians under 16 that already exist on their platforms.
Although the bill does not specify how platforms must comply with this requirement, it does make an important stipulation. If a platform intends to collect a user’s government-issued ID (including digital IDs) to determine their age, it must also provide alternative methods that are not dependent on the user having an ID.
The Australian government is working with the Age Check Certification Scheme (ACCS), an independent accredited conformity assessment body for age assurance technologies, to evaluate potential solutions. Guidance is expected to come from industry regulators in the next twelve months as to which technologies and methods are acceptable for age assurance. For example, age estimation, made possible through selfie verification, could be one such option available to platforms. Some form of database verification may also be allowed.
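Taken together, these requirements suggest a signup flow that routes users among multiple age assurance methods. The sketch below is purely illustrative, since the government has not yet specified which methods will be acceptable: the method names (`ID_DOCUMENT`, `SELFIE_ESTIMATION`, `DATABASE_CHECK`) and the helper functions are hypothetical, but the logic reflects the bill's stipulation that an ID-based check must be accompanied by at least one alternative.

```python
from enum import Enum

class AgeMethod(Enum):
    ID_DOCUMENT = "government_id"     # includes digital IDs
    SELFIE_ESTIMATION = "selfie"      # facial age estimation
    DATABASE_CHECK = "database"       # third-party record check

MINIMUM_AGE = 16

def offered_methods(user_has_id: bool) -> list[AgeMethod]:
    """If ID collection is offered, the bill requires at least one
    alternative that does not depend on the user holding an ID."""
    methods = [AgeMethod.SELFIE_ESTIMATION, AgeMethod.DATABASE_CHECK]
    if user_has_id:
        methods.insert(0, AgeMethod.ID_DOCUMENT)
    return methods

def may_create_account(estimated_age: int) -> bool:
    """'Reasonable steps': block account creation below the minimum age."""
    return estimated_age >= MINIMUM_AGE
```

A user without an ID would simply see the non-ID options, while an ID holder could choose any of the three.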
Data privacy
Under the bill, if a social media company collects user data for age assurance or age verification, there are limits on how the company can use that data. Beyond being used or disclosed to determine whether the user meets the age threshold established by the bill, the data can only be disclosed when an exception exists under paragraph 6.2 of the Australian Privacy Principles or when the user consents to the disclosure.
It’s important to note that in order for a user to consent to disclosure, the consent must be voluntary, informed, current, specific, and unambiguous. The user must also have the ability to withdraw consent at any time.
Platforms are required to destroy collected information once it has been used for the purposes for which it was collected. Failure to do so may result in a complaint under the Privacy Act 1988.
What are the penalties for non-compliance?
The bill specifies that any platform that fails to comply with the age-restriction requirements, or that improperly collects information during age verification, will be subject to a civil penalty of up to $49.5 million (Australian dollars).
However, children and teens who circumvent the restrictions will not face penalties or fines.
How Persona can help
There’s still a lot up in the air regarding which age assurance methods the Australian government will allow and recommend social media platforms to implement. However, one trend is becoming clear: if your platform operates in multiple jurisdictions, you’ll likely need different age assurance approaches to meet the unique requirements and restrictions of each geography.
With Persona’s flexible identity platform, it’s possible to tailor your age assurance strategy based on the locations you operate within:
- Tailor age assurance by jurisdiction: Leverage Dynamic Flow to ensure you’re collecting the right information and evidence — and performing the right types of verification — for each country and state you operate in, while minimizing friction and maximizing conversion.
- Avoid privacy headaches: Fine-tuned redaction and data retention controls empower you to protect user data and PII, while audit trails facilitate a record of access for compliance purposes.
- Detect and mitigate a variety of fraud: A broad swath of fraud prevention tools — from generative AI detection to passive signal collection to link analysis and more — make it possible to detect other types of fraud threatening your platform and user experience.
Ready to learn more about how Persona can help you get age assurance right, whether you’re operating in Australia, the US, UK, Germany, or other jurisdictions with age restriction requirements? Reach out to book a demo or try Persona today.