As people of all ages increasingly live their lives online, governments around the world are focused on establishing regulations to protect children and teens using the internet.
In some countries, like the U.S., a patchwork of state laws has emerged. In other countries, like the United Kingdom, national regulation has become the goal. Case in point: the UK’s recently passed Online Safety Act, which was written to protect children and teens from harmful content online.
Below, we take a closer look at what the Online Safety Act is, when it goes into effect, and the types of businesses it regulates. We also discuss the law’s key requirements and steps that businesses can take to become and remain compliant.
What is the Online Safety Act?
The Online Safety Act is a UK law that requires social media companies, messaging apps, search engines, and other digital platforms to implement a number of measures designed to “keep the internet safe for children” and adults.
The law, which has been described as “sprawling,” consists of more than 200 clauses outlining various types of illegal or harmful content, what is expected of regulated companies, the consequences of noncompliance, and more.
The law requires regulated companies to:
- Scan for illegal content and remove it from their platforms. This includes content related to terrorism, hate speech, self-harm, child sexual abuse, and revenge pornography.
- Prevent children from accessing content that is legal but considered harmful. This includes content that glorifies or encourages eating disorders, as well as content that provides instructions for self-harm and suicide. Content tied to bullying or extreme violence also falls under this umbrella.
- Prevent children from accessing age-restricted content, such as pornography.
- Implement age-assurance measures and enforce age limits.
- Conduct and publish assessments about the risks posed to children by their platforms.
- Provide parents and children with a mechanism to report problems when they are encountered.
The law also makes it easier for adults to control the type of content and users that they see or interact with online. It does this by requiring regulated companies to:
- Enforce the promises they make to users in their terms and conditions agreement
- Allow users to filter out potentially harmful content they don’t wish to see, such as content involving bullying, violence, or self-harm
- Allow verified users to interact only with other verified users if they wish to do so
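The content-filtering duty above can be sketched in a few lines of code. This is a hypothetical illustration, not a prescribed implementation: the Act names categories like bullying, violence, and self-harm but does not define a data model, so the category names and post structure below are assumptions.

```python
# Categories the article names as filterable under the user empowerment duties.
FILTERABLE_CATEGORIES = {"bullying", "violence", "self-harm"}

def apply_content_filters(posts: list[dict], blocked: set[str]) -> list[dict]:
    """Hide posts tagged with any category the user has opted to filter out."""
    blocked = blocked & FILTERABLE_CATEGORIES  # ignore categories outside scope
    return [p for p in posts if not (p["categories"] & blocked)]
```

For example, a user who opts out of violent content would see `apply_content_filters(feed, {"violence"})`, with posts tagged `{"violence"}` removed and everything else left intact.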
Which businesses does the Online Safety Act affect?
The Online Safety Act applies to online companies offering two types of services: user-to-user services and search services.
User-to-user services: If a platform allows for user-generated content that can be shared with or discovered by another user on the platform, it falls under the scope of the law. Examples of user-to-user services include social media companies, online dating services, forums, image/message boards, video-sharing services, online and mobile gaming providers, pornography sites, and some messaging apps.
Search services: If an online business is a search engine, or includes search functionality, it is considered a search service under the law. However, the definitions for what counts as a search engine subject to the law are complex. According to the text of the Act, any search engine that “includes a service or functionality which enables a person to search some websites or databases (as well as a service or functionality which enables a person to search (in principle) all websites or databases)” is subject to the law. But search engines that “enable a person to search just one website or database” are not subject to the law.
Importantly, the Online Safety Act does not just apply to businesses based in the UK. Any online business which is accessible to UK users is subject to the law.
When does the Online Safety Act go into effect?
The Online Safety Act officially became law on October 26, 2023, after it received Royal Assent. According to a press release published by the UK government, the law’s requirements will be implemented in phases.
- Phase 1: Ofcom will publish draft guidelines on compliance with the law’s requirements around harmful content on November 9, 2023 and plans to publish a statement on their final decisions in fall 2024, subject to final approval by the government.
- Phase 2: Ofcom will publish draft guidance for sites that host pornographic content, including guidance on age verification, in December 2023. Additional draft codes of practice related to the protection of children will be released in the spring of 2024.
- Phase 3: Ofcom will publish guidelines around additional duties for specific categories of services, such as how regulated companies must deploy user empowerment measures and release transparency reports, in spring 2024.
User verification under the Online Safety Act
In order to comply with the Online Safety Act’s various requirements, businesses must implement processes for verifying a user’s age and identity.
In the UK, data privacy laws require users to be at least 13 years old to join a social media platform without parental permission. The Online Safety Act builds on these protections: to ensure that children are not accessing inappropriate content on their platforms, as defined by the law, online businesses must implement a process for estimating or verifying each user’s age. Currently, the law does not specify which estimation methods are acceptable or what they might look like.
The same is true for platforms where users can encounter harmful or age-restricted content, except that companies must verify users are at least 18 years old rather than 13. This requirement may be tricky for social media platforms and forums, such as X or Reddit, that are not dedicated to pornography but host pornographic content alongside other material.
Ofcom has not yet provided specific guidance on what age verification or estimation processes should look like, but the office is expected to publish recommendations in the coming months.
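The two age gates described above can be sketched as follows. This is a minimal illustration under stated assumptions: the class and function names are hypothetical, and the Act does not specify how an age estimate is actually produced, only the thresholds it must enforce.

```python
from dataclasses import dataclass

# Thresholds drawn from the article: 13+ to join social media without
# parental permission, 18+ for age-restricted content.
MIN_AGE_UNSUPERVISED = 13
MIN_AGE_RESTRICTED = 18

@dataclass
class AgeCheck:
    estimated_age: int           # output of whatever estimation method is used
    parental_consent: bool = False

def can_join(check: AgeCheck) -> bool:
    """Under-13s need parental permission to join a social media platform."""
    return check.estimated_age >= MIN_AGE_UNSUPERVISED or check.parental_consent

def can_view_restricted(check: AgeCheck) -> bool:
    """Age-restricted content, such as pornography, requires the user to be 18+."""
    return check.estimated_age >= MIN_AGE_RESTRICTED
```

Note that the two checks are independent: a 14-year-old can join without consent but still cannot view age-restricted content, which is why platforms hosting mixed content need both gates.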
Adult user verification
Any business considered to be a Category 1 service under the law (user-to-user services that meet certain size and functionality thresholds) must offer adult users the option of verifying their identity. Verified users must then be given the option to filter out any non-verified users if they wish to do so.
While the law does not specify what this filtering process should look like, it does require that filtering have the effect of:
- Preventing non-verified users from interacting with content shared, uploaded, or generated by verified users who have filtered out non-verified users
- Reducing the likelihood that the verified user will encounter content shared, uploaded, or generated by non-verified users
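The two effects listed above can be sketched as two checks on a user model. This is a hypothetical illustration: the field names are assumptions, and the law deliberately leaves the mechanism to each platform.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    verified: bool = False
    filters_non_verified: bool = False  # opt-in, meaningful for verified users

def can_interact(actor: User, author: User) -> bool:
    """Effect 1: a non-verified user cannot interact with content from a
    verified user who has filtered out non-verified users."""
    if author.verified and author.filters_non_verified and not actor.verified:
        return False
    return True

def visible_feed(viewer: User, authors: list["User"]) -> list["User"]:
    """Effect 2: a verified user who has opted in sees content only from
    other verified users."""
    if viewer.verified and viewer.filters_non_verified:
        return [a for a in authors if a.verified]
    return list(authors)
```

The point of modeling both directions is that the filter is not just about what the opted-in user sees; it also blocks inbound interaction from non-verified accounts with that user's content.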
As with the age verification requirements discussed above, Ofcom has not yet provided guidance for acceptable or recommended forms of identity verification.
Preparing for the law
Though Ofcom has not yet released guidance as to which processes will be acceptable or recommended for age and adult user verification, online platforms should begin planning for compliance and evaluating potential solutions. Options may include government ID verification, selfie verification, database verification, and other methods.
Want to learn more about how Persona can help you become and remain compliant with the Online Safety Act and other emerging laws? Get a custom demo today.