As people of all ages increasingly live their lives online, governments around the world are focused on establishing regulations to protect children and teens using the internet.
In some countries, like the U.S., a patchwork of state laws has emerged. In other countries, like the United Kingdom, national regulation has become the answer. Case in point: the UK’s recently passed Online Safety Act, which was written with the goal of protecting children and teens from harmful content online.
Below, we take a closer look at what the Online Safety Act is and the types of businesses it regulates. We also discuss the law’s key requirements and steps that businesses can take to become and remain compliant.
What is the Online Safety Act?
The Online Safety Act is a UK law that requires social media companies, messaging apps, search engines, and other digital platforms to implement a number of measures designed to “keep the internet safe for children” and adults.
The law, which has been described as “sprawling,” consists of more than 200 clauses outlining various types of illegal or harmful content, what is expected of regulated companies, the consequences of noncompliance, and more. It impacts more than 100,000 online companies and platforms used by individuals in the UK.
The law requires regulated companies to:
- Scan for illegal content and remove it from their platform. This includes content related to terrorism, hate speech, self-harm, child sexual abuse, and revenge pornography.
- Prevent children from accessing content that is legal but considered harmful. This includes content that glorifies or encourages eating disorders, as well as content that provides instructions for self-harm and suicide. Content tied to bullying or extreme violence also falls under this umbrella.
- Prevent children from accessing age-restricted content, such as pornography.
- Implement age-assurance measures and enforce age limits.
- Conduct and publish assessments about the risks posed to children by their platforms.
- Provide parents and children with a mechanism to report problems when they are encountered.
The law also makes it easier for adults to control the type of content and users that they see or interact with online. It does this by requiring regulated companies to:
- Enforce the promises they make to users in their terms and conditions agreement
- Allow users to filter out potentially harmful content they don’t wish to see, such as content involving bullying, violence, or self-harm
- Allow verified users to interact only with other verified users if they wish to do so
Which businesses does the Online Safety Act affect?
The Online Safety Act applies to online companies offering two types of services: user-to-user services and search services.
User-to-user services: If a platform allows for user-generated content that can be shared with or discovered by another user on the platform, it falls under the scope of the law. Examples of user-to-user services include social media companies, online dating services, forums, image/message boards, video-sharing services, online and mobile gaming providers, pornography sites, and some messaging apps.
Search services: If an online business is a search engine, or includes search functionality, it is considered a search service under the law. However, the definitions for what counts as a search engine subject to the law are complex. According to the text of the Act, any search engine that “includes a service or functionality which enables a person to search some websites or databases (as well as a service or functionality which enables a person to search (in principle) all websites or databases)” is subject to the law. But search engines that “enable a person to search just one website or database” are not subject to the law.
Importantly, the Online Safety Act does not just apply to businesses based in the UK. Any online business accessible to UK users is subject to the law.
When will guidelines for the Online Safety Act be available?
The Online Safety Act officially became law on October 26, 2023, after it received Royal Assent.
Ofcom, the UK’s communications regulator responsible for enforcing the law and providing clarity on its provisions, has split the guidelines and enforcement process into three phases:
- Phase 1: Ofcom published draft guidelines, or “consultations,” on compliance with the law’s requirements around illegal content and related harms on November 9, 2023. Public responses were accepted until February 2024. Ofcom is spending the remainder of 2024 reviewing comments and plans to publish final decisions by the end of 2024, subject to final approval of the legal codes by Parliament. Impacted companies will be expected to create illegal harm risk assessments in response.
- Phase 2: Ofcom published two sets of draft guidelines on protecting children. The first set, for sites that host pornographic content, was published in December 2023 and includes guidance on age verification. Public responses were accepted until March 2024, and final guidelines are expected in early 2025. These will be followed by draft guidelines, with their own response and review period, on specific protections for women and girls. The second set covers general guidelines and codes to protect children. It was published on May 9, 2024, with public responses due by July 17, 2024. Final guidelines are expected in the third quarter of 2025, subject to approval of the codes by Parliament.
- Phase 3: Ofcom is reviewing additional duties for specific categories of services, such as how regulated companies must deploy user empowerment measures and release transparency reports. The process kicked off in March 2024 with a call for evidence from industry and expert groups, a window that closed in May 2024. Draft guidelines and codes will be published in 2025. Additionally, Ofcom published research and advice for the government on the categorization of services in March 2024, subject to approval.
User verification under the Online Safety Act
In order to comply with the Online Safety Act’s various requirements, businesses must implement processes for verifying a user’s age and identity.
Age verification
In the UK, data privacy laws require that users be at least 13 years old to join a social media platform without parental permission. The Online Safety Act builds on these protections.
To ensure that children are not accessing content on their platforms that the law defines as inappropriate, online businesses must implement a process for estimating or verifying each user’s age.
Platforms that enable users to access harmful content have a slightly different mandate: they need only verify that users are at least 18 years old. This requirement may be tricky for social media platforms and forums that are not exclusively used for pornography but host such content alongside all-age content (such as X or Reddit).
The Online Safety Act, as written, does not specify which age assurance methods are acceptable or what they might look like. Ofcom is responsible for developing these guidelines. The draft guidance published in May 2024 defines “highly effective” age assurance processes as technically accurate, robust, reliable, and fair. Acceptable methods include photo-ID matching, facial age estimation, and reusable digital identity services. Methods such as self-declaration of age and general contractual age restrictions are considered ineffective.
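To make the age-assurance requirement concrete, below is a minimal TypeScript sketch of an age gate that only trusts signals from the methods Ofcom’s draft guidance describes as capable of being highly effective. The method labels mirror the draft guidance, but the type names, result shape, and function are illustrative assumptions, not part of the Act or Ofcom’s guidance.

```typescript
// Hypothetical age-assurance gate; the types and threshold logic below are
// illustrative assumptions, not requirements taken from the Act itself.
type AgeAssuranceMethod =
  | "photo_id_matching"
  | "facial_age_estimation"
  | "reusable_digital_id"
  | "self_declaration"; // considered ineffective in Ofcom's draft guidance

interface AgeCheckResult {
  method: AgeAssuranceMethod;
  estimatedAge: number; // age in years reported by the chosen method
}

// Methods the May 2024 draft guidance treats as capable of being "highly effective"
const HIGHLY_EFFECTIVE_METHODS: AgeAssuranceMethod[] = [
  "photo_id_matching",
  "facial_age_estimation",
  "reusable_digital_id",
];

// Allow access to age-restricted (18+) content only when the age signal
// comes from a highly effective method and indicates an adult user.
function mayAccessAdultContent(result: AgeCheckResult): boolean {
  if (!HIGHLY_EFFECTIVE_METHODS.includes(result.method)) {
    return false; // e.g., a self-declared age is never sufficient
  }
  return result.estimatedAge >= 18;
}

// Example: a self-declared age is rejected even if it claims the user is an adult.
console.log(mayAccessAdultContent({ method: "self_declaration", estimatedAge: 25 })); // false
console.log(mayAccessAdultContent({ method: "facial_age_estimation", estimatedAge: 21 })); // true
```

In practice, a platform would pair a gate like this with a fallback flow that routes users who fail or decline a check to a restricted, all-ages experience rather than blocking them outright.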
Adult user verification
Any business considered to be a Category 1 service under the law (the largest user-to-user services) must offer adult users the option of verifying their identity. Verified users must then be given the option to filter out any non-verified users if they wish to do so.
While the law does not specify what this filtering process should look like, it does specify that the process should have the effect of the following (a sketch of one possible approach appears after this list):
- Preventing non-verified users from interacting with content shared, uploaded, or generated by verified users who have filtered out non-verified users
- Reducing the likelihood that the verified user will encounter content shared, uploaded, or generated by non-verified users
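Below is one possible way to express those two effects in code, assuming a hypothetical data model in which each user record carries an isVerified flag and an opt-in filterNonVerified setting. The Act does not prescribe any particular implementation; this is a sketch under those assumptions.

```typescript
// Hypothetical data model; field names are illustrative assumptions.
interface User {
  id: string;
  isVerified: boolean;        // identity verified under the Category 1 duty
  filterNonVerified: boolean; // verified user's opt-in filter setting
}

interface Post {
  id: string;
  author: User;
}

// Second effect: reduce the likelihood that an opted-in verified user encounters
// content shared, uploaded, or generated by non-verified users.
function filterFeed(viewer: User, feed: Post[]): Post[] {
  if (viewer.isVerified && viewer.filterNonVerified) {
    return feed.filter((post) => post.author.isVerified);
  }
  return feed;
}

// First effect: prevent non-verified users from interacting with content from
// verified users who have filtered out non-verified users.
function mayInteract(actor: User, target: Post): boolean {
  const authorFiltersNonVerified =
    target.author.isVerified && target.author.filterNonVerified;
  return !(authorFiltersNonVerified && !actor.isVerified);
}
```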
As with the age verification requirements discussed above, Ofcom is preparing guidance for acceptable or recommended forms of identity verification.
Preparing for the law
Though Ofcom has not yet finalized guidance as to which processes will be acceptable or recommended for age and adult user verification, online platforms should begin planning for compliance and evaluating potential solutions. Options may include government ID verification, selfie verification, database verification, and other methods.
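One way to stay flexible while the guidance evolves is to treat each verification method as a pluggable provider behind a common interface, so methods can be added, removed, or reordered without reworking the rest of the platform. The TypeScript sketch below assumes a hypothetical VerificationProvider interface; the provider names and result shape are illustrative only and do not refer to any specific product or API.

```typescript
// Hypothetical pluggable verification interface; names and shapes are assumptions.
interface VerificationResult {
  verified: boolean;
  estimatedAge?: number;
}

interface VerificationProvider {
  name: "government_id" | "selfie" | "database";
  verify(userId: string): Promise<VerificationResult>;
}

// Try each configured provider in order until one succeeds, so individual
// methods can be swapped as Ofcom's guidance is finalized.
async function verifyWithFallback(
  userId: string,
  providers: VerificationProvider[]
): Promise<{ verified: boolean; method?: VerificationProvider["name"] }> {
  for (const provider of providers) {
    const result = await provider.verify(userId);
    if (result.verified) {
      return { verified: true, method: provider.name };
    }
  }
  return { verified: false };
}
```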
Want to learn more about how Persona can help you become and remain compliant with the Online Safety Act and other emerging laws? Get a custom demo today.