Industry

The state of age verification in social media: an overview

Regulators are placing increased scrutiny on social media sites to protect minors and verify ages. Learn how to implement an age verification system.

Last updated:
12/4/2024
⚡ Key takeaways
  • There’s a growing consensus worldwide about the risks social media poses to the well-being of children and teens.
  • Public opinion is encouraging lawmakers to propose stricter requirements for social media companies to protect minors online.
  • Currently, no standard legal requirements for age verification exist for social media platforms as there are for other age-restricted industries, such as alcohol and tobacco. However, regulators are placing increased scrutiny on these sites to ensure due diligence.  
  • Social media platforms are increasingly implementing more robust and secure age verification systems, moving beyond basic self-attestation to actual age verification.

In today's hyper-connected world, social media platforms serve as a cornerstone of communication, commerce, and community-building for individuals of all ages. 

In particular, young people are highly engaged on social platforms and often at the forefront of new and emerging sites. For example, a 2022 Pew Research Center survey shows that TikTok — one of the newest social media platforms — is the most popular social media app among teen users between the ages of 13 and 17. The survey shows that 67% of teens use TikTok while only 32% use Facebook (down from 71% in a 2014-2015 survey). 

Amid the benefits of virtual connectivity and new ways to explore online lies a profound concern that social media poses distinct risks to the well-being of children and teens. The U.S. Surgeon General recently advised that social media use among kids and teens should be restricted to limit the mental health risks attached to it. Meanwhile, a recent global survey from Amnesty International collected responses from young people between the ages of 13 and 24 and found that more than half of respondents reported “bad experiences” on social media, including racism, violence, bullying, forms of political persecution, or unwanted sexual advances by other users. 

As society grapples with the challenges of ensuring and enhancing online safety for minors, age verification emerges as a crucial tool in safeguarding vulnerable users. Let's explore the current landscape, regulatory considerations, and technology solutions in this evolving arena.

Protecting children and teens on social media in 2024 

In response to growing concerns about young people and social media use, legislators are proposing new bills that would place tighter restrictions on social media sites. Provisions include restricting children and young teens from signing up for social media accounts without parental consent and requiring adherence to age-appropriate design codes that would protect children from harmful experiences and content. 

Meanwhile, social media sites are bolstering their age verification systems, reconfiguring user experiences for minors, increasing content moderation efforts, and adjusting their algorithms to create safer online environments for young people.

Enhancing online safety for minors 

Numerous online safety bills are pending before Congress that aim to protect young people from sensitive and harmful content such as suicide and self-harm videos, violent content, and overly sexual or pornographic comments, images, and videos. 

In response to increasing public scrutiny, social media platforms are taking measures to protect minors from harm. Below are a few examples of sites adjusting their content for young users.

  • Instagram’s “Sensitive Content Control” enables people to choose how much or how little sensitive content they see from accounts they don’t follow. All teens under 16 default to the “less” state unless they explicitly opt out. Instagram also hides certain sensitive content (self-harm, adult nudity, and eating disorder-related posts) from users under the age of 16, even if it’s been shared by someone they know.  
  • For teens between the ages of 13 and 15, TikTok only shows content created by other teens under 16 in the “For You” feed and defaults teen accounts (under the age of 16) to “private.” 
  • Meta made changes to its algorithm to ensure teens and children on Facebook and Instagram are not shown content that contains harmful images, videos, and text such as drug-related and self-harm content. 

Ensuring age-appropriate experiences 

In addition to protecting children from explicit harm, the public and lawmakers are increasingly focused on pushing social media sites to design and deliver age-appropriate experiences for children. Notably, in late 2023, more than 30 states filed a federal lawsuit against Meta alleging that its platforms are designed to be addictive to young users. 

In response, many social platforms have created separate platforms or experiences for younger users:

  • YouTube created YouTube Kids, a separate kid-friendly version of its platform specifically designed for young users. YouTube also disables its auto-play feature (which some consider addictive) for children under a certain age.
  • TikTok sets daily screen time limits to 60 minutes, restricts push notifications at night, and prohibits “live content” among teen users.

The current social media regulatory landscape

Multiple laws limit the use and retention of children’s data in the U.S. and EU. While these laws don’t specify how sites must verify users’ ages, regulators are closely examining social media platforms (and other online spaces) to ensure an auditable system is in place that demonstrates some level of due diligence. Companies are expected to have policies and procedures to identify underage users on their platforms, and in some cases, consumers can submit complaints that will be reviewed by regulators. For example, under COPPA, users can submit complaints to the FTC.

Meanwhile, in response to growing public pressure and emerging research about the risks attached to social media use among children and teens, lawmakers worldwide are proposing stricter measures to govern social media sites. Many of these proposals would also require age verification technology as a condition of compliance. 

U.S. regulations 

In the U.S., the Children’s Online Privacy Protection Act (COPPA) is the prevailing federal law protecting children under 13 from online harm. The rule requires operators of commercial websites and online services to obtain parental consent before collecting personal information from children under 13. The law also requires websites and online service providers to maintain the confidentiality, security, and integrity of the information they collect from children and retain it only as long as necessary. 

There is no federal law requiring social media sites to set an age minimum for account access; however, a bipartisan bill, the Protecting Kids on Social Media Act, is in motion. The bill would require parental consent for minors and set a minimum age of 13 to use social media sites. It would also require social media companies to “take reasonable steps beyond merely requiring attestation” to verify users’ ages. 

In addition to mandating age thresholds for social media sites, there’s also bipartisan support for the Kids Online Safety Act (KOSA), a federal bill that would require online service providers like social media sites to protect minors from online harm such as sexual exploitation, bullying, and harassment. Online services would also need to protect minors from viewing content such as self-harm videos, predatory marketing, and eating disorder-related videos, images, and comments. The bill also proposes requirements for online services to default users under 18 to the highest privacy and safety settings. 

State-specific regulations

In California, lawmakers have enacted stricter rules that go beyond COPPA to further protect the privacy of children and teens online, including: 

  • California Consumer Privacy Act (CCPA). Under CCPA, businesses cannot sell the personal information of consumers under the age of 16 without consent. For children under the age of 13, consent must be provided by a parent or guardian. 
  • California Privacy Rights Act (CPRA). The CPRA extends the limitations of the CCPA to not only the selling but also the sharing of personal data. 
  • California Age-Appropriate Design Code Act (CA AADC). This law goes into effect in July 2024 and requires any website used by children under 18 to implement age-appropriate design code principles, such as blocking adults from sending private messages to minors. The law also prohibits businesses from collecting, selling, sharing, or retaining any personal information of children under 18 that is not necessary to provide the online service. 

In the absence of national legislation requiring age thresholds for social media use, several states are working to pass their own laws to set age minimums and age verification requirements for social media sites; however, none of these laws are yet in effect. For example, Utah, Ohio, Louisiana, Florida, and Arkansas have all passed bills requiring social media sites to limit children and teens under a certain age from signing up for an account without parental consent, and several of these laws require the use of age verification technology. 

Many other states are introducing bills requiring social media companies to create safer online environments for children and teens. For example, Texas passed the Securing Children Online through Parental Empowerment (SCOPE) Act, which requires that social media sites allow parents to alter their children’s settings and limit their screen time. The law is scheduled to go into effect in September 2024. 

EU regulations

In the EU, the General Data Protection Regulation (GDPR), which came into effect in 2018, prohibits the online processing of children’s data without parental consent and requires age verification to confirm users’ ages. Likewise, the Audiovisual Media Services Directive (AVMSD) requires all services with audiovisual content to adopt appropriate measures to protect children from harmful content and includes a requirement for age verification. While both laws require age verification, no standard exists for how websites and social platforms must verify users’ ages. 

There is no EU law restricting teens and children under a certain age from creating social media profiles; however, certain European countries have introduced their own laws that place age thresholds on social media sites. For example, in 2023, France approved a law that requires social media platforms to verify users’ ages and obtain parental consent for those under the age of 15. 

Multiple laws are in place (or in progress) requiring social media companies and other online service providers to establish safer online environments for children and teens. For example, the Online Safety Act in the UK requires social media companies, messaging apps, search engines, and other digital platforms to implement a number of measures to protect children from harmful content. Under the law, companies must take measures to prevent children from accessing content that is considered harmful, such as self-harm and suicide videos or content tied to bullying and extreme violence. The law also requires companies to implement age assurance methods to enforce age limits. 

Similarly, the EU’s Digital Services Act (DSA) requires online platforms, hosting providers, and other intermediaries to create safer digital spaces. The DSA includes provisions intended to protect users from online harassment and cyberbullying.

Age verification technologies: from basic to advanced 

Social media sites currently use various technologies and methods to verify user ages. Some of these methods are simple while others are more robust. 

Self-attestation

Self-attestation is not considered a form of age verification, but it is often used as a starting point for social media sites to check users’ ages. The user simply states their age but does not provide any evidence to support their attestation. Typically, self-attestation takes place during account onboarding when the user enters their personal information to create a profile. Because self-attestation relies on the honor system, it’s easy for underage users to slip through and create an account undetected. 
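As a rough illustration, a self-attestation gate reduces to an age calculation on a user-supplied date of birth. The minimum age and function names below are hypothetical:

```python
from datetime import date

MIN_AGE = 13  # hypothetical platform minimum

def age_from_dob(dob: date, today: date) -> int:
    """Whole years between a self-reported date of birth and today."""
    return today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))

def passes_self_attestation(dob: date, today: date) -> bool:
    """Gate on the self-reported DOB alone. Because nothing here is
    verified, this can only ever serve as a first filter."""
    return age_from_dob(dob, today) >= MIN_AGE
```

Because the date of birth is unverified, a real system would treat this result only as a starting point for the stronger methods below.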

Credit card check 

A user with access to a credit card can submit their credit card information to verify their age. This information is then checked against a database such as a credit bureau. This option is often only viable for those over the age of 18 who can open their own credit card account. 

Government ID document check

A user with access to a government-issued identity document such as a passport, driver’s license, or ID card can submit a photo or scanned copy of their ID. The system confirms that the ID is authentic by checking for certain features of a government ID such as watermarks, holograms, and stamps. The information from the image of the ID is extracted and compared to the information the user provided.  In some cases, this information is cross-referenced against government databases or other authoritative sources to verify the user’s age. 
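A sketch of the comparison step, assuming an upstream document-processing component has already extracted fields from the ID image (the record shapes and names here are illustrative):

```python
from datetime import date

def id_fields_match(extracted: dict, provided: dict) -> bool:
    """Compare fields extracted from the ID image against what the user
    entered. Normalizes names so trivial formatting differences don't fail."""
    return (
        extracted["name"].strip().lower() == provided["name"].strip().lower()
        and extracted["dob"] == provided["dob"]
    )

def meets_age_threshold(dob: date, threshold: int, today: date) -> bool:
    """Check whether the extracted date of birth clears the required age."""
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= threshold
```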

Selfie verification

Some systems integrate another layer of verification to ensure that the government ID matches the person submitting the information. This typically involves the user uploading a selfie or a selfie video that is compared to the provided ID.  Depending on the capabilities of the age verification system, the user-submitted selfie or video is also analyzed for liveness detection, whereby an advanced algorithm analyzes a variety of data in real time to determine if the subject is a real, living person or a spoof. 

Selfie age estimation 

A user uploads a photo or selfie that AI models analyze to estimate the user’s age. This option is typically used in combination with other age verification methods. 

AI adult classifier 

An AI model is trained to spot signals that a user may be under a certain age. The AI combs through comments on posts to spot inconsistencies between the age a user claimed and content or comments on their profile that might suggest otherwise. For example, if someone writes “Happy 12th birthday!” on someone else’s post, the profile would be flagged as a potential underage user.
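In production this signal would come from a trained model, but the birthday example above can be sketched as a simple rule-based check (the pattern and threshold are illustrative):

```python
import re

# Matches greetings like "Happy 12th birthday!" and captures the age.
BIRTHDAY_RE = re.compile(r"happy\s+(\d{1,3})(?:st|nd|rd|th)\s+birthday", re.IGNORECASE)

def underage_birthday_signal(comments: list[str], min_age: int = 13) -> bool:
    """Flag a profile if a birthday comment on it implies an age below the minimum."""
    for comment in comments:
        match = BIRTHDAY_RE.search(comment)
        if match and int(match.group(1)) < min_age:
            return True
    return False
```

A flag like this would typically trigger a review or a request for stronger verification rather than an automatic ban.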

Implementing age verification: key considerations

While no legal standard currently exists for how social media sites must verify users’ ages, it’s critical that businesses adopt defensible age verification strategies to prove due diligence. Below are a few key questions social media sites should consider when building a defensible age verification strategy.

  1. Does your system account for the potential for age fabrication? Self-attestation is not a defensible process because users can easily lie about their age when creating an account or profile. Therefore, your system must take steps to actually verify a user’s age beyond their own attestation.
  2. Does your system unfairly limit users from accessing your site? Some users may not have access to credit cards or government-issued IDs. It’s therefore important that your age verification process is flexible and offers multiple options for users to submit proof of their age. 
  3. Does your age verification system protect user data? Regulations, including the GDPR and COPPA, limit how data is collected and stored. It’s essential that your age verification system is in compliance with these laws. 

While a robust age verification system is essential for defensibility, balancing robustness with user experience is important. If your age verification system is cumbersome or difficult to navigate, the user could drop off during onboarding. Focus on finding configurable systems that enable you to customize workflows and meet audience expectations for an easy-to-understand process free of burdensome requests. Systems that allow you to guide users through the onboarding flow will also help reduce confusion and potential user drop-off. 

In addition to creating an optimal user flow, a configurable system also enables businesses to adapt quickly to changing regulatory requirements. Instead of building a new user flow from the ground up, companies can leverage modular tools to adjust user flows and comply with new requirements. 

Addressing privacy concerns attached to age verification

Some civil liberties groups are raising concerns regarding data privacy and age verification across the internet — not just for social media platforms. These groups believe data collected during the age verification process could be stolen by bad actors and that collecting such data at scale puts people’s privacy at risk. Social media sites can mitigate these risks and maintain compliance with existing laws like COPPA and GDPR by:

  1. Ensuring data minimization by only collecting necessary data. 
  2. Using data only for the intended purpose and disclosing that purpose to the user.
  3. Storing data only for a limited amount of time. 
  4. Limiting data access to only those whose job functions require access. 
  5. Implementing data encryption methods. 
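These practices shape how verification outcomes are stored. A minimal sketch, assuming only the pass/fail outcome needs to be retained (the field names and retention window are hypothetical):

```python
import hashlib
from dataclasses import dataclass

RETENTION_DAYS = 30  # hypothetical retention window

@dataclass
class VerificationRecord:
    """Data minimization: keep only the outcome, never the raw ID image,
    selfie, or date of birth."""
    user_ref: str        # salted hash of the user ID, not the ID itself
    over_threshold: bool
    verified_day: int    # day number when the check ran, used for expiry

def make_record(user_id: str, over_threshold: bool, salt: str, day: int) -> VerificationRecord:
    """Record the verification result without storing identifying details."""
    user_ref = hashlib.sha256((salt + user_id).encode()).hexdigest()
    return VerificationRecord(user_ref, over_threshold, day)

def is_expired(record: VerificationRecord, today: int) -> bool:
    """Retention limit: the record should be purged once the window passes."""
    return today - record.verified_day > RETENTION_DAYS
```

Hashing the user reference and dropping the underlying documents means a breach of this store exposes far less than a breach of raw ID images would.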

Ensuring a trustworthy social media environment

Even when it isn’t required by law, age verification establishes and strengthens trust and safety on social media platforms, ensuring users have the best experience possible. 

Protecting children and teens online from harmful and abusive content or experiences they are otherwise not ready for is an important tenet of fostering a trustworthy and safe social media environment. 

With a robust age verification process, social media sites can demonstrate their commitment to protecting young users from potential harm and live up to their declared ethics, values, and community guidelines, which aim to create a safe and inclusive space for all users. 

Need robust, reliable, and compliant age verification? Contact us to learn more or get started for free.

Published on:
5/24/2024

Frequently asked questions

How does age verification work?

Age verification works by requiring customers to prove their age before giving them access to products or services, such as alcohol or lottery tickets. It can look very different depending on whether a person’s age is being verified in person or electronically.

In-person age verification

If an age-restricted item or service is being purchased in person — for example, at a brick-and-mortar liquor store — the buyer will usually be required to present the seller with a government-issued photo ID, such as a driver’s license or passport, that lists their date of birth. 

The seller will then typically check the ID for signs of tampering and to ensure that any special design features (such as holographic foils and text) are present. They will also check the photo on the ID to make sure it matches the face of the person in front of them. 

Some IDs include security features, such as barcodes or NFC chips, which are specifically designed to help weed out fake or forged IDs. These can be scanned to help the seller determine whether or not an ID is legitimate. 

Electronic age verification

When a customer wants to purchase an age-restricted item or service electronically, such as through a website or mobile app, the seller must still verify their age. This is usually accomplished through government ID verification. 

Government ID verification usually works like this:

  • The user is prompted to take a photo of their driver’s license (or other accepted ID)
  • The ID is analyzed for authenticity
  • The user is prompted to take a selfie or series of selfies
  • The photo in the ID is compared against the selfie(s) to ensure that it was not stolen
  • Information is extracted from the ID and checked against the information provided by the user; the system then automatically determines whether or not the person is of legal age to complete the purchase 

In states where mobile driver’s licenses are issued and accepted, a customer can use these instead of taking a photo of their physical ID.
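The decision step at the end of that flow might look like the following, assuming upstream components have already produced an authenticity result and a face-match result (all names here are illustrative, not a real vendor API):

```python
from datetime import date

def electronic_age_decision(id_authentic: bool, selfie_matches_id: bool,
                            extracted_dob: date, user_dob: date,
                            legal_age: int, today: date) -> bool:
    """Combine the checks from the flow above into a single pass/fail."""
    if not id_authentic:
        return False  # the ID failed the authenticity analysis
    if not selfie_matches_id:
        return False  # the selfie did not match the photo on the ID
    if extracted_dob != user_dob:
        return False  # the ID disagrees with what the user entered
    age = today.year - extracted_dob.year - (
        (today.month, today.day) < (extracted_dob.month, extracted_dob.day))
    return age >= legal_age
```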

Can Social Security numbers be used to verify age?

One form of SSN verification works by collecting a user’s Social Security number, name, and date of birth, and then comparing them against official records on a pass/fail basis. With this in mind, it can potentially be used to verify a user’s age (by confirming whether or not the date of birth provided by the user is, in fact, truthful).

Of course, SSN verification cannot detect instances of identity theft, so it’s often paired with other verification methods.
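The pass/fail comparison described above can be sketched as a field-by-field match against an official record (the record shapes are hypothetical; real checks run against authoritative databases):

```python
def ssn_check_passes(submitted: dict, official: dict) -> bool:
    """Pass/fail: every submitted field must match the official record exactly."""
    return all(submitted.get(field) == official.get(field)
               for field in ("ssn", "name", "dob"))
```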

What are examples of age gating?

On websites, “age gates” typically take the form of check boxes or form fields that ask customers to either confirm that they’re at least a certain age — such as 18 or 21 — or enter their date of birth to prove their age.

Some examples of age gating include:

  • A beer company asking "Are you 21 years of age or older?" and the individual having to click "yes" to enter the site
  • An online betting platform asking the individual to enter their date of birth before they're allowed access
  • An adult entertainment site requiring individuals to select their age from a drop-down menu before proceeding
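A date-of-birth age gate like the betting example reduces to parsing the entered date and comparing ages. A sketch (real gates would also persist the result in a cookie or session):

```python
from datetime import date, datetime

def age_gate_allows(dob_entry: str, min_age: int, today: date) -> bool:
    """Return True if the entered date of birth (YYYY-MM-DD) clears the gate.
    Malformed input is treated as a failed gate."""
    try:
        dob = datetime.strptime(dob_entry, "%Y-%m-%d").date()
    except ValueError:
        return False
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    return age >= min_age
```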
