The Path Forward: Minimizing Potential Ramifications of Online Age Verification

All children—and adults—should be able to access online spaces and interactions safely, securely, and in a rights-respecting manner. The conversation around advancing kids’ safety online is complex and requires thoughtful nuance to ensure strategies address core concerns. Age verification requirements can create a cascading impact on how all users access online content, as each age assurance and verification method comes with its own trade-offs for user rights, data privacy, and security.

Alternative approaches that optimize user choice, privacy, and control over their online experience may be more feasible and effective at improving children's and teens' safety online than age verification mandates. As state and federal legislators explore age verification, the Open Technology Institute offers five recommendations for navigating the potential ramifications of such mandates and for moving forward on addressing youth safety online.

1. Explore Alternative Solutions That May More Effectively Address Concerns Surrounding Youth Online Safety

Improving youth experiences online requires a holistic approach. Using a mix of alternative methods to improve youth—and general user—safety online may more effectively and directly address concerns about access to age-inappropriate materials and the negative impact of online spaces. Ultimately, age verification is no substitute for privacy protections and increased user transparency and control.

Growing concerns over social media's impact on youth mental health and well-being have driven the bulk of age verification and other youth-focused online safety bills.1 Yet these concerns are complex, and no single technology solution can or will adequately address what are ultimately social challenges.2 It is important to evaluate whether age verification requirements can effectively address the core concerns before moving forward with legislation.

Given the challenges and risks of age verification mandates, more feasible and effective methods for advancing children's safety online should be explored. First and foremost, comprehensive federal data privacy legislation, such as the American Data Privacy and Protection Act, remains the best method for protecting children, and all users, online.3 Such legislation would require stronger data minimization, limit the ability of companies to use the data they do collect, and create special protections for sensitive data like biometric information and precise geolocation data.

Other avenues, such as requiring platform transparency, customizable design features, or safety- and security-by-design principles, can offer users greater insight and control over the algorithms that impact their experiences online while standardizing a base level of data privacy and security. These methods could allow parents, youth, and all other types of users to tailor their default settings and the content they see online to better fit their needs.

2. Design for User Privacy and Choice When Building Age Verification Technology

In online spaces where age verification is absolutely necessary, age restrictions can be implemented through strict age verification that optimizes user privacy via data minimization and preserves user choice by standardizing third-party facilitation and best practices.

Age verification is incompatible with user needs and expectations for anonymity online and is likely to raise constitutional concerns. Mandates for age verification can infringe on user rights and put their privacy at risk. This is especially concerning because current age verification practices require users to share government-issued identification, which could disproportionately impact vulnerable communities and restrict access to politicized content. In spaces with strong authentication needs or clear precedent for age-based restrictions (such as engaging in online gambling or purchasing alcohol and tobacco products), strict age verification built on data minimization principles and third-party facilitators can offer a rights-respecting method for implementing age restrictions.

Although the French Commission on Information Technology and Liberties (CNIL) concluded that no solution fully met its privacy criteria, its 2022 report discusses a proof of concept showing that it is possible "through a third-party system, to guarantee the protection of the individual's identity and the principle of data minimisation, while maintaining a high level of assurance on the accuracy of the data transmitted."4 Using two cryptographic concepts (group signatures and zero-knowledge proofs), researchers built "a possible implementation of an age verification system that allows accessing restricted websites without sharing other personally identifiable data."5 In other words, a system could be used in which the website only learns the age (or age range) of the visitor, and the age verifier learns nothing about the site requesting the verification. This work shows that privacy-respecting age verification is possible via the use of existing and well-understood cryptographic principles.
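To make the data-minimization idea concrete, the toy sketch below models the third-party flow in plain Python: a verifier issues a token carrying only an age-range claim, and a site validates the token without receiving any other personal data. This is only an illustration of the information flow, not of the CNIL prototype's actual cryptography; a real deployment would use group signatures or zero-knowledge proofs (as in the proof of concept) rather than the shared-secret MAC used here, and all function names are hypothetical.

```python
import hmac
import hashlib
import json
import time
import secrets

# Hypothetical third-party age verifier. In the CNIL proof of concept this
# role is played by group signatures and zero-knowledge proofs; here a
# shared-secret MAC stands in so the flow is runnable with the stdlib alone.
VERIFIER_KEY = secrets.token_bytes(32)  # held by the verifier; sites hold a copy


def issue_age_token(user_birth_year: int, current_year: int) -> dict:
    """The verifier checks the user's age and emits a token carrying ONLY
    an age-range claim: no name, no ID number, no requesting-site identity."""
    claim = {
        "over_18": current_year - user_birth_year >= 18,
        "issued_at": int(time.time()),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}


def site_accepts(token: dict, max_age_seconds: int = 600) -> bool:
    """A site validates the token without learning anything beyond the claim,
    and the verifier never learns which site performed this check."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFIER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False  # forged or tampered token
    fresh = time.time() - token["claim"]["issued_at"] <= max_age_seconds
    return fresh and token["claim"]["over_18"]


token = issue_age_token(user_birth_year=2000, current_year=2025)
print(site_accepts(token))  # an adult's token validates; the site saw only "over_18"
```

The key property, which the real cryptographic constructions achieve far more robustly, is that each party sees only the minimum it needs: the site learns one boolean, and the verifier learns nothing about where the token is presented.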

Standardization of strict age verification can foster a varied ecosystem of third-party age assurance providers that enables greater user choice in who is verifying their age, promotes greater safety and security measures through competition, and avoids concentrating verification solely within a few large tech companies. However, little conclusive work has been done so far in this area.6 Even if a standard is agreed upon, there must be enough critical mass behind its use to actually make such a system useful for every type of site that may need to verify a user’s age. As CNIL demonstrated, it is already technically possible to build an age verification system that assures privacy, but in the absence of an established and widely adopted protocol, it is unlikely that strict age verification can be widely done at scale in privacy-preserving ways.

Until there is a secure standard, age verification should be accompanied by security- and privacy-by-design practices, and online operators should offer users a variety of methods to confirm their age.

3. Require Greater Transparency and Agency over User Experience

Platforms are moving ahead with alternative approaches to protecting youth from potentially harmful content and interactions online, such as limiting requests for hard identifiers, offering age-specific features, and providing parental controls. These approaches should be evaluated for both potential benefits (greater transparency and agency over online experiences) and risks (data privacy and constitutional concerns) to highlight promising approaches to youth safety online.

As detailed in the Social Media Platforms and Age-Appropriate Practices section of this report, many platforms integrate age-specific features for users between the ages of 13 and 18. These can include default privacy settings on accounts; app usage dashboards and settings; and restrictions on posting content, sending and receiving messages, accessing promoted and recommended content, and screen time. In addition, parental controls and linked accounts can help assuage some parents' concerns by allowing them greater supervision of, and more decision-making power over, their child's online experience.

While these features respond to current concerns about access to age-inappropriate material and the potentially addictive nature of technology, it is important to note their limitations. These features are not activated unless the associated account age is accurate. Additionally, parental controls place a high burden on parents, who do not always have the capacity, willingness, or digital skills to use monitoring tools effectively; in fact, most parents do not use them at all.7 When such tools are in use, increased surveillance of kids online may exacerbate digital abuse by subjecting children and teens to extreme monitoring and control over their online presence.8 This could be particularly dangerous for LGBTQ+ youth, those seeking access to reproductive health care, or those experiencing sexual, physical, or emotional abuse at home.

When advocating for safer, healthier online spaces for youth, legislators and civil society should evaluate existing approaches to creating age-appropriate online environments, along with their associated risks, to identify successful techniques that can be adopted across online operators.

4. Understand That Content-Based Restrictions Will Have Unintended Consequences for People from Vulnerable Communities

Content-based restrictions will face strict constitutional scrutiny and should be used sparingly to avoid allowing the politicization of content to drive mandates that change the nature of the internet and disproportionately impact vulnerable communities.

Much age verification legislation stems from ongoing conversations about what information is and is not appropriate for young people to access. While content-based restrictions will face strict constitutional scrutiny, any allowance to restrict speech in the name of protecting children can have far-reaching consequences for freedom of expression and access to information.

As legislators and courts determine the scope of age verification requirements, sensitive or politicized topics, like those surrounding gender, sexuality, race, and reproductive health care, may become targets of censorship or age-gating.9 Allowing the politicization of content to drive age verification requirements can set a dangerous precedent for years to come, leaving users and companies responding to shifting notions of what is and is not age-appropriate.

5. Invest in Cross-Sector Research and Collaboration to Create Standardized Best Practices and Protocols for Age Verification

More research is needed to fully understand the potential impacts of age verification and implementation. Insights from industry, civil society, regulators, and users of all ages should be taken into consideration to create standardized best practices and protocols for age verification.

Governments and societies should carefully consider how age verification may unintentionally impact users. Mandates will increase the frequency with which people are asked to provide government-issued identification to access online spaces and may desensitize users to requests for personal and sensitive information. Combined with a lack of clarity about what constitutes age-appropriate material, this could lead to an increase in requests for age verification, even in online spaces where identification is normally neither required nor needed, as well as associated scams. Governments should play a role in determining the standard for verification and identification online, the role of digitized or digital IDs, and alternative age verification processes for people who lack traditional identification.

To further mitigate the negative impacts of age verification on users, cross-sector collaboration is needed to understand the full range of implications, develop best practices, and standardize protocols. This work is in progress at various stages.

The Digital Trust & Safety Partnership outlined five guiding age assurance principles and best practices that put user choice, safety, and needs at the forefront of age assurance practices. Google’s recent Legislative Framework to Protect Children and Teens Online offers thoughtful considerations to improve youth experiences while minimizing user risk and ensuring oversight and accountability. Previous projects at the International Organization for Standardization and Institute of Electrical and Electronics Engineers could be revived to develop common technical standards for conducting and facilitating age verification at acceptable levels of efficacy, privacy, and security.

Cross-sector collaboration provides opportunities to include the perspectives of actors and users of all ages in crafting design approaches and legislation.

Citations
  1. Health Advisory on Social Media Use in Adolescence (Washington, DC: American Psychological Association, May 2023), source.
  2. Candice L. Odgers, “The great rewiring: is social media really behind an epidemic of teenage mental illness?” Nature, March 29, 2024, source; danah boyd, “KOSA isn’t designed to help kids,” Medium, January 31, 2024, source.
  3. Research for this report was completed prior to the unveiling of the American Privacy Rights Act, which would establish federal data privacy protections for Americans. The bill classifies data for users under the age of 17 as sensitive data, which could potentially lead online operators to institute age verification requests. See: House Committee on Energy and Commerce, “Committee Chairs Rodgers, Cantwell Unveil Historic Draft Comprehensive Data Privacy Legislation,” U.S. House of Representatives, April 7, 2024, source.
  4. “Online age verification: balancing privacy and the protection of minors,” Commission Nationale de l’Informatique et des Libertés, September 22, 2022, source.
  5. Jérôme Gorin, Martin Biéri, and Côme Brocas, “Demonstration of a privacy-preserving age verification process,” Laboratoire d’Innovation Numérique de la Commission Nationale de l’Informatique et des Libertés, June 23, 2022, source.
  6. Age Check Certification Scheme, “ISO Working Draft Age Assurance Systems Standard,” euCONSENT, November 2021, source.
  7. Naomi Nix, “Meta says its parental controls protect kids. But hardly anyone uses them,” Washington Post, January 30, 2024, source.
  8. “Types of Abuse,” National Domestic Violence Hotline, source.
  9. Hannah Natanson, “Objection to sexual, LGBTQ content propels spike in book challenges,” Washington Post, June 9, 2023, source; Hannah Natanson, “Half of challenged books return to schools. LGBTQ books are banned most,” Washington Post, December 23, 2023, source; “Defending Our Right to Learn,” American Civil Liberties Union, March 10, 2022, source; John Villasenor, “Can a state block access to online information about abortion services?” Brookings Institution, July 27, 2022, source.