Social Media Platforms and Age-Appropriate Practices

In response to concerns from parents, teens, schools, legislators, and regulators about children’s experiences online, state and federal age verification legislation is beginning to focus on social media platforms.1 Growing evidence shows that while social media is not inherently bad for youth, it can facilitate and exacerbate challenges to children’s mental health and safety online.2 While more efforts are needed to ensure children can safely and securely access online spaces, age verification mandates present various challenges and may not address the root concerns surrounding social media use.

Already, websites and social media platforms implement a variety of age assurance practices to enforce previously established legal age restrictions—such as those under the Children’s Online Privacy Protection Act (COPPA) or age restrictions related to online gambling and alcohol and tobacco sales—and to uphold their own account age requirements. For example, when age restrictions are mandated by law, online operators may use hard identifiers such as a photo ID or credit card to confirm a user’s age, mirroring age assurance practices in the physical world.

In the absence of age limits set by law—for example, for a platform’s own account holder age requirements—many platforms and websites rely on self-declaration. This is usually done by asking users to input their date of birth when creating an account, link to an existing account that contains date of birth information, or simply check a box confirming that they are the required age.

However, these methods are not foolproof, as users can simply declare they are of the required age when they are not. Age verification legislation intends to close these loopholes but leaves online platforms grappling to respond to concerns about children’s access to social media and age-inappropriate material while minimizing the potential risks of age verification.

As a result, platforms have employed a variety of strategies to create safer online spaces for children and teens, such as requiring age verification only when an account holder is suspected of being underage, introducing age-specific features for users, and creating parental controls. These strategies have their own trade-offs and considerations for user rights, data privacy, and security, but they may offer insight into more direct and effective strategies for promoting kids’ safety online than age verification mandates.3

Detecting and Verifying Underage Accounts

To identify users who do not self-declare their age accurately, some social media companies are incorporating measures to flag when a user may be under the required age. For example, TikTok scans public videos of users to help determine account holders’ ages.4 Meta uses artificial intelligence to detect underage account holders based on account activity and linked profiles.5 Additionally, both Meta’s Instagram and Facebook platforms allow users to report accounts suspected of being held by an underage user.6 If a user tries to change their self-reported age or has been identified as underage, platforms including Pinterest, Discord, TikTok, and Google require users to verify their age with a government-issued ID, a credit card, or a live photo.7 When users try to edit their account age from under 18 to over 18 years old, Meta’s Instagram requires them to verify their age by submitting a government-issued ID, recording a video selfie to be analyzed by age-estimation AI, or asking mutual friends to vouch for their age.8 This strategy may reduce the personal or sensitive data that users must share with a platform by requesting age verification only from account holders suspected of being under the required age. However, the methods used to detect underage account holders may subject users to intrusive surveillance and monitoring of their online activity and may incorrectly flag account holders as underage.

Age-Specific Design Features

Some platforms employ age-specific features to protect youth from potentially harmful content and interactions online. For example, Roblox is working to incorporate an age verification feature that will allow users 13 years of age and older to submit a government-issued photo ID and a selfie to verify their age to “access innovative social capabilities and age-appropriate content.”9 Enabling account restrictions on Roblox will lock an account’s contact settings to block messages and chats from other users and limit play to experiences recommended for all ages.10

Google offers a suite of digital well-being tools that allows all users to set daily limits and timers on apps, customize or turn off notifications, and set bedtime reminders—some of which are turned on by default for users who are 13 to 17 years old on YouTube.11 In addition, Google has specific ad policies for teens that restrict personalized ads or ads containing sensitive content.12

Snapchat implements specific default settings for teens, including limiting contacts to friends and existing phone contacts, restricting location sharing, and sending in-app reminders about privacy and safety settings.13 Similarly, TikTok has a variety of age-specific features, such as prohibiting users under 13 years old from posting videos or comments, setting accounts held by 13 to 15 year olds to private by default, and restricting live streaming and direct messaging for users under the age of 16.14 In March 2023, TikTok introduced new age-specific features, including an automatic 60-minute screen time limit for users under 18 and created a screen time dashboard and controls for all users.15

In January 2024, Meta released new policies for teens, hiding age-inappropriate content, limiting content recommendations, and defaulting content recommendations to the most restrictive settings.16 While these features help create a safer and healthier online experience for young people, they are not activated unless an account is created with the correct age.

Parental Controls

Companies are also creating more opportunities for parents to play a greater role in supervising their child’s online activity. Many platforms already offer options for parents to set restrictions, monitor activity, manage permissions, and link to their children’s accounts.

TikTok’s Family Pairing allows parents to link their TikTok account to their child’s account to manage settings for various features, including account discoverability, searches, direct messaging, and screen time.17 Similarly, Google’s Family Link allows parents to manage parental controls such as SafeSearch and edit settings on YouTube Kids and YouTube accounts.18

Apple’s Family Sharing allows parents to create Apple IDs for their children, set parental controls, and receive warnings about sensitive content sent or received by a child’s account.19 In 2022, Snapchat introduced its Family Center tool, which allows parents to view their teen’s privacy and safety settings, manage parental controls, and restrict sensitive content.20 Likewise, Discord’s Family Center allows parents to see who their child is talking to on the platform, which forums they participate in, and which friends they have recently added.21 In 2023, Meta began launching new parental supervision features on Facebook and Instagram that allow parents to see whom their child is friends with or messaging on both apps.22

While parental controls offer parents greater insight into and supervision of their child’s online life, these controls may infringe upon a young person’s privacy and enable unnecessary surveillance of their online activity.

Citations
  1. Monica Anderson and Michelle Faverio, “81% of U.S. adults – versus 46% of teens – favor parental consent for minors to use social media,” Pew Research Center, October 31, 2023, source; Donna St. George, “Schools sue social media companies over youth mental health crisis,” Washington Post, March 19, 2023, source; Cristiano Lima-Strong and Naomi Nix, “41 states sue Meta, claiming Instagram, Facebook are addictive, harm kids,” Washington Post, October 24, 2023, source.
  2. Health Advisory on Social Media Use in Adolescence (Washington, DC: American Psychological Association, May 2023), source; Georgia Wells, Jeff Horwitz, and Deepa Seetharaman, “Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show,” Wall Street Journal, September 14, 2021, source; Social Media and Youth Mental Health: U.S. Surgeon General Advisory (Washington, DC: U.S. Department of Health and Human Services, 2023), source.
  3. The social media strategies shared in this report are not a comprehensive view of all the strategies employed by social media platforms, but instead are intended to provide a snapshot of what popular social media platforms are doing to improve youth experiences online.
  4. Sarah Perez, “TikTok CEO says company scans public videos to determine users’ ages,” TechCrunch, March 23, 2023, source.
  5. Pavni Diwanji, “How Do We Know Someone Is Old Enough to Use Our Apps?” Meta, July 27, 2021, source; Erica Finkle, Sheng Lou, Christine Agarwal, and Dave Fryer, “How Meta uses AI to better understand people’s ages on our platforms,” Meta, June 22, 2022, source.
  6. “Report a child under 13 on Instagram,” Instagram Help Center, source; “Report an Underage Child,” Facebook Help Center, source.
  7. Regarding age restriction rules, Google’s YouTube Official Blog states, “If our systems are unable to establish that a viewer is above the age of 18, we will request that they provide a valid ID or credit card to verify their age. We’ve built our age-verification process in keeping with Google’s Privacy and Security Principles.” However, Google does not detail what methods their systems use to establish a viewer’s age. See: The YouTube Team, “Using technology to more consistently apply age restrictions,” YouTube Official Blog (blog), Google, September 22, 2020, source.
  8. “Introducing New Ways to Verify Age on Instagram,” Meta, June 23, 2023, source.
  9. “Age ID Verification,” Roblox, source.
  10. “Account Restrictions,” Roblox, source.
  11. “Tools to help you achieve your own personal sense of digital wellbeing,” Google, source; James Beser, “New safety and digital wellbeing options for younger people on YouTube and YouTube Kids,” Canada Blog (blog), Google, August 10, 2021, source.
  12. “Ad-serving protections for teens,” Google Advertising Policies, source.
  13. “Safeguards for Teens,” Snapchat, source.
  14. “Parents’ Ultimate Guide to TikTok,” Common Sense Media, December 14, 2022, source.
  15. Cormac Keenan, “New features for teens and families on TikTok,” TikTok, March 1, 2023, source.
  16. “New Protections to Give Teens More Age-Appropriate Experiences on Our Apps,” Meta, January 9, 2024, source.
  17. “User safety,” TikTok, source.
  18. “Understand YouTube & YouTube Kids options for your child,” YouTube for Families Help, source; “Filter or blur explicit results with SafeSearch,” Google Search Help, source.
  19. “Family Sharing. Share your favorite things with your favorite people,” Apple, source; “Set up parental controls with Family Sharing on iPhone,” iPhone User Guide, source.
  20. “Tools and Resources for Parents,” Snapchat, source.
  21. “Stay Connected With Your Teen Using Discord’s Family Center,” Discord Blog (blog), July 11, 2023, source.
  22. “Giving Teens and Parents More Ways to Manage Their Time on Our Apps,” Meta, June 27, 2023, source.