II. Challenging Traditional Narratives

Before examining the link between human and cybersecurity vulnerabilities, there are a few narratives and assumptions that must be addressed. These narratives have shaped how policymakers and the public understand digital risk, often obscuring the structural and human factors that may make people vulnerable. Challenging these narratives is essential to building policy that adequately addresses the full scope of today’s cybersecurity landscape. This section outlines these narratives and illustrates how they influence public perception and policy responses to cybersecurity threats.

It Could Never Be Me!

“I never thought I was the kind of person to fall for a scam,” said Charlotte Cowles, a financial advice columnist.1 She was the victim of an elaborate scam that ended with her handing over $50,000 in a taped-up shoe box. She described herself as the opposite of what many would consider an “easy” victim of scams: rational, economically secure, financially literate, and socially active. Yet she still lost tens of thousands of dollars in one day. Cowles’s case does not directly address the human vulnerabilities discussed in this report, but she exemplifies a mindset shared by many—that it could never happen to me.

According to the Federal Trade Commission (FTC), consumer losses to fraudulent activity, including scams, rose 25 percent from 2023 to 2024.2 Scams, defined as a “type of fraud where a victim is deceived into willingly sending money or sharing personal information,” have found a home on social media, which became the most popular channel scammers used to contact their victims between January 2021 and June 2023.3 During this period, $2.7 billion was reported lost to fraud originating on social media, outpacing websites and apps, which accounted for $2 billion in reported losses.

These scams will continue to succeed as long as everyone believes they are the exception. Many people, like Charlotte Cowles, believe that they could never become a victim of a scam. The FTC’s figures, which notably capture only reported losses (meaning the true toll is even higher), reveal a disturbing truth: no one is exempt.

A Focus on the Technical, Even in Policy

While the cybersecurity industry emphasizes technical solutions for identifying, analyzing, and mitigating system vulnerabilities, this focus alone is not sufficient for cybersecurity policy that protects everyone. The prevailing logic centers on securing a company’s systems, a necessary undertaking, and one that understandably shapes policymakers’ cybersecurity priorities. However, technical solutions can overlook a critical component briefly highlighted in the 2016 RAND Framework for Exploring Cybersecurity Policy Options: the user and their vulnerabilities.4 While the RAND framework was developed to help policymakers draft cybersecurity policy, it also recognizes that users must be better informed about the vulnerabilities that affect them. In the nearly 10 years since the framework was devised, however, this recognition has not translated into sufficient policy action. Policymakers should understand human vulnerabilities when developing frameworks and policies to ensure people are adequately protected.

The User Is the Problem

“The user is the problem” is a narrative that places the victim at fault. “You should have known!” or “Why would you trust that?” are phrases commonly heard by those who have fallen victim to a scam or other form of fraud.5 This framing is problematic because, while there are protective measures one can take, it is impossible to predict and counter every single cybercrime. The narrative is also unproductive in a world that does not currently provide adequate education about how to protect oneself online. In the United States, laws and relevant frameworks have been slow to address online risks such as data privacy and cyber incident reporting, which means there are still few protections (and protectors) against scams and fraud perpetrated online. The FTC enforces consumer protection law, but when harm is not easily quantified or defined, the FTC can only do so much.

Only Certain People Get Scammed

The idea that only certain people get scammed, perhaps the most prevalent assumption about cybercrimes, is related to the assumption that “it could never be me!”6 The difference is that policies have been shaped by assumptions that only certain groups, such as the elderly or those who do not have active digital lives, fall prey to digital scams. Yet these assumptions are not true: Younger adults, according to FTC data, were found to be 34 percent more likely to report having lost money to fraud than the elderly.7 Scams cut across every political, demographic, and cultural line and affect people from all walks of life.

The increased use of artificial intelligence (AI) makes this assumption even more harmful. Despite internet users’ increasing exposure to AI tools like ChatGPT and AI-generated content, people remain extremely susceptible to AI-enabled scams. There have been a slew of recent high-profile cases in which individuals believed they were chatting with celebrities when they were actually interacting with AI chatbots. Ahead of the 2024 New Hampshire presidential primary, thousands of voters were discouraged from voting by phone calls they believed to be from President Joe Biden but that were really AI-generated robocalls. The assumption that only certain people get scammed is not only untrue in the age of AI; hackers and others with malicious intent are increasingly using AI-generated content to scam. AI, along with other digital tools and systems, only exacerbates the issue and further exposes the flawed belief that those with a high propensity to be scammed fit a certain profile.

Most Individuals Have High Digital Skills and Literacy

Given how many people spend their lives online, there is also an assumption that most already have high degrees of digital literacy and skills. However, one-third of Americans lack the basic digital skills needed to engage successfully in the modern economy.8 A 2023 Pew Research study found that fewer than 60 percent of U.S. adults answered digital literacy questions correctly.9 Moreover, as a 2025 report by the Harvard Business School noted, “At a time when AI is expected to streamline business operations and render some functions obsolete, inexperience with digital technology could limit people’s careers.”10 The shift toward AI is only heightening the urgency of expanding digital skills.

This lack of digital skills, literacy, and access could lead to hypothetical situations like Amy’s or real-world situations like Charlotte Cowles’s. Amy lacked digital access—but equally important, she lacked the digital literacy that could have allowed her to detect the threat and the digital skills to take appropriate precautions.

Citations
  1. Charlotte Cowles, “The Day I Put $50,000 in a Shoe Box and Handed It to a Stranger,” The Cut (blog), New York Magazine, February 15, 2024, source.
  2. “New FTC Data Show a Big Jump in Reported Losses to Fraud to $12.5 Billion in 2024,” Federal Trade Commission, March 10, 2025, source.
  3. Lana Swartz, Alice E. Marwick, and Kate Larson, “Scam GPT: GenAI and the Automation of Fraud,” Data & Society, May 21, 2025, source; Emma Fletcher, “Social Media: A Golden Goose for Scammers,” Federal Trade Commission, October 6, 2023, source.
  4. Igor Mikolic-Torreira et al., A Framework for Exploring Cybersecurity Policy Options (RAND, 2016), source.
  5. “Our Words Matter When It Comes to Fraud,” AARP, source.
  6. See Tobi Opeyemi Amure, “The Surprising Truth About the Age Group Most Likely to Fall for Financial Fraud,” Investopedia, April 17, 2025, source.
  7. “Who Experiences Scams? A Story for All Ages,” Federal Trade Commission, December 8, 2022, source.
  8. See Joshua Kendall, Anthony Colavito, and Zach Moller, “America’s Digital Skills Divide,” Third Way, January 12, 2023, source.
  9. Olivia Sidoti and Emily A. Vogels, “What Americans Know About AI, Cybersecurity, and Big Tech,” Pew Research Center, August 17, 2023, source.
  10. Danna Lorch, “America’s Digital Divide: Where Workers Are Falling Behind,” Harvard Business School, February 10, 2025, source.