Table of Contents
- Executive Summary
- Introduction
- A Tale of Two Algorithms
- Russian Interference, Radicalization, and Dishonest Ads: What Makes Them So Powerful?
- Algorithmic Transparency: Peeking Into the Black Box
- Who Gets Targeted—Or Excluded—By Ad Systems?
- When Ad Targeting Meets the 2020 Election
- Regulatory Challenges: A Free Speech Problem—and a Tech Problem
- So What Should Companies Do?
- Key Transparency Recommendations for Content Shaping and Moderation
- Conclusion
When Ad Targeting Meets the 2020 Election
Discriminating in housing ads is against the law.1 Discriminating in campaign ads may not be against the law, but many Americans strongly believe that it is bad for democracy.2
The same advertising systems that target people based on their interests and affinities were used to manipulate voters in the 2016 and 2018 elections. Most egregiously, voters thought to lean toward Democratic candidates were targeted with ads containing incorrect information about how, when, and where to vote.3 Facebook, Google, and Twitter now prohibit this kind of disinformation, but we don’t know how effectively the rule is enforced: none of these companies publishes information about its processes for enforcing advertising rules or about the outcomes of those processes.4
At the scale that these platforms operate, fact-checking ad content is hard enough when the facts in question are indisputable, such as the date of an upcoming election. It’s even thornier when subjective claims about an opponent’s character or details of their policy proposals are in play. Empowering private companies to evaluate truth would be dangerously undemocratic, as Facebook CEO Mark Zuckerberg has himself argued.5 In the absence of laws restricting the content or the targeting of campaign ads, campaigns can easily inundate voters with ads peddling misleading claims on issues they care about, in an effort to sway their votes.
Barack Obama and Donald Trump both owe their presidencies in part to this type of targeting, though exactly how much is impossible to quantify. The 2008 Obama campaign pioneered the use of voters’ personal information, and his reelection team refined the practice in 2012. In 2016, thanks to invaluable guidance from Facebook itself, Trump ran “the single best digital ad campaign […] ever seen from any advertiser,” as Facebook executive Andrew “Boz” Bosworth put it in an internal memo.6 The president’s 2020 reelection campaign is on the same track.7 More than ever, elections have turned into marketing contests. This shift long predates social media companies, but targeted advertising and content-shaping algorithms have amplified the harms and made it harder to address them.
In the 2020 election cycle, we find ourselves in an online environment dominated by algorithms that appear ever-more powerful and effective at spreading content to precisely the people who will be most affected by it, thanks to continued advances in data tracking and analysis. Some campaigns are now using cell phone location data to identify churchgoers, Planned Parenthood patients, and other similarly sensitive groups.8 Many of the risks we have illustrated with individual examples thus far will be in play, and algorithms will likely multiply their effects for everyone who relies on social media for news and information.
We are entering a digital perfect storm fueled by deep political cleavages, opaque technological systems, and billions of dollars in campaign ad money that may prove disastrous for our democracy.
We need look no further than the bitter debates that played out around political advertising in the final months of 2019 to see just how high the stakes have become.
In October 2019, when Donald Trump’s reelection campaign purchased a Facebook ad claiming that “Joe Biden promised Ukraine $1 billion if they fired the prosecutor investigating his son's company,” Facebook accepted it, enabling thousands of users to see (and perhaps believe) it. It didn’t matter that the claim was unfounded, and had been debunked by two of Facebook’s better-known fact-checking partners, PolitiFact and Factcheck.org.
When Facebook decided to stand by this decision, and to let the ad stay up, Sen. Elizabeth Warren (D-Mass.)—who was running for the Democratic nomination at the time, alongside Biden—ran a Facebook ad of her own, which made the intentionally false claim that “Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election.”9
The ad was intended to draw attention to how easily politicians can spread misinformation on Facebook. Indeed, unlike print and broadcast ads, online political ads are completely unregulated in the United States: parties, campaigns, and outside groups are free to run any ads they want, if the platform or advertising network lets them. This gives companies like Google, Facebook, and Twitter tremendous power to set the rules by which political campaigns operate.
Soon after Warren’s attention-grabbing ad, the New York Times published a letter that had circulated internally at Facebook, and was signed by 250 staff members. The letter’s authors criticized Facebook’s refusal to fact-check political ads and tied the issue to ad targeting, arguing that it “allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy” and could “cause harm in coming elections around the world.”10
Among other demands, the authors urged Facebook to restrict targeting for political ads. But the company did not relent, reportedly at the insistence of board member Peter Thiel.11 The only subsequent change that Facebook has made to this system is to allow users to opt out of custom audiences, a tool that lets advertisers upload lists of specific individuals to target.12
In response to increased public scrutiny around political advertising, the other major U.S. platforms also moved to tweak their own approaches to political ads. Twitter, which earned only $3 million from political ads during the 2018 U.S. midterms,13 announced in October that it would no longer accept political ads14 and would restrict how “issue-based” ads (which argue for or against a policy position without naming specific candidates) can be targeted.15 Google elected to limit audience targeting for election ads to age, gender, and zip code, though it remains unclear precisely what kind of algorithm will be able to correctly identify (and then disable targeting for) election ads. None of the companies has given any indication that it conducted a human rights impact assessment or other due diligence prior to announcing these changes.16
We might read Twitter and Google’s decisions as an acknowledgment that the algorithms underlying the distribution of targeted ads are in fact a major driver of the kinds of disinformation campaigns and platform weaponization that can so powerfully affect our democracy. However, the companies’ insistence on drawing unenforceable lines around "political ads," "issue ads," and "election ads" highlights how central targeting is to their business models. Facebook’s decision regarding custom audiences signals the same thing: as long as users are included in custom audiences by default, the change will have limited effects.
Ad targeting is just the beginning of such influence campaigns. As a Democratic political operative told the New York Times, “the real goal of paid advertising is for the content to become organic social media.”17 Once a user boosts an ad by sharing it as their own post, the platform’s content-shaping algorithms treat it as normal user content and highlight it to people in the user’s network who are more likely to click, like, and otherwise engage with it, allowing campaigns to reach audiences well beyond the targeted segments.
The major platforms’ newly introduced ad libraries typically allow the public to find out who paid for an ad, how much they spent, and some targeting parameters. They shed some light on the targeted campaigns themselves. But without meaningful algorithmic transparency, it is impossible to know how far these messages travel.18 All we know is that, between ad-targeting and content-shaping algorithms, political campaigns are dedicating ever more resources to mastering the dark arts of data science.
We have described the nature and knock-on effects of two general types of algorithms. The first drives the distribution of content across a company’s platform. The second seeks to identify and eliminate specific types of content that have been deemed harmful, either by the law, or by the company itself. But we have only been able to scratch the surface of how these systems really operate, precisely because we cannot see them. To date, we only see evidence of their effects when we look at patterns of how certain kinds of content circulate online.
Although the 2020 election cycle is already in full swing, we are only beginning to understand just how powerful these systems can be in shaping our information environments, and in turn, our political reality.
Citations
- Fair Housing Act. 42 U.S.C. § 3604(c). source
- Anderson, Janna, and Lee Rainie. 2020. “Many Tech Experts Say Digital Disruption Will Hurt Democracy.” Pew Research Center: Internet, Science & Tech. source; McCarthy, Justin. 2020. In U.S., Most Oppose Micro-Targeting in Online Political Ads. Knight Foundation. source
- This capability was used for voter suppression in both 2016 and 2018. The major platforms now prohibit ads that “are designed to deter or prevent people from voting,” but it is not at all clear how they will detect violations. See Hsu, Tiffany. 2018. “Voter Suppression and Racial Targeting: In Facebook’s and Twitter’s Words.” The New York Times. source; Leinwand, Jessica. 2018. “Expanding Our Policies on Voter Suppression.” Facebook Newsroom. source
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington D.C.: New America. source
- Zuckerberg, Mark. 2019. “Zuckerberg: Standing For Voice and Free Expression.” Speech at Georgetown University, Washington D.C. source
- Roose, Kevin, Sheera Frenkel, and Mike Isaac. 2020. “Don’t Tilt Scales Against Trump, Facebook Executive Warns.” The New York Times. source
- Wong, Julia Carrie. 2020. “One Year inside Trump’s Monumental Facebook Campaign.” The Guardian. source
- Edsall, Thomas B. 2020. “Trump’s Digital Advantage Is Freaking Out Democratic Strategists.” The New York Times. source
- The ad continued: “if Trump tries to lie in a TV ad, most networks will refuse to air it. But Facebook just cashes Trump’s checks. Facebook already helped elect Donald Trump once. Now, they’re deliberately allowing a candidate to intentionally lie to the American people. It’s time to hold Mark Zuckerberg accountable – add your name if you agree.” Epstein, Kayla. 2019. “Elizabeth Warren’s Facebook Ad Proves the Social Media Giant Still Has a Politics Problem.” Washington Post. source
- Isaac, Mike. 2019. “Dissent Erupts at Facebook Over Hands-Off Stance on Political Ads.” The New York Times. source; The New York Times. 2019. “Read the Letter Facebook Employees Sent to Mark Zuckerberg About Political Ads.” The New York Times. source
- Glazer, Emily, Deepa Seetharaman, and Jeff Horwitz. 2019. “Peter Thiel at Center of Facebook’s Internal Divisions on Politics.” Wall Street Journal. source
- Barrett, Bridget, Daniel Kreiss, Ashley Fox, and Tori Ekstrand. 2019. Political Advertising on Platforms in the United States: A Brief Primer. Chapel Hill: University of North Carolina. source
- Kovach, Steve. 2019. “Mark Zuckerberg vs. Jack Dorsey Is the Most Interesting Battle in Silicon Valley.” CNBC. source
- Defined as “content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome.” In the U.S., this applies to independent expenditure groups like PACs, Super PACs, and 501(c)(4) organizations.
- Its policies prohibit advertisers from using zip codes or keywords and audience categories related to politics (like “conservative” or “liberal,” or presumed interest in a specific candidate). Instead, issue ads can be targeted at the state, province, or regional level, or using keywords and audience categories that are unrelated to politics. See Barrett, Bridget, Daniel Kreiss, Ashley Fox, and Tori Ekstrand. 2019. Political Advertising on Platforms in the United States: A Brief Primer. Chapel Hill: University of North Carolina. source
- Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems – Pilot Study and Lessons Learned. Washington D.C.: New America. source
- Edsall, Thomas B. 2020. “Trump’s Digital Advantage Is Freaking Out Democratic Strategists.” The New York Times. source
- Rosenberg, Matthew. 2019. “Ad Tool Facebook Built to Fight Disinformation Doesn’t Work as Advertised.” The New York Times. source