
Human Rights: Our Best Toolbox for Platform Accountability

Viewed through the lens of human rights standards, the COVID-19 infodemic is exacerbated by social media platforms’ failure to align the full range of their business operations with a commitment to human rights. Over time, their surveillance-based business models can corrode a society’s information environment, poisoning the flow of information on which deliberative democracy depends as well as creating the conditions for human rights violations, or worse.

As globally dominant social media platforms, Facebook, Google, and Twitter have also played a positive role in advancing global human rights, which is important to acknowledge: They enable free expression and the global free flow of information by providing opportunities for a wide range of speech about politics, health, and practically anything else. Laudably, they have taken significant, if uneven and imperfect, steps to shield users around the world from acts of government censorship and surveillance that violate human rights.1 Google and Facebook are both members of the Global Network Initiative (GNI), a multi-stakeholder organization that works with information and communications technology companies to protect users’ rights when governments demand that they delete content, restrict access to service, or hand over user information in ways that violate international human rights standards for freedom of expression and privacy.2

Yet free speech clearly has a dark side: misinformation can be deadly. Given this contradiction, it is little wonder that these three companies have found themselves at the center of heated debates about online speech and democracy.

International human rights standards, grounded in the Universal Declaration of Human Rights (UDHR) as well as the U.S.-ratified International Covenant on Civil and Political Rights, offer a framework for companies to protect and respect the rights of users and communities in a manner that addresses key gaps in U.S. law. The U.S. Constitution is designed primarily to protect people from government abuse: The First and Fourth Amendments forbid government censorship and unlawful search and seizure. But when it comes to how companies’ own business decisions affect individuals and society, U.S. law has largely left them off the hook. The law allows commercial social media platforms to moderate and take down content according to their own self-determined rules.3 In essence, U.S. platforms have the right to restrict free expression and shape users’ access to information on their platforms without public accountability: The companies can formulate their private rules through an opaque process, change them frequently, and present them in a way that is hard for many users to understand, let alone abide by.

Congress has over the past century passed many laws that forbid a vast range of abusive, exploitative, or discriminatory corporate behavior. But the question of how to regulate social media has been both difficult and contentious, given the technological, political, and constitutional complexities around anything having to do with speech—or framed as such. In our first report in this two-part series, we argued that changing the law to hold companies liable for content shared by users is not the answer, and that, due to technical realities, content moderation ultimately clashes with freedom of expression. The problems of private content moderation are compounded by amplification and targeting systems that shape the flow of information to users based on their personal traits or political beliefs. Yet at the same time, the three social media giants have failed to fully respect and protect users’ expression and information rights as articulated by international human rights standards. That failure has implications for other rights, including the rights to privacy, nondiscrimination, assembly and association, and economic, social, and cultural rights.

International human rights standards apply to companies as well as governments, and they are buttressed by emerging accountability frameworks driven by institutional investors and by civil society advocates pushing governments to adopt them. They offer a corporate accountability toolbox that can be used by policymakers, institutional investors, and other affected stakeholders in any country. Yet this toolbox has so far been largely overlooked by policymakers seeking to hold companies accountable for their social impact in the United States.

For social media platforms, the fundamental rights to free expression (UDHR Article 19) and privacy (UDHR Article 12) must be protected and respected so that people can use technology effectively to exercise and defend other political, religious, economic, and social rights. The pandemic further underscores how violations of free expression and information rights can cause or contribute to the violation of other rights, such as right to life, liberty, and security of person (UDHR Article 3). Similarly, violation of the right to privacy can also set off a chain reaction for the violation of other rights, including the human right to non-discrimination (UDHR Article 7, Article 23); freedom of thought (UDHR Article 18); freedom of association (UDHR Article 20); and the right to take part in the government of one’s country, directly or through freely chosen representatives (UDHR Article 21).4

The UN Guiding Principles on Business and Human Rights, approved in 2011 and co-sponsored by the U.S. government, have become the gold standard of “guidelines for States and companies to prevent, address and remedy human rights abuses committed in business operations.”5 For the tech sector, applying the principles means respecting users’ privacy and freedom of expression, along with all other human rights that their business operations may affect—both online and offline. Respecting users’ freedom of expression does not preclude a private company from establishing rules and moderation processes, or even editorial guidelines related to the purpose and scope of the service (for example, if it is meant to serve a specific community or purpose, such as a platform for members of a given profession). But to be effective, human rights standards must be “implemented transparently and consistently with meaningful user and civil society input,” and must be accompanied by industry-wide oversight and accountability mechanisms.6

Ranking Digital Rights (RDR) offers a dynamic and regularly updated instruction manual for the steps companies can take today—even in the absence of clear regulation—to improve their respect for human rights. Since 2015, the RDR Index has evaluated the world’s most powerful digital platforms and telecommunications companies according to their disclosed commitments to respect users’ human rights.7 Grounded in the UDHR and the UN Guiding Principles, the RDR Index methodology comprises more than three dozen indicators in three categories: governance, freedom of expression, and privacy. For 2020 we have upgraded the RDR Index methodology, placing greater focus on rights like non-discrimination and the right to life, liberty, and security of person.8
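
To make the aggregation idea concrete, here is a minimal sketch of how an indicator-based index of this kind can roll individual indicator scores up into category and overall scores. The three category names come from the text; the individual indicators, their scores, and the unweighted averaging are illustrative assumptions for this sketch, not RDR’s actual methodology.

```python
# Illustrative sketch of indicator-based index aggregation. The three
# category names mirror the text; the individual indicators, their 0-100
# scores, and the unweighted averaging are hypothetical, not RDR's
# actual methodology.

def category_score(indicators: dict[str, float]) -> float:
    """Average a category's indicator scores (each on a 0-100 scale)."""
    return sum(indicators.values()) / len(indicators)

company = {
    "governance": {"human_rights_commitment": 80, "board_oversight": 55},
    "freedom_of_expression": {"content_rules_transparency": 40,
                              "takedown_data_published": 25},
    "privacy": {"data_collection_disclosure": 35, "user_controls": 30},
}

category_scores = {name: category_score(ind) for name, ind in company.items()}
overall = sum(category_scores.values()) / len(category_scores)

for name, score in category_scores.items():
    print(f"{name}: {score:.1f}")
print(f"overall: {overall:.1f}")
```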

The Impact of Targeted Advertising and Algorithmic Systems on Human Rights

In our previous report, we described the growing use of algorithms to moderate content even before the coronavirus outbreak, and how these algorithms make frequent errors that lead to the deletion of journalism, activism, and other speech that does not actually violate platform rules. We also pointed to data from the past four iterations of the RDR Corporate Accountability Index. Only since 2018 have companies started to disclose any data about the volume and nature of content being removed for violating their rules, known variously as terms of service or community guidelines.9 But their disclosures about content moderation continue to fall far short of what free expression advocates and academic researchers believe is the minimum baseline standard of transparency and accountability for private arbiters of global online speech.10

In late 2019 and early 2020, our research team examined how transparent Facebook, Twitter, and YouTube are about how their automated algorithmic systems determine what users can post and share, what gets removed or blocked for violating the rules, and what information is shown to them most prominently through news feeds and recommendations. As we described in our last report, while Facebook, Google, and Twitter do not hide the fact that they use algorithms to shape content, they disclose little about how the algorithms actually work, what factors influence them, and how users can customize their own experiences.11

Pressure for change is growing with the public health crisis. Concerned that this opacity about what happens to content on platforms could be especially dangerous amidst heated public debates about how best to fight the pandemic without destroying people’s livelihoods, 75 organizations and researchers released an open letter calling on platforms to preserve information “about what their systems are automatically blocking and taking down.” Without such information, according to the nonprofit Center for Democracy and Technology, which helped draft the letter, “it will be hard to assess the efficacy of efforts to share vital public health information while combating the spread of coronavirus scams and pandemic profiteering.”12

Yet greater transparency alone will not address fundamental problems related to the targeted advertising business model. Companies have experimented with controlling, or being more transparent about, advertisements placed by or in support of political candidates in elections. Facebook now publishes a library of political ads, but with a serious flaw: the archive depends on political advertisers complying with labeling requirements, and on the platform detecting those that do not. “Currently, dishonest companies can spend an unlimited amount of undeclared money in favor of a political agenda through the Facebook ads platform,” argue the authors of one recent academic paper. Nor does the Facebook Ad Library disclose information about how these advertisers may have deployed targeted advertising tools, or what types of audience characteristics were targeted.13
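
The independent auditing approach taken by the paper cited above boils down to comparing ads that an outside classifier judges to be political against the platform’s declared archive. The sketch below is a toy illustration of that comparison, assuming a crude keyword “classifier” and hypothetical ad records; a real auditing system would use a trained model and live ad data.

```python
# Toy sketch of independent political-ad auditing: flag ads that look
# political to an outside classifier but are missing from the platform's
# declared archive. The keyword "classifier," the ad records, and the
# archive contents are hypothetical stand-ins for real systems.

POLITICAL_KEYWORDS = {"election", "vote", "candidate", "ballot"}

def looks_political(ad_text: str) -> bool:
    """Crude stand-in for a trained political-ad classifier."""
    return bool(set(ad_text.lower().split()) & POLITICAL_KEYWORDS)

observed_ads = [
    {"id": "a1", "text": "Vote for candidate X this November"},
    {"id": "a2", "text": "50% off running shoes this weekend"},
    {"id": "a3", "text": "Our ballot measure protects local jobs"},
]
declared_ids = {"a1"}  # ads the advertisers self-labeled as political

undeclared = [ad["id"] for ad in observed_ads
              if looks_political(ad["text"]) and ad["id"] not in declared_ids]
print(undeclared)  # ['a3']: a political ad that slipped past self-labeling
```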

Algorithmic targeting systems only work because the companies that profit from them have access to unfathomable amounts of user data. Google (which owns YouTube), Facebook, Twitter, and other companies whose business models rely on targeted advertising have every incentive to hoover up every crumb of data they can access, and to create more opportunities to surveil our daily behavior. Thanks to complex automated processes that frequently aren’t fully understood even by their own creators, digital platforms can identify the individuals who are most likely to make a purchase, be convinced by a political message, or be susceptible to various types of misinformation. Such manipulation is a form of discrimination, in addition to being a clear violation of freedom of opinion and of information, particularly in the context of elections.14
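
To make the mechanics concrete, here is a minimal sketch of the general pattern behind propensity-based targeting: score each user on predicted susceptibility to a message, then select only the top scorers. The features, hand-set weights, and threshold are illustrative assumptions, not any platform’s actual system; in practice such weights are learned from vast behavioral datasets.

```python
import math

# Minimal sketch of propensity-based targeting: score each user on
# predicted susceptibility to a message, then target only top scorers.
# The features, hand-set weights, and users are hypothetical; real
# systems learn such weights from vast behavioral datasets.

WEIGHTS = {"clicked_similar_ads": 2.0, "follows_related_pages": 1.5,
           "shares_political_content": 1.0}
BIAS = -2.5

def propensity(features: dict[str, float]) -> float:
    """Logistic score in [0, 1] from weighted behavioral signals."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

users = {
    "u1": {"clicked_similar_ads": 1, "follows_related_pages": 1,
           "shares_political_content": 1},
    "u2": {"clicked_similar_ads": 0, "follows_related_pages": 1,
           "shares_political_content": 0},
    "u3": {"clicked_similar_ads": 0, "follows_related_pages": 0,
           "shares_political_content": 0},
}

# Only users above the threshold ever see the message.
audience = [uid for uid, feats in users.items() if propensity(feats) > 0.5]
print(audience)  # ['u1']
```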

In our first report in this series, we outlined the dangers to democracy when targeted advertising is manipulated for political gain during elections. Election experts have raised concerns that the same technologies and tactics will also be used to spread disinformation about the voting process.15 Both user-declared and algorithmically deduced political leanings can be exploited to target voters with paid, deliberate disinformation about polling places and times, or with equally damaging but unverifiable claims about an opponent’s character or the effects of their policy proposals. This practice, which relies on the processing of vast amounts of user information without explicit consent, violates the right to non-discrimination by determining what information a user sees based on disclosed or assumed protected traits, such as race, ethnicity, age, gender identity and expression, sexual orientation, health, and disability.16

But the violation of individual users’ rights is not the only harm done. Company practices, incentivized by targeted advertising business models, can also contribute to the violation of the rights of entire communities or categories of people. For example, catering to advertisers’ desire to reach potential job applicants who are demographically similar to their current workforce leads digital platforms to enable their advertiser clients to illegally target job ads by gender,17 race,18 ethnicity,19 and other protected attributes.20
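
The mechanism behind this kind of discriminatory delivery can be illustrated with a toy “lookalike” example: selecting an audience by similarity to a demographically skewed seed group reproduces the skew even though no protected attribute is ever referenced. The users, interests, and similarity rule below are hypothetical.

```python
# Toy illustration of how "lookalike" targeting reproduces a seed
# audience's demographic skew. Users, interests, and the similarity
# rule are hypothetical; real systems use far richer feature sets.

seed_workforce = [  # current employees, heavily skewed toward one gender
    {"gender": "M", "interests": {"devops", "gaming"}},
    {"gender": "M", "interests": {"devops", "cycling"}},
]
seed_interests = set().union(*(e["interests"] for e in seed_workforce))

candidates = [
    {"id": "c1", "gender": "M", "interests": {"devops", "gaming"}},
    {"id": "c2", "gender": "F", "interests": {"nursing", "cycling"}},
    {"id": "c3", "gender": "M", "interests": {"devops"}},
    {"id": "c4", "gender": "F", "interests": {"teaching", "cycling"}},
]

def similarity(user: dict) -> float:
    """Fraction of a user's interests shared with the seed audience."""
    return len(user["interests"] & seed_interests) / len(user["interests"])

# Gender is never referenced, yet the selected audience is all male,
# because interests proxy for gender in the (hypothetical) data.
audience = [c for c in candidates if similarity(c) == 1.0]
print([(c["id"], c["gender"]) for c in audience])  # [('c1','M'), ('c3','M')]
```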

The human rights risks associated with targeted advertising are clear. To ensure that social media platforms do not contribute to or enable the violation of users’ rights, laws and regulations must strictly limit how users can be targeted. They must also require much greater transparency about the nature and design of platforms’ algorithmic systems, processes, and business models.

Citations
  1. Crocker, Andrew et al. 2019. Who Has Your Back? Censorship Edition 2019. Electronic Frontier Foundation. source (May 16, 2020).
  2. Global Network Initiative. 2020. “Global Network Initiative.” Global Network Initiative. source (May 16, 2020).
  3. Specifically by Section 230 of the 1996 Communications Decency Act. For more, see Maréchal, Nathalie, and Ellery Roberts Biddle. 2020. It’s Not Just the Content, It’s the Business Model: Democracy’s Online Speech Challenge – A Report from Ranking Digital Rights. Washington, D.C.: New America. source (May 7, 2020).
  4. Ranking Digital Rights. 2019. Human Rights Risk Scenarios: Targeted Advertising, Consultation Draft. Washington, D.C.: New America. source
  5. OHCHR. 2011. Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework. Geneva: United Nations Office of the High Commissioner for Human Rights. source
  6. U.N. Human Rights Council. 2018. Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression. A/HRC/38/35, 14. source; Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports.
  7. Ranking Digital Rights. 2019. Corporate Accountability Index. Washington, D.C.: New America. source
  8. Ranking Digital Rights. 2020. 2020 Ranking Digital Rights Corporate Accountability Index Draft Indicators. Washington, D.C.: New America. source
  9. Frenkel, Sheera. 2018. “Facebook Says It Deleted 865 Million Posts, Mostly Spam.” The New York Times. source (May 16, 2020).
  10. Singh, Spandana. 2019. Assessing YouTube, Facebook and Twitter’s Content Takedown Policies. Washington, D.C.: New America. source (May 16, 2020).
  11. Ranking Digital Rights. 2020. The RDR Corporate Accountability Index: Transparency and Accountability Standards for Targeted Advertising and Algorithmic Systems — Pilot Study and Lessons Learned. Washington, D.C.: New America. www.rankingdigitalrights.org/pilot-report-2020
  12. Llansó, Emma J. 2020. “Understanding Automation and the Coronavirus Infodemic: What Data Is Missing?” Center for Democracy and Technology. source (May 16, 2020). Radsch, Courtney J. 2020. “CPJ, Partners Call on Social Media and Content Sharing Platforms to Preserve Data.” source (May 16, 2020).
  13. Silva, Márcio et al. 2020. “Facebook Ads Monitor: An Independent Auditing System for Political Ads on Facebook.” In Proceedings of The Web Conference 2020, Taipei, Taiwan: ACM, 224–34. source (May 16, 2020).
  14. Information collected for targeted advertising purposes enables companies and advertisers to segment audiences in a very granular manner, tailoring messages to very specific attributes including preferences, habits, or traits. As this data is shared across the targeted advertising ecosystem, this in turn enables discrimination against internet users on the basis of protected traits and even the targeting of specific individuals. For more on the relationship between targeted advertising business models and discrimination, see Ranking Digital Rights. 2019. Human Rights Risk Scenarios: Targeted Advertising, Consultation Draft. Washington, D.C.: New America. source
  15. Hasen, Richard L. 2020. “What Happens in November If One Side Doesn’t Accept the Election Results?” Slate. source (May 16, 2020).
  16. U.N. General Assembly. (1948). Universal Declaration of Human Rights (217 [III] A), Article 2. Paris. source
  17. Tobin, Ariana, and Jeremy B. Merrill. 2018. “Facebook Is Letting Job Advertisers Target Only Men.” ProPublica. source (May 11, 2020).
  18. Angwin, Julia, and Terry Parris Jr. 2016. “Facebook Lets Advertisers Exclude Users by Race.” ProPublica. source (May 11, 2020).
  19. Tobin, Ariana. 2018. “Facebook Promises to Bar Advertisers From Targeting Ads by Race Or….” ProPublica. source (May 11, 2020).
  20. Ranking Digital Rights. 2019. Human Rights Risk Scenarios: Targeted Advertising, Consultation Draft. Washington, D.C.: New America. source