Introduction

The basic premise of the digital media economy is no secret. Consumers do not pay money for services. They pay in data—personal data that can be tracked, collected, and monetized by selling advertisers access to aggregated swathes of users who are targeted according to their demographic or behavioral characteristics.1 It is personalized advertising dressed up as a tailored media service powered by the extraction and monetization of personal data.

This “tracking-and-targeting” data economy that trades personal privacy for services has long been criticized as exploitative.2 But the bargain of the zero price proposition has always appeared to outweigh consumer distaste—and even public outrage—for the privacy implications of the business. That finally may be changing.

Public sentiment has shifted from concern over commercial data privacy—a world where third parties exploit consumer preferences—to what we might call “political data privacy,” where third parties exploit ideological biases. The marketplace for targeting online political communications is not new. But the emergence of highly effective malicious actors and the apparent scale of their success in manipulating the American polity has triggered a crisis in confidence in the digital economy because of the threat posed to the integrity of our political system.3 The specter of “fake news” and digital disinformation haunts our democracy. The public reaction to it may well produce a political momentum for regulating technology markets that has never before found traction.4

Since the 2016 presidential election in the United States, there has been a steady drumbeat of revelations about the ways in which the digital media marketplace—and its data-driven business model—is compromising the integrity of liberal democracies.5 The investigations into the prevalence of “fake news” pulled the curtain back on Russian information operations,6 Cambridge Analytica’s privacy-abusing data analytics services,7 bot and troll armies for hire,8 echo chambers of extremist content,9 and the gradual public realization that the economic logic of digital media feeds these cancers. The spread of this disease is global and shows no sign of abating any time soon. And it remains unclear whether the industry’s attempts thus far at engineering prophylactic cures will prove at all helpful.10

The central theme in these scandals is the power of the major digital media platforms to track, target, and segment people into audiences that are highly susceptible to manipulation. These companies have all profited enormously from this market structure, and they have done little to mitigate potential harms. Now that those harms appear to threaten the integrity of our political system, there is a crisis mentality and a call for reform.

Will this explosion of awareness and outrage over violations of “political data privacy” result in a new regulatory regime for the data economy? The positive news is that we have already seen some movement in this direction, most of it triggered by the immense public scrutiny of social media’s interaction with the agents of disinformation. In the few months since the Facebook-Cambridge Analytica revelations, we have watched the leading technology firms take up a number of initiatives that it previously appeared they would never undertake—perhaps most notably, Facebook’s introduction of its new political ad transparency regime.11 But these changes have been instituted only because of the public’s clamoring for them. Alone, they will never be enough to stave off the impact of disinformation operations. And if the historic decline in Facebook and Twitter stock prices in the wake of these reforms is any indication,12 the priorities of Wall Street will continually reassert themselves with vigor.

We believe it is time to establish a new “digital social contract” that codifies digital rights into public law—a set of regulations designed to foster open digital markets while protecting against clear public harms and supporting democratic values. The digital media platforms now dominate our information marketplace, in the process achieving a concentration of wealth and power unprecedented in modern times. As a democratic society, we must now intervene to ensure that first-order common interests come before monopoly rent-seeking—and to steer the power and promise of technology to benefit the many rather than the few. The digital rights agenda should be architected around three simple principles:

  • Transparency: As citizens, we have the right to know who is trying to influence our political views and how they are doing it. We must have explicit disclosure about the operation of the advertising and content curation processes on dominant digital media platforms, including the social impact of algorithms and the presence of non-human accounts.
  • Privacy: As individuals with the right to personal autonomy, we must be given more control over how our data is collected, used, and monetized, especially when it comes to sensitive information that shapes political decision-making.
  • Competition: As consumers, we must have meaningful options to find, send, and receive information over digital media.

This report offers a framing analysis for each of these public service principles and proposes a policy agenda to shape future market development within a rights-based framework. We focus on a set of policy changes designed to address the specific problem of disinformation, proposing both practical regulations to address clear harms and structural reform of business practices that worsen the problem over time. We have been greatly encouraged during the research and writing of this report to see similar conclusions appear in recent reports of thought-leading policymakers.13 In our common project of protecting democracy, the question is less what is to be done and more how to do it. The ideas offered here are intended to identify the first practical steps on a longer path towards shaping the tremendous power of the internet to serve the public interest. The consequences of inaction threaten the integrity of our democracy itself.

Citations
  1. David Streitfeld, Natasha Singer, and Steven Erlanger, “How Calls for Privacy May Upend Business for Facebook and Google,” New York Times, March 24, 2018.
  2. Jane Wakefield, “Facebook and Google need ad-free options says Jaron Lanier,” BBC, April 11, 2018.
  3. George Soros, “The Social Media Threat to Society and Security,” Project Syndicate, February 14, 2018.
  4. Taylor Hatmaker, “Facebook’s latest privacy debacle stirs up more regulatory interest from lawmakers,” TechCrunch, March 18, 2018.
  5. Hunt Allcott and Matthew Gentzkow, “Social Media and Fake News in the 2016 Election,” Journal of Economic Perspectives, Spring 2017.
  6. Alicia Parlapiano and Jasmine C. Lee, “The Propaganda Tools Used by Russians to Influence the 2016 Election,” New York Times, February 16, 2018.
  7. Carole Cadwalladr, “The Cambridge Analytica Files: ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower,” The Guardian, March 18, 2018.
  8. Scott Shane, “How Unwitting Americans Encountered Russian Operatives Online,” New York Times, February 18, 2018.
  9. Jillian D’Onfro, “How YouTube search pushes people toward conspiracy theories and extreme content,” CNBC, March 12, 2018.
  10. James Vincent, “Why AI isn’t going to solve Facebook’s fake news problem,” The Verge, April 5, 2018.
  11. Amy Gesenhues, “Facebook’s new rules for political & issue ads start today,” Marketing Land, May 25, 2018.
  12. See, e.g., Seth Fiegerman, “Facebook and Twitter face uncertain road ahead,” CNN, July 27, 2018.
  13. See esp. U.S. Senator Mark Warner, White Paper, “Potential Policy Proposals for Regulation of Social Media and Technology Firms,” July 30, 2018; and UK House of Commons, Digital, Culture, Media, and Sport Committee, “Disinformation and ‘fake news’: Interim Report,” July 24, 2018.