Meta’s Mandate-Free Governance: How to Think about Facebook and the Oversight Board

Blog Post
March 24, 2023

Last month, the U.S. Supreme Court heard oral arguments in two cases that could upend the internet as we know it. Both center on instances of online radicalization and whether tech companies should be held liable for content published on their platforms that precipitates real-world harm. At issue is whether Section 230 of the 1996 Communications Decency Act shields firms like Google, Meta, and Twitter from liability for user-generated content that appears on their sites.

The core question is one societies worldwide are grappling with: Who is responsible for regulating online content? Legal experts are highly doubtful the Supreme Court will rule in a way that meaningfully alters Section 230 and imposes stricter requirements on firms. And unlike in the European Union, where the Digital Services Act has created the most robust and far-reaching set of enforceable legal obligations on large platform companies in any jurisdiction to date, U.S. lawmakers are apt to continue allowing tech companies to police themselves when it comes to content moderation.

But in practice, these companies are not just governing themselves. In setting up processes and bodies to moderate content, they are exercising governance over publics all over the world without explicitly delegated authority from national sovereigns to do so. This phenomenon, which I call private polity-making, is an unprecedented type of quasi-democratic governance, one best embodied by the Facebook Oversight Board.

In September 2019, facing ire from shareholders and politicians over content moderation decisions on its platform, Facebook (since renamed Meta) unveiled the charter of a Content Oversight Board. The Oversight Board’s stated purpose was “to protect free expression by making principled, independent decisions about important pieces of content and by issuing policy advisory opinions on Meta’s content policies.” The Board launched in October 2020 with hopes of developing private case law, especially on hate speech, that “real courts would eventually cite.” Board members included a former Danish prime minister, human rights lawyers, journalists, and a Nobel laureate. But membership notably did not include computer scientists or software engineers, the type of experts who might be best positioned to weigh in on technical aspects of content moderation such as algorithmic design.

The Board’s greatest test for managing digital harms came when it was just months old. On January 7, 2021, Facebook banned President Donald Trump after an insurrectionist mob stormed the U.S. Capitol in Trump’s name. On January 21, after President Biden took office, Facebook referred Trump’s account suspension to the Board (per the bylaws, Trump could not appeal the suspension himself).

In May 2021, the Board upheld the suspension but admonished Facebook for not specifying its length: “Facebook cannot make up the rules as it goes, and anyone concerned about its power should be concerned about allowing this.” Legal observers wondered “whether Trump’s deplatforming represents the start of a new era in how companies police their platforms, or whether it will be merely an aberration.”

The decision also raised a more profound question: As an experiment in democratic digital governance, how should we think about the Facebook Oversight Board?

On the surface, the Oversight Board is a classic case of a private global governor, one “engaged in authoritative decision-making that was previously the prerogative of sovereign states.” But the Board does not just exercise authority over Facebook’s activities, playing a role that would typically be the purview of a government regulator. It also exercises authority over Facebook’s users: no user votes were cast, and no explicit public mandate or authority was given to the Oversight Board to decide the regulation of content.

In fact, on closer examination, the Oversight Board is emblematic of Facebook’s efforts to expand the scope of private governance. This flavor of private governance is unlike traditional corporate self-regulation, which governs corporate conduct through voluntary codes of conduct (although Facebook participates in those too). It is also different from delegated private governance, as in war contracting, where states designate defense or security firms to fulfill tasks and roles that might normally be performed by a conventional military force. Instead, Facebook generally, and the Board specifically, exemplifies private polity-making, in which corporations govern other publics without explicitly delegated authority.

In this sense, through the Oversight Board, Facebook can act as the kind of gatekeeper typical of a democratic system. Except instead of elected officials who act with the consent of the governed, the gatekeeper here is an unelected corporation governing without legitimate authority. Founder and CEO Mark Zuckerberg has said he does not intend for the Board to become the arbiter of truth: “I don’t think most people want to live in a world where you can only post things that tech companies judge to be 100% true.” Yet Facebook’s private polity-making allows Zuckerberg to admit that no one gave the company authority to define acceptable speech for billions while, at the same time, persisting in policing that very speech.

Facebook has had a complicated relationship with responsibility. When reports first emerged that fake news on Facebook had influenced the 2016 U.S. election, Zuckerberg called the notion a “pretty crazy idea.” It was later revealed that the profiles of up to 87 million users, most of them Americans, had been harvested through a third-party application and passed to Cambridge Analytica, a voter-profiling firm employed by the Trump campaign. After the scandal broke in March 2018, Zuckerberg went on a public apology tour across the United States before testifying to Congress that “We didn't take a broad enough view of our responsibility, and that was a big mistake. … Across the board, we have a responsibility to not just build tools, but to make sure that they're used for good.”

Zuckerberg’s mea culpa was a sign of Facebook’s gradual and reluctant acceptance of its polity-making role. In creating the Oversight Board, Facebook’s leadership recognized the importance of courting public legitimacy: “At the end of the day you can build all the things, but you just have to have enough people that believe in order to make it real,” said Brent Harris, the consultant who led the effort to create the Board.

The Board’s global scope appears vast; it supports 18 languages, three times the UN’s six. Yet its mandate to counter algorithmic harms is narrow. For the first seven months, users could only appeal content takedowns, not content that remained on the site, making it difficult to combat misinformation. Users still cannot challenge issues related to advertising or algorithms. Moreover, the Board reviews only a tiny fraction of the approximately 200,000 posts eligible for appeal each day from automated and human moderation, having issued 35 case decisions and two policy advisory opinions thus far. The Board does publish a transparency report and has made changes to its original charter and bylaws, for instance announcing plans to review more cases.

Facebook’s Oversight Board appears better prepared than its counterparts to manage private governance. In October 2022, Twitter’s CEO Elon Musk proposed a Content Moderation Council, loosely modeled after the Oversight Board, tweeting that “no major content decisions or account reinstatements will happen before the Council convenes.” Yet weeks later, Musk overrode the Council, which had never convened, when he reinstated Trump’s account following a user poll. Musk’s action marked a stark difference between Twitter and Facebook in managing digital harms.

Emerging regulations such as the European Union’s Digital Services Act (DSA) will further sharpen tensions between private and public polity-making, specifically between legal requirements for content moderation and platforms’ terms of service. While Facebook’s Oversight Board has invoked international human rights law in its decisions, it is not a legal body, and it cannot fill the void left by the absence of public internet platform governance. As public alternatives such as the DSA are implemented, conflicts are sure to emerge. Bodies such as the Facebook Oversight Board are innovative attempts to govern what is still an unruly space, but they may be a poor substitute for true democratic governance.

Swati Srivastava is Assistant Professor of Political Science at Purdue University, where she researches public/private relations in global governance, including the political power and responsibility of Big Tech. Her book Hybrid Sovereignty in World Politics was recently published by Cambridge University Press.