Meta's Missteps

A New Database Offers an Unprecedented View into the Company's Reactive, Under-Resourced Moderation Practices
Blog Post
June 16, 2023

The PR headaches keep coming for Meta, the company formerly known as Facebook. Last week, a Kenyan court ordered the company to offer mental health care to a group of Facebook moderators who sued it for union busting. As part of their suit, the moderators cited poor working conditions, low pay, and routine exposure to disturbing content. The case came on the heels of another lawsuit filed by a Kenyan moderator in 2022, who accused Meta, via its staffing partner, of human trafficking. That suit claimed that Meta’s subcontractor lured employees to Kenya under false pretenses, posting job ads that made no mention they would be working as Facebook moderators and handling graphic material.

Meta’s Kenyan troubles reflect the company’s larger struggle to govern platforms like Facebook, Instagram, Messenger, and WhatsApp that, in aggregate, are used by half the human race. Tools for combating disinformation, hate speech, illegal content, and targeted information operations may work in one country but fail in another due to cultural nuance or governmental interference, while effective moderation in languages other than English has been a chronic challenge for the company. What’s more, Meta insiders know about these shortcomings but have struggled to reconcile them with the business imperative to expand and monetize content.

We know this thanks to a 2021 leak of internal documents by Facebook whistleblower Frances Haugen. While working as a product manager, Haugen painstakingly photographed documents on her phone and then handed the files to the Securities and Exchange Commission in the summer of 2021. The trove of some 20,000 screenshots gave the world an inside look at how the social media sausage is made and moderated at Facebook. However, the format of the files made them difficult to search, and they also contained personally identifiable information about Facebook employees. For these reasons, Haugen entrusted a select group of reporters with the task of sifting through the materials while safeguarding against the harms of a mass, unredacted release. The result was an explosion of coverage of Facebook’s role in events ranging from the January 6, 2021, Capitol attack to ethnic conflicts in Ethiopia, Myanmar, and the South Caucasus. Days after the first wave of articles came out in October 2021, Facebook announced its rebranding as Meta.

However, not everything in the documents was covered by the media, and many details about how Facebook deals with hate speech, illegal content, and mis/disinformation on its platforms remain unreported. To address this gap, Harvard’s Public Interest Tech Lab is launching a platform called FBarchive later this summer that will allow the public to explore Haugen’s documents for themselves. Once it opens, FBarchive will offer researchers, policymakers, and the curious an unprecedented look behind the curtain of one of the world’s largest and most consequential companies.

Shop Talk

Many of the Haugen documents consist of conversations among Meta employees on the company’s internal Workplace platform, a discussion board that closely resembles its public Facebook product. Organized and anonymized within FBarchive, the screenshots show employees discussing company culture and policies with a striking degree of frankness. For example, in reply to a post by then-CTO Mike Schroepfer asking “what’s slowing you down?”, one worker wrote “We perpetually need something to fail - often fucking spectacularly - to drive interest in fixing it, because we reward heroes more than we reward the people who prevent a need for heroism.” The post received nearly 900 “likes” and sparked pages of discussion.

The documents reveal shortcomings in Meta’s detection and moderation tools, especially outside the United States, and some countries are flagged as priority risks. One 2021 slideshow points out that “limited localization,” a lack of classifiers for languages other than English, and insufficient “human support” (such as the Kenyan moderators who sued the company) make “most of our integrity systems” much less effective outside the US. The documents highlight countries like Ethiopia and Myanmar as high-risk environments where Meta struggles to detect and counter harmful speech that has fueled violence and harassment in recent years.

The leaks show that, while Meta employs a range of detection tools for sniffing out harmful and coordinated activity on its platforms, these are far from foolproof. For example, a 2018 discussion hypothesizes a link between Macedonia’s political spammers and Russian information operations, with one participant remarking that “If these actors don’t collaborate directly via any of our services there is almost nothing we can do to prove these relationships” [sic]. Several documents from 2020 note that the company’s ability to detect misinformation and foreign intelligence operations on Instagram is “still nascent.”

Meta's Internal Critics

Why does Meta keep fumbling the ball when it comes to protecting its users? The leaks offer several reasons. First, as already noted, Meta employees often blame the company’s culture of reacting to crises rather than focusing on prevention. As one engineer writes, “‘Better engineering’ at facebook is making something poorly, then coming back to fix it later” [sic]. The company acts swiftly in response to technological failures or bad press, but does not reward the unglamorous work of anticipating problems before they escalate.

Second, Meta’s short-term business goals clash with what its employees think would best serve the company in the long run and reduce social harm along the way. One of the documents is a nearly 7,000-word exit memo from Sophie Zhang, a Facebook data scientist who left the company in 2020. Zhang claims that she was the main employee responsible for finding and fighting government-backed information operations and that she personally made “decisions that affected national presidents” and “so many prominent politicians globally that I’ve lost count.” Zhang did this work on top of her core duties, and when she asked for more support, she was told that the company could not spare the resources. In short, Meta does not invest enough in solving problems until it is already too late.

A third theme, which overlaps with both of the previous issues, is Meta’s tendency to prioritize problems that can be quantified over those that cannot. Several employees note that this stems from a Silicon Valley-wide obsession with data and measurable change, which in practice means that workers are incentivized to squash bugs rather than prevent them in the first place.

No Quick Fix

Meta’s highly public missteps have driven a steady decline in Facebook’s brand reputation among US adults, yet to some extent the hurdles the company has faced are inherent to operating the world’s largest social platforms. No matter how many resources Meta throws at moderation problems, some abuse of its products is probably inevitable. To Meta’s credit, it has attempted to address past mistakes, creating, among other measures, an independent Oversight Board to advise on moderation policies. The documents in Harvard’s FBarchive show that many company insiders are well-intentioned people puzzling through some very thorny problems. A Gettr or Gab it is not.

We can’t expect perfection, but we can expect Meta to proactively ameliorate harms on its platforms, and that requires anticipating problems before they escalate. This, in turn, demands a robust dialogue between Meta and its users, regulators, researchers, civil society groups, and other stakeholders who have an interest in the health and safety of our shared digital spaces. Solutions will need to be as nuanced and international as the problems, and resources like FBarchive that shed light on Meta’s inner workings will be invaluable to that end.