Facebook Releases Information on Algorithmic Content Ranking, but More Transparency Is Needed

Blog Post
Oct. 19, 2021

Last month, Facebook released the first public version of its Content Distribution Guidelines (CDGs), which outline some of the types of content the platform demotes in its News Feed. Notably, this is the first time an internet platform has released a comprehensive set of policies related to its algorithmic ranking and recommendation of content. However, this transparency effort is limited because the platform still does not provide adequate insight into how its News Feed curation algorithms work, or whether and how its efforts to demote harmful content have affected the distribution of such content on the service. Facebook must offer these additional disclosures in order to deliver more meaningful transparency and accountability around its News Feed ranking mechanisms.

For many years, the conversation around content moderation and curation on platforms centered on companies removing harmful content from their services. Since 2016, Facebook has broadened this approach and deployed what it calls the “remove, reduce, and inform” strategy. Under this approach, the company states that it removes content that violates its Community Standards, reduces the distribution of problematic content that does not violate the Community Standards, and informs users with supplemental information so they can make educated decisions about what content to click on and consume. According to Facebook, its “reduce” efforts primarily apply to content in the News Feed, aiming to create a better and safer user experience and to encourage publishers to create high-quality content.
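To make the three-pronged strategy concrete, below is a minimal sketch of how such a triage might be structured. It is purely illustrative: the flags on the Post object are hypothetical stand-ins for whatever classifiers and human review processes Facebook actually uses, which it has not disclosed.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()  # violates the Community Standards: taken down
    REDUCE = auto()  # problematic but non-violating: demoted in News Feed ranking
    INFORM = auto()  # left up, but shown with supplemental context (e.g., labels)
    NONE = auto()    # distributed normally

@dataclass
class Post:
    violates_standards: bool = False  # hypothetical classifier/review outcome
    is_borderline: bool = False       # hypothetical "problematic but non-violating" flag
    needs_context: bool = False       # hypothetical flag for posts that warrant a label

def triage(post: Post) -> Action:
    """Illustrative remove/reduce/inform triage; not Facebook's actual logic."""
    if post.violates_standards:
        return Action.REMOVE
    if post.is_borderline:
        return Action.REDUCE
    if post.needs_context:
        return Action.INFORM
    return Action.NONE

print(triage(Post(is_borderline=True)))  # Action.REDUCE
```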

The CDGs, at a high level, outline the policies and rationale that Facebook deploys when reducing the distribution of some forms of problematic and low-quality content on the News Feed. The CDGs cover a broad range of content, including clickbait, engagement bait, sensationalist health content and commercial health posts, fact-checked misinformation, and unsafe reporting about suicide. OTI and other civil society groups have long pressed companies for more transparency surrounding their content curation algorithms. Facebook’s release of its CDGs could therefore put pressure on other platforms to release similar policy information, marking a positive change.

However, while the CDGs are a good first step towards providing transparency around algorithmic ranking on the Facebook News Feed, their usefulness is limited by the fact that the platform has yet to provide adequate insight into key elements of the overall algorithmic ranking ecosystem. Going forward, Facebook should focus on providing transparency in three key areas:

First, Facebook must clearly define terms such as “demote” and “amplify,” which are often used when discussing the platform’s “reduce” approach to tackling harmful content. According to Facebook, the company uses algorithms to sort through the inventory of all potential pieces of content that could appear on a user’s News Feed, assign a relevancy score to each piece of content, and determine the order in which content in the inventory is presented in the News Feed. When the platform demotes, or down-ranks, a piece of content, that content is ranked lower in the overall presentation of content on the News Feed, but it is not removed or hidden. Therefore, if a user kept scrolling through their News Feed, they would eventually be able to see the demoted content, whether it is clickbait or health-related misinformation. The platform has not yet provided clear definitions for terms like “demotion.” It is therefore unclear whether a demoted piece of content is sent to the very bottom of the inventory of content, or to somewhere in the middle, where it could still be easily reached by a user scrolling through the News Feed for an extended period of time. It is also unclear whether low-quality content like engagement bait is demoted to the same extent as harmful content like misinformation. Facebook must also clarify what it means when it says it “promotes” certain types of content, such as information from authoritative sources or content that is “valuable,” in the News Feed. Does this mean that such content appears at the very top of a user’s News Feed, within the top 25 posts, or something else? These definitions are critical to understanding how Facebook ranks positive and harmful content in its News Feed and what impact these approaches have on content consumption on the platform. Such explanations from platforms could also help lawmakers around the world who are considering regulating internet platforms’ content curation algorithms develop clear guidance.
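To illustrate why these definitions matter, below is a minimal, purely illustrative sketch (not Facebook’s actual code, scores, or penalties) of two very different things “demote” could mean: multiplying a post’s relevancy score by a penalty factor, or pushing the post below all non-demoted posts. The post names, scores, and the 0.5 penalty are invented for the example.

```python
from typing import List, Tuple

# Each item is (post_id, relevancy_score, is_demoted); all values are hypothetical.
Inventory = List[Tuple[str, float, bool]]

def rank_with_penalty(inventory: Inventory, penalty: float = 0.5) -> List[str]:
    """Reading 1: 'demote' = multiply the relevancy score by a penalty factor.
    A highly relevant but demoted post can still land near the top of the feed."""
    scored = [(pid, score * (penalty if demoted else 1.0)) for pid, score, demoted in inventory]
    return [pid for pid, _ in sorted(scored, key=lambda x: x[1], reverse=True)]

def rank_to_bottom(inventory: Inventory) -> List[str]:
    """Reading 2: 'demote' = push the post below every non-demoted post,
    regardless of its relevancy score."""
    kept = sorted((i for i in inventory if not i[2]), key=lambda x: x[1], reverse=True)
    demoted = sorted((i for i in inventory if i[2]), key=lambda x: x[1], reverse=True)
    return [pid for pid, _, _ in kept + demoted]

inventory = [("fact_checked_rumor", 0.9, True), ("friend_photo", 0.6, False), ("news_article", 0.4, False)]
print(rank_with_penalty(inventory))  # ['friend_photo', 'fact_checked_rumor', 'news_article']
print(rank_to_bottom(inventory))     # ['friend_photo', 'news_article', 'fact_checked_rumor']
```

Under the first reading, a highly relevant but demoted post can still sit near the top of the feed; under the second, it cannot. Which of these behaviors, or what intermediate behavior, Facebook’s “demotion” actually implements is exactly what the CDGs leave unspecified.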

Second, while the CDGs outline some of the categories of content that Facebook demotes, they do not outline the categories of content that the News Feed prioritizes. As whistleblower Frances Haugen outlined last week, in 2018 Facebook changed the configuration of its News Feed algorithms to prioritize “engaging” content. This change has resulted in some sensationalist and harmful content appearing higher in the News Feed and being consumed more widely. According to Facebook, its ranking algorithms consider hundreds of thousands of signals, including who posted a piece of content, when it was posted, and how fast a user’s internet connection is, to determine how content is ranked in the News Feed. While it may not be feasible—or even useful—for the company to disclose a full list of the signals it considers when ranking content in the News Feed, Facebook should provide transparency around which of these signals have the most influence over how content is presented to users on the News Feed. Additionally, the company should explain how the signals it uses to rank content affect where a demoted piece of content appears in the News Feed. For example, if a piece of misinformation is debunked but is highly engaging, will it still rank as low in the overall presentation of content, or will it rank higher?
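To make this concern concrete, here is a hedged sketch using entirely invented signals and weights (Facebook has not disclosed its actual signals, their weights, or how demotions interact with them): if a signal such as predicted engagement carries enough weight, a demoted but highly engaging post can still outscore ordinary content.

```python
# Hypothetical signal weights; the real signals, their weights, and their number are not public.
WEIGHTS = {
    "predicted_engagement": 5.0,
    "poster_affinity": 1.0,
    "recency": 0.5,
}
DEMOTION_PENALTY = 0.5  # assumed multiplicative penalty applied to demoted posts

def score(signals: dict, demoted: bool) -> float:
    """Weighted sum of ranking signals, with an assumed penalty for demoted posts."""
    base = sum(WEIGHTS[name] * value for name, value in signals.items())
    return base * (DEMOTION_PENALTY if demoted else 1.0)

# A debunked but highly engaging post can still outscore a benign, less engaging one.
viral_misinfo = score({"predicted_engagement": 0.95, "poster_affinity": 0.2, "recency": 0.9}, demoted=True)
friend_update = score({"predicted_engagement": 0.20, "poster_affinity": 0.9, "recency": 0.5}, demoted=False)
print(viral_misinfo > friend_update)  # True under these assumed weights
```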

Ranking and recommendation algorithms can have a significant impact on the content users see and engage with, and therefore on how they see the world. In August, Facebook released its first Widely Viewed Content Report, which outlines the top 20 public domains, links, pages, and posts that users in the United States viewed during the beginning of 2021. This was a helpful first step towards transparency, but the report does not shed light on why users were seeing this content, what values the company prioritizes when making content recommendations to users, or how these decisions affect user behaviors and opinions. Facebook needs to do more in this regard.

Lastly, Facebook must provide more transparency around the impact of its content demotion efforts. The platform regularly touts the “reduce” approach when discussing how it combats the spread of COVID-19 misinformation and election disinformation. However, the company has provided very little data to demonstrate that its efforts to reduce the distribution of such content on the News Feed have actually succeeded in preventing its consumption and in decreasing its harmful effects on the service. This data is critical for demonstrating accountability and for justifying the use of the “reduce” approach.
