Reddit

Reddit is a popular social media platform that dubs itself “the front page of the internet.” Although the platform has a small user base (approximately 330 million monthly active users)1 compared to platforms such as Facebook and YouTube, it is recognized as one of the online services on which viral content is frequently created and spread.2 A significant amount of the conversation around the spread of misinformation and disinformation during the 2016 U.S. presidential election focused on larger internet platforms. However, research indicates that Reddit also played an instrumental role in this false information ecosystem.3 The platform has not been a focal point of ongoing policy conversations, potentially because only approximately 4 percent of Americans use Reddit, and the majority of its users are U.S.-based.4

Despite this lack of attention, Reddit’s unique format fosters an environment in which misinformation and disinformation related to voter suppression can easily spread. Reddit relies on a decentralized model of content moderation, in which the majority of content policy development and subsequent content moderation is carried out by volunteer user moderators, colloquially referred to as “Mods,” who are responsible for specific subreddits. Reddit also maintains high-level content guidelines for the service, which are enforced by a team of employee moderators known as Admins.5 This approach to content moderation has allowed niche, localized communities and norms to prosper on the platform. However, these individualized approaches have also enabled misinformation and disinformation to spread among subreddits.6 As a result, some of the most well-known conspiracy theories and misinformation-laden stories, such as the 2016 Pizzagate conspiracy7 and the QAnon conspiracy theory,8 have gone viral on the platform. Reddit is also home to misleading election-related information that seeks to suppress voting. The company, however, does not have specific policies covering voter suppression content, although it says that existing policies, such as those on impersonation, could cover such content.9 These policies, introduced in January 2020, prohibit the impersonation of an individual or entity “in a misleading or deceptive manner.” The policy applies to instances in which a Reddit account is used to impersonate someone, a domain is used to mimic others, or deepfakes or other manipulated media are used to mislead users or are misleadingly attributed to an entity or person.
Deepfake technology allows for the creation of falsified and manipulated content that could be used to spread misinformation by making it appear as if an individual is doing or saying something they did not actually do.10 Reddit’s policy creates exceptions for parody and satire, and the company says it will take context into consideration when applying the policy.11

According to Reddit, the company introduced its impersonation policy to guard against behavior the platform had not yet seen in large numbers, but could in the future.12 According to the company’s latest transparency report, impersonation accounted for 0.6 percent of content removed and 1.4 percent of accounts removed or suspended by Admins for content policy violations.13 Reddit says it could also use these policies in certain instances to clamp down on misinformation campaigns. In 2018, the company said it had identified 944 “suspicious accounts” it associated with the Internet Research Agency (IRA),14 a Russian-backed professional troll farm and online influence operations company that has carried out campaigns to support Russian business and political aims.15 Expert analysis after the 2016 U.S. presidential election found that the IRA was responsible for numerous voter suppression campaigns on social media platforms, including Reddit, which targeted Hillary Clinton voters, particularly voters of color.16

Although Reddit takes a decentralized approach to content moderation, the company plays a more active role in moderating advertising on the platform. This includes steps to address the potential spread of misleading content related to voter suppression and elections in its advertising. The company bans “deceptive, untrue, or misleading advertising” on the platform, including in political ads. Additionally, Reddit manually reviews and approves the messaging and creative content of each ad that runs on the platform. Reddit’s political ads policies apply generally to ads that relate to campaigns or elections, solicit political donations, or encourage voting or voter registration, as well as to issue or advocacy ads on topics of legislative or political importance, among other things. The company only permits ads from candidates and advertisers who are inside the United States and who are running ads at the federal level. The company also explicitly states that discouraging voting or voter registration through its advertising services is prohibited.17 Further, Reddit says that all political ads must feature “paid for by” disclosures within the ads themselves, must comply with all relevant laws and regulations, and must align with Reddit’s content policies.18

In order to provide transparency around its political ads operations and enforcement mechanisms, Reddit launched a subreddit dedicated to political ads that the company itself moderates. The subreddit includes data on all political ad campaigns that have run on the platform since January 1, 2019, as well as data on individual advertisers, their targeting selections, the impressions ads receive, and instances in which Reddit mistakenly approved ads.19 This data on advertisements, and on Reddit’s errors during the enforcement process, is valuable for understanding how the company enforces its policies and how these practices shape the political ads landscape on the platform, as well as the misleading information ecosystem within it. Going forward, the platform should share more granular engagement data, such as the number of upvotes, downvotes, and comments political ads receive.20

Reddit has also responded to misinformation and disinformation on its platform by introducing “misinformation” as a category under which Mods can flag posts and comments. This reporting flow surfaces flagged content to Admins.21 According to the platform, misinformation encompasses both “malicious and coordinated attempts to spread false information” and users inadvertently spreading false information.22 In the context of COVID-19-related misinformation and disinformation, Reddit says that unless a subreddit is specifically dedicated to spreading misleading information, the company will always aim to educate and cooperate with subreddits to address these forms of content, and will only use enforcement actions such as banning or “quarantining” subreddits if these cooperative efforts fail. When a community is quarantined, it does not appear in search results.
Additionally, if a user tries to visit a quarantined community, they are notified that the subreddit may contain misleading content and must explicitly opt in to viewing it.23 However, it is unclear whether the same practices apply to other categories of misinformation and disinformation.24

Along these lines, the company is also monitoring for content manipulation efforts, particularly ahead of the 2020 U.S. presidential election. One of the most common avenues for content manipulation on the platform takes advantage of the content voting system. On Reddit, users can rate each piece of content by “upvoting” or “downvoting” it. Reddit’s algorithms use these votes to assign each post a score and rank posts in the news feed.25 In this way, some of the content on the platform is community-curated, and trends often emerge as a result of this democratic process. However, as experts have outlined, this system can be gamed by users who create several accounts to downvote or upvote a post, by coordinated attacks on certain forms of content,26 and through existing features such as “gilding,” a kind of “super-vote” generally available to users who have a Reddit Gold subscription or who purchase Reddit Coins.27 In line with its efforts to better understand and track content manipulation on the platform, the company has shared public information about the kinds of coordinated influence campaigns it has detected, such as one led by a Russian-connected group known as Secondary Infektion, on the r/redditsecurity subreddit.28 This is a valuable form of transparency that enables users to comment and ask questions about how these types of content spread on the platform and what the company is doing to address these issues.
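To make concrete why this vote-based curation is a target for manipulation, the sketch below illustrates a score function in the spirit of Reddit’s historically open-sourced “hot” ranking, in which net votes count logarithmically while recency earns a linear bonus. The exact constants and formula here are illustrative only, not a statement of the platform’s current algorithm.

```python
from datetime import datetime, timezone
from math import log10

# Reference epoch used by the historically published "hot" formula;
# an arbitrary anchor point, since only relative ordering matters.
EPOCH = datetime(2005, 12, 8, 7, 46, 43, tzinfo=timezone.utc)

def hot_score(upvotes: int, downvotes: int, posted: datetime) -> float:
    """Illustrative vote-and-recency score: the first 10 net votes
    matter as much as the next 100, and newer posts get a time bonus."""
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))          # diminishing returns on votes
    sign = 1 if net > 0 else -1 if net < 0 else 0
    seconds = (posted - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)
```

Under this kind of formula, a post needs roughly ten times the net votes of one submitted about half a day later to outrank it, which is why the manipulation tactics described above, such as coordinated early downvoting from sockpuppet accounts, are effective: suppressing a post’s score in its first hours is enough to bury it.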
Further, the company began issuing a security report in Q4 2019 that focuses on efforts to keep the platform and user accounts safe, and includes content manipulation data such as the number of reports, Admin removals, Admin account sanctions, and Admin subreddit sanctions. Prior to introducing its new hate speech policy, the company also expanded the data in the June 2020 report to include figures on Admin account sanctions and Admin subreddit sanctions for abuse.29 In addition, the company outlined in its security report that it is working on detecting bots on the platform and providing clear guidance around their use. These policies will aim to address malicious bots that can spread spam and abusive content at scale, manipulate the amplification of content on the service by gaming the voting system, and more. They will not prohibit bots such as those used by Mods for content moderation purposes.30

Although Reddit has a relatively comprehensive set of policies and transparency practices to address the spread of potential voter suppression misinformation and disinformation on its platform, the company can do more. In particular, given that the platform was an active hotspot for the spread of voter suppression content during the 2016 elections, the company should expand its content policies to explicitly address voter suppression content, as it has done in its ads policy. Reddit’s existing policies, including its impersonation policy and its new hate speech policy, could address some of these issues. However, the lack of one central policy that lays out the company’s stance and practices around this form of content could result in gaps and prove extremely problematic ahead of the 2020 elections. To this end, the company should also provide greater transparency, through its transparency report, around how much content has been removed under such a centralized policy.
The company should also alert users who have come into contact with election-related misinformation and disinformation campaigns, particularly content that aims to suppress voting, and the company should clarify what the legitimate parameters around voting and voter registration are. Finally, given that content moderation efforts do not always yield entirely accurate results, the company should notify affected users with information related to the policies they violated, and provide them with the opportunity to appeal the decision.

Citations
  1. Lauren Feiner, "Reddit Users Are the Least Valuable of Any Social Network," CNBC, August 14, 2019, source.
  2. Laura Hautala, "Reddit Was a Misinformation Hotspot in 2016 Election, Study Says," CNET, December 19, 2017, source.
  3. Hautala, "Reddit Was a Misinformation".
  4. Hautala, "Reddit Was a Misinformation".
  5. Singh, Everything in Moderation.
  6. Hautala, "Reddit Was a Misinformation".
  7. BBC Trending, "The Saga of 'Pizzagate': The Fake Story That Shows How Conspiracy Theories Spread," BBC, December 2, 2016, source.
  8. Adrienne LaFrance, "The Prophecies of Q," The Atlantic, June 2020, source.
  9. Newman, "Tech Platforms".
  10. Lisa Kaplan, "How Campaigns Can Protect Themselves From Deepfakes, Disinformation, and Social Media Manipulation," Brookings Institution, last modified January 10, 2019, source.
  11. U/LastBluejay, "Updates to Our Policy Around Impersonation," r/redditsecurity, last modified January 9, 2020, source.
  12. U/LastBluejay, "Updates to Our Policy," r/redditsecurity.
  13. Reddit, Transparency Report 2019, source.
  14. Lucas Matney, "Reddit Has Banned 944 Accounts Linked to the IRA Russian Troll Farm," TechCrunch, April 18, 2018, source.
  15. Office of the Director of National Intelligence, Background to "Assessing Russian Activities and Intentions in Recent US Elections": The Analytic Process and Cyber Incident Attribution, January 6, 2017, source.
  16. Young Mie Kim, "New Evidence Shows How Russia's Election Interference Has Gotten More Brazen," Brennan Center for Justice, last modified March 5, 2020, source.
  17. U/con_commenter, "Changes to Reddit's Political Ads Policy," r/announcements, last modified April 13, 2020, source.
  18. U/con_commenter, "Changes to Reddit's," r/announcements.
  19. Spandana Singh, "Reddit's Intriguing Approach to Political Advertising Transparency," Slate, May 1, 2020, source.
  20. Singh, "Reddit's Intriguing".
  21. U/worstnerd, "Misinformation and COVID-19: What Reddit is Doing," r/ModSupport, last modified April 15, 2020, source.
  22. U/worstnerd, "Misinformation and COVID-19," r/ModSupport.
  23. U/worstnerd, "Misinformation and COVID-19," r/ModSupport.
  24. U/worstnerd, "Misinformation and COVID-19," r/ModSupport.
  25. Spandana Singh, Rising Through the Ranks: How Algorithms Rank and Curate Content in Search Results and on News Feeds, October 21, 2019, source.
  26. Candice Wang, "Revisiting Reddit's Attempt to Stop 'Secondary Infecktion' Misinformation Campaign from Russia," The Free Internet Project, last modified July 20, 2020, source.
  27. Wang, "Revisiting Reddit's," The Free Internet Project.
  28. U/worstnerd, "Additional Insight into Secondary Infektion on Reddit," r/redditsecurity, last modified April 8, 2020, source.
  29. U/worstnerd, "Reddit Security Report – June 18, 2020," r/redditsecurity, last modified June 18, 2020, source.
  30. U/worstnerd, "Reddit Security," r/redditsecurity.
