Recommendations

As health experts around the world have indicated, the COVID-19 pandemic is likely to last for some time. It is therefore imperative that companies plan how to combat the spread of misinformation and disinformation related to the virus while also providing transparency and accountability around their efforts. The recommendations below center on how companies can improve their efforts to connect users to authoritative information, moderate or reduce the spread of misleading content, alter and enforce advertising policies, and provide transparency around their efforts during the pandemic. This section also includes recommendations for how U.S. policymakers can encourage further accountability and support efforts to combat the spread of misinformation during this time.

Internet Platforms

Connecting Users To and Uplifting Authoritative Information:

In the context of efforts to combat COVID-19 and health-related misinformation and disinformation, platforms should:

  • Partner with reputable fact-checking organizations and authoritative entities such as the WHO, CDC, and public health organizations to verify or refute information circulated through organic content as well as advertisements.
  • Fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of COVID-19-related misinformation and disinformation.
  • Educate users about potential attacks and scams related to COVID-19 that may appear on the platform and on methods to avoid becoming a victim of such nefarious efforts.
  • Institute a public interest exception policy that enables companies to leave content that has been posted by world leaders and elected and other government officials on their services, even if the content has been fact-checked and deemed to contain misinformation. In such instances, the company should append a label to the content that provides additional context, including notice that the content has been fact-checked and contains misleading information. Companies should also direct users viewing such content to authoritative sources of information. However, where companies determine that content posted by such officials could result in imminent harm, they should not apply this public interest policy, and should instead promptly remove the content as they would with any other user.
  • Provide adequate notice to users who have engaged with misleading content related to COVID-19 in the past and direct them to authoritative sources of information.
  • Conduct periodic reviews of algorithmic recommendation and ranking tools, and recalibrate them as necessary so they do not direct users to or surface misleading content when users search for COVID-19-related topics.

Moderating and Reducing the Spread of Misleading Information:

Companies that have specific policies related to how COVID-19 or health-related misinformation and disinformation content is moderated or downranked should:

  • Remove or reduce the spread of content that has been fact-checked and deemed to contain misinformation.
  • Publish a detailed description of these policies online including examples of how these policies are enforced. Companies should also provide public notice if these policies change and should include an archive of past policies.
  • Explain to users the extent to which content that violates these policies is reviewed and moderated by human reviewers and by automated tools. Users should be notified of any updates to these procedures.
  • Provide adequate notice to users who have had their content removed or who have had their content downranked.
  • Give users the opportunity to appeal moderation decisions. Given that many companies have chosen to increase their reliance on automated tools to detect and remove content at scale during the pandemic, they should enable users to appeal moderation decisions that have resulted in the removal or suspension of their content and accounts. This appeals process should be timely and should enable users to provide additional information on their case and have it reviewed by someone not involved in the original decision. Users who flag content and accounts should also have access to an appeals process. In addition, given the high potential for error when relying more heavily on automated tools, companies should consider not permanently suspending accounts during the pandemic.

Altering and Enforcing Advertising Policies:

Companies that have specific policies related to how COVID-19 or health-related information appears in advertisements should:

  • Publish a detailed outline of their ad content and targeting policies online including examples of how these policies are enforced. Companies should also provide public notice if these policies change and should include an archive of past policies.
  • Explain how the company’s ad content and targeting policies are enforced and whether and how this process is reliant on automated tools and human review.
  • Establish and disclose a comprehensive process to review ads and targeting categories that are related to COVID-19, as they can have significant real-life consequences. Companies’ policies should require them to review ads before they are permitted to run on a platform. The company should disclose whether and how this process is reliant on automated tools and human review.
  • Give advertisers who have their ads flagged or removed for violating COVID-19-specific advertising policies the opportunity to appeal these decisions. Given that companies are increasingly relying on automated tools to review ads during the pandemic, an appeals process is necessary to ensure legitimate advertisers are not undermined.

Providing Transparency Around COVID-19-Related Moderation and Enforcement Efforts:

Companies that have specific moderation and advertising policies related to COVID-19 should:

  • Publish a COVID-19-specific transparency report following the pandemic that outlines the scope and scale of content moderation efforts and efforts to reduce the spread of misinformation during this period. At a minimum, this should include data on:
    • The number of accounts flagged, the number of accounts suspended, and the number of accounts removed
    • The number of pieces of content that were flagged, the number of pieces of content that were removed, the number of pieces of content that were downranked, and the number of pieces of content that were left up but labeled
    • How much of the content that was flagged was identified proactively through automated tools and how much of the content was identified through human flags (from users, Trusted Flaggers, etc.)
    • How much of the content that was removed or downranked was identified proactively through automated tools and how much of the content was identified through human flags (from users, Trusted Flaggers, etc.)
    • A breakdown of content that was removed or downranked by product
    • A breakdown of content that was removed or downranked by format (e.g. video, image, text)
    • A breakdown of content that was removed or downranked by category of misinformation/disinformation (e.g. fake cures, public health, false origin narratives, claims that impact public safety, etc.)
    • The number of appeals received for action taken against content and accounts in this category
    • The number of pieces of content restored and the number of accounts restored as a result of appeals in this category
    • The number of pieces of content restored and the number of accounts restored as a result of proactive recognition of errors by the company
  • For companies operating a marketplace or e-commerce service: publish a COVID-19-specific transparency report that includes data on the number of listings the company has removed and the number of sellers it has banned for violating its COVID-19-specific policies as well as its preexisting commerce policies.
  • Provide periodic public updates on content moderation, advertising policy enforcement, and commerce policy enforcement efforts during the pandemic. This is particularly important given that the pandemic is likely to be ongoing for some time.
  • Expand their reporting to include information on their efforts to remove or reduce the spread of misleading content in their general transparency reports, if they do not currently publish this information.
  • Create a publicly available online database of all ads in categories related to COVID-19 that a company has run on its platform. This database should include search functionality. In order to protect privacy, the information in this database should not enable the identification of users who received the ad. At a minimum, this database should disclose the following information about each of the ads in the database, including ads that were approved in error:
    • The format of the ad (e.g. text, video, etc.)
    • The name of the advertiser
    • What region the ad was run in
    • The amount spent on the ad
    • The time period during which an ad was active
    • Granular engagement and interaction information, such as how many users saw the ad, and the number of likes, shares, and views that an ad received
    • What targeting parameters the advertiser selected
    • What categories of users the ad was eventually delivered to (i.e. the targeting parameters the ad delivery system ultimately selected and optimized for)
    • Whether the ad was delivered to custom sets of users or ones generated by an automated system
  • Publish a COVID-19-specific transparency report that provides a granular overview of the platform’s advertising policy enforcement procedures. At a minimum, this transparency report should disclose the following information for ads that have been flagged or removed from the platform during the pandemic:
    • The total number of ads flagged for violating the platform’s preexisting advertising content policies and its COVID-19-specific content policies
    • The total number of ads removed for violating the platform’s preexisting advertising content policies and its COVID-19-specific content policies
    • The total number of ads flagged for violating the platform’s preexisting ad targeting policies and any COVID-19-specific targeting policies
    • The total number of ads removed for violating the platform’s preexisting ad targeting policies and any COVID-19-specific targeting policies
    • A separate breakdown of the ads and accounts flagged and removed for violating the platform’s preexisting advertising content policies and COVID-19-specific content policies by:
      • The advertising content policy they violated
      • The format of the ad’s content (e.g. text, audio, image, video, live stream)
      • The country of the advertiser
      • For companies that operate more than one platform, the product or service on which the ad was run
      • The detection method used (e.g. user flag, automated tool). Note that the identity of individual flaggers should not be revealed
    • A separate breakdown of the ads and accounts flagged and removed for violating the platform’s ad targeting policies by:
      • The ad targeting policy they violated
      • The format of the ad’s content (e.g. text, audio, image, video, live stream)
      • The country of the advertiser
      • For companies that operate more than one platform, the product or service on which the ad was run
      • The detection method used (e.g. user flag, automated tool). Note that the identity of individual flaggers should not be revealed

Policymakers

The U.S. government is limited in the extent to which it can direct platforms' decisions about what content to permit on their sites. However, in the context of the pandemic, the U.S. government can take certain steps to encourage greater accountability from platforms and to support efforts to combat the spread of misinformation.

  • Policymakers should enact rules to require greater transparency from online platforms, including regular reporting regarding their content moderation, ad targeting and delivery, and commerce enforcement efforts.
  • The FTC should enforce Section 5(a) of the FTC Act, as appropriate, against businesses that engage in unfair or deceptive trade practices during the pandemic, including through online ad campaigns and e-commerce.
  • Government agencies and representatives should ensure that they are disseminating verified information related to the pandemic and are not contributing to the spread of unproven or debunked information.
  • Government public health officials (such as those from the CDC) and relevant agencies (such as the U.S. Department of Health and Human Services and the Federal Emergency Management Agency) should collaborate with internet platforms to provide and promote verified and legitimate information related to the pandemic on their platforms. These entities should also help debunk misleading claims and information using their own online accounts.
  • Given the increase in misinformation-fueled discrimination, policymakers should clarify that all offline anti-discrimination statutes apply in the digital environment. Congress and state legislatures should also enact appropriate legislation where necessary in order to fill gaps or clarify the applicability of such laws.
  • Policymakers should fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of COVID-19-related misinformation and disinformation across the globe.
