This past March, Idris Elba went live on Twitter to tell the world he had tested positive for COVID-19. But he had more to say than just his diagnosis: “Something that’s sort of scaring me when I read the comments and see some of the reactions is: My people—black people, black people—please, please understand that coronavirus … you can get it, all right?” A related headline shared on Facebook read, “People Of Color May Be Immune To the Coronavirus Because of Melanin.” Indeed, the myth that black people cannot get the coronavirus was so prevalent that one major city dedicated public health resources to an ad campaign aimed at dispelling the rumor.
This is just one example illustrating how COVID-19 has sparked a significant new wave of online misinformation and disinformation across the globe. By April, one in three people across Argentina, Germany, South Korea, Spain, the United Kingdom, and the United States said they had seen false or misleading information on social media linked to the coronavirus.
At a time when the public needs accurate information to combat the pandemic, platforms have invested significant resources in connecting users to authoritative information, moderating and reducing the spread of misleading content, and altering advertising policies to prevent exploitation and the marketing of misleading products. Although these efforts are valuable, platforms need to do more to provide transparency and accountability around how these initiatives are implemented and how they affect users and their online expression.
In our new report, New America’s Open Technology Institute (OTI) explores how eight internet platforms—Amazon, Facebook, Google, Reddit, TikTok, Twitter, WhatsApp, and YouTube—are addressing the rapid spread of COVID-19-related misinformation and disinformation on their services. The report concludes with recommendations on how these platforms can improve the efficacy of their efforts and provide greater transparency to their users and the public. It also includes recommendations on how U.S. policymakers can encourage further accountability and support efforts to combat the spread of misinformation and disinformation during this time.
The report recommends that internet platforms:
- Partner with reputable fact-checking organizations and authoritative entities such as the WHO, CDC, and public health organizations to verify or refute information circulated through organic content as well as advertisements.
- Fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of COVID-19 related misinformation and disinformation.
- Educate users about potential attacks and scams related to COVID-19 that may appear on the platform, and about how to avoid falling victim to such schemes.
- Institute a public interest exception policy that enables companies to leave content posted by world leaders and elected and other government officials on their services, even if the content has been fact-checked and deemed to contain misinformation. In such instances, the company should append a label to the content that provides additional context. In cases where the content posted by such officials could result in imminent harm, companies should not apply the public interest policy and instead promptly remove the content.
- Provide adequate notice to users who have engaged with misleading content related to COVID-19 in the past and direct them to authoritative sources of information.
- Conduct regular reviews of algorithmic recommendation and ranking tools, and recalibrate them as necessary so they do not surface or direct users to misleading content when they search for COVID-19 related topics.
- Remove or reduce the spread of content that has been fact-checked by vetted fact-checking organizations and deemed to contain misinformation.
- Publish a detailed description of these policies online, including examples of how they are enforced. Companies should also provide public notice when these policies change and should maintain an archive of past policies.
- Explain to users the extent to which content that violates these policies is reviewed and moderated by human reviewers versus automated tools. Users should be notified of any updates to these procedures.
- Provide adequate notice to users who have had their content removed or who have had their content downranked.
- Give users a timely opportunity to appeal moderation decisions that have resulted in the removal or suspension of their content or accounts. Users who flag content and accounts should also have access to an appeals process.
- Publish a detailed outline of their ad content and targeting policies online, including examples of how these policies are enforced. Companies should also provide public notice when these policies change and should maintain an archive of past policies.
- Explain how the company’s ad content and targeting policies are enforced and whether and how this process is reliant on automated tools and human review.
- Establish and disclose a comprehensive process to review ads and targeting categories that are related to COVID-19, as they can have significant real-life consequences. Companies’ policies should require them to review ads before they are permitted to run on a platform. The company should disclose whether and how this process is reliant on automated tools and human review.
- Give advertisers who have their ads flagged or removed for violating COVID-19 specific advertising policies the opportunity to appeal these decisions. Given that companies are increasingly relying on automated tools to review ads during the pandemic, an appeals process is necessary to ensure legitimate advertisers are not undermined.
- Publish a COVID-19 specific transparency report following the pandemic that outlines the scope and scale of content moderation efforts and efforts to reduce the spread of misinformation during this period.
- Publish a COVID-19 specific transparency report that includes data on the number of listings the company has removed and the number of sellers it has banned for violating its COVID-19 specific policies as well as its pre-existing commerce policies. This recommendation applies to companies operating a marketplace or e-commerce service.
- Provide periodic public updates on content moderation, advertising policy enforcement, and commerce policy enforcement efforts during the pandemic. This is particularly important given that the pandemic is likely to be ongoing for some time.
- Expand their reporting to include information on their efforts to remove or reduce the spread of misleading content in their general transparency reports, if they do not currently publish this information.
- Create a publicly available online database of all ads in categories related to COVID-19 that a company has run on its platform. This database should include search functionality. In order to protect privacy, the information in this database should not enable the identification of users who received the ad.
- Publish a COVID-19 specific transparency report that provides a granular overview of the platform’s advertising policy enforcement procedures.
The U.S. government is limited in the extent to which it can direct platforms’ decisions about what content to permit on their services. However, in the context of the pandemic, it can take certain steps to strengthen platform accountability and to support efforts to combat the spread of misinformation. In particular:
- Policymakers should enact rules requiring greater transparency from online platforms, including regular reporting on their content moderation, ad targeting and delivery, and commerce enforcement efforts.
- The FTC should enforce Section 5(a) of the FTC Act as appropriate against businesses that engage in unfair or deceptive trade practices during the pandemic, including through online ad campaigns and e-commerce.
- Government agencies and representatives should ensure that they are disseminating verified information related to the pandemic and are not contributing to the spread of unproven or debunked information.
- Government public health officials (such as those from the CDC) and relevant agencies (such as the U.S. Department of Health and Human Services and the Federal Emergency Management Agency) should collaborate with internet platforms to provide and promote verified and legitimate information related to the pandemic on their platforms. These entities should also help debunk misleading claims and information using their own online accounts.
- Given the increase in misinformation-fueled discrimination, policymakers should clarify that all offline anti-discrimination statutes apply in the digital environment. Congress and state legislatures should also enact appropriate legislation where necessary to fill gaps or clarify the applicability of such laws.
- Policymakers should fund vetted fact-checking organizations around the world to ensure that fact-checking efforts can adequately tackle the growing volume of COVID-19 related misinformation and disinformation across the globe.