From the campaign to the coronavirus, the misinformation infodemic is compromising our democracy. Can holding digital platforms accountable for their surveillance-based business models help neutralize viral falsehoods, hate speech, and other harmful online content?
Misinformation has plagued the 2020 U.S. presidential campaign, and tech companies like Facebook, Google, and Twitter have struggled to manage false claims, hate speech, and extremist content on their platforms. The arrival of the novel coronavirus pandemic—and the so-called infodemic that accompanied it—has only made the challenge more urgent. But despite taking unprecedented steps to counter misinformation, these companies and the policymakers entrusted with regulating them continue to ignore the real source of the problem: the targeted advertising business model and the algorithmic systems that drive it.
Join Ranking Digital Rights (RDR) for the launch of our second #ItstheBusinessModel report on Wednesday, May 27, at 11:30 EDT, where we will propose upstream solutions to the downstream problem of viral harmful content.
In a conversation led by RDR Director Rebecca MacKinnon, we’ll consider how Congress, institutional investors, civil society, and other stakeholders can leverage federal privacy legislation and stronger oversight of corporate governance to address the societal harms caused by misinformation, extremism, hate speech, and other dangerous content—without compromising democracy, fundamental human rights, or the First Amendment.
Rebecca MacKinnon, @rmack
Director, Ranking Digital Rights
Gaurav Laroia, @GauravLaroia
Senior Policy Counsel, Free Press
Nathalie Maréchal, PhD, @MarechalPhD
Senior Policy Analyst, Ranking Digital Rights