Suspending Trump Was The Right Call.
It’s difficult to establish a bright-line test for deplatforming. But in this case, the potential for harm was too great.
We are living through unprecedented and uncertain times, facing threats not only to our safety, but also to our democratic system of government. Applying the principles that guide the Open Technology Institute’s work on protecting freedom of expression in digital spaces is challenging. Last week, the institutions that represent the very core of our democracy came under attack as insurrectionists sought to block the certification of our recent presidential election. Immediately prior to the storming of the Capitol, President Trump spoke at a rally of his supporters and repeated his false claims that the election had been stolen. The President’s words at that rally continued a narrative that he has been driving on social media and spurred his supporters into violent action by stoking their anger and speaking to their darkest impulses. In the days since January 6, we continue to learn more about the harms wrought by the mob President Trump incited, as well as the extent of his efforts on social media to fan these flames and undermine the peaceful transition of power. While the scale and impact of the violence that day were shocking to many of us, the President’s calls to action via his social media accounts made the outcome almost inevitable.
In the aftermath of the insurrection at the Capitol, Twitter permanently banned President Trump’s account, and Facebook and YouTube announced suspensions of his accounts that will last at least through the end of his term as president. Many in the digital rights field, including civil society organizations, thought leaders, and academics, have questioned the validity of these actions and argued that they set a dangerous precedent. Even world leaders weighed in with their concerns. We agree that social media platforms provide a critical forum for free expression online, and that their role as gatekeepers—deciding who has access to this forum and what they can post—means we must carefully question any decision to suspend or stifle speakers. After our review of Trump’s use of social media to incite the violent mob and his continual efforts to encourage their actions, and given the clear content moderation policies of these companies in this instance, we support the decisions by Twitter, Facebook, and YouTube to suspend the President’s accounts.
It is important to recognize that these are private companies, not government entities, making content moderation decisions on their own platforms. While the First Amendment limits the extent to which the U.S. government can dictate what type of content should be taken down or lead to suspension, it does not directly apply to such decisions by private companies. But, as noted, social media serves as an important forum for free expression, and the major tech platforms have established and published rules or community guidelines outlining how they make these content moderation determinations. OTI has long called for strong transparency and accountability measures around these rules. In this instance, Twitter and Facebook spelled out their analyses of how the real-world implications of the President’s online activities could threaten public safety, and YouTube similarly announced its public safety rationale. Further, these decisions seem even more appropriate as messages promising further violence spread among the President’s supporters online.
OTI has supported a public interest notification approach with regard to online content from public officials that violates a platform’s rules. We see this as an appropriate way to balance free expression interests—including the need for the public and particularly the electorate to have information about what government officials and candidates are saying—with the harms that can come from misinformation, disinformation, or violent rhetoric. To that end, we supported Twitter’s implementation of a public interest notification policy that allowed tweets from public officials and candidates (like President Trump) that otherwise violate its policies to remain on the platform with a label informing the user about the policy they violate (e.g. glorifying violence, misinformation).
However, since the election, the balance that the public interest notification policy struck has shifted. The harm from the President’s messaging—both in terms of inciting and glorifying violence, and in terms of threatening our democratic institutions and electoral process—has vastly increased, and a label simply cannot undo the damage caused by allowing the posts to remain, or by allowing President Trump continued access to the platform. The public interest notification policy is no longer adequate to protect against the harms caused by President Trump’s posts.
However, it is extremely difficult to establish a bright-line test for when the public interest notification policy becomes inadequate in other contexts, and many important considerations must be weighed in order to appropriately balance the promotion of free expression online against the harms of violence, attacks on democratic institutions, and the spread of dangerous misinformation or disinformation. We believe it is important to avoid relying on a specific set of events to create generalized rules for when speech by public officials crosses a boundary such that the threat of harm outweighs the free speech interests involved. We cannot yet completely draw those lines for situations going forward. However, we can say that in the case of the President’s activities on YouTube, Twitter, and Facebook, at some point after the election he clearly passed the point where public interest notification was sufficient to avoid the harms his posted content would cause, and deplatforming became the appropriate response.
To be clear, this is not the first instance of the President’s words motivating violent actions. His tweets while in office have prompted xenophobic actions, and his history of encouraging hate groups and political violence is well documented. Yet, last week, Trump incited not only violence, but also a direct attack on our democratic institutions and the transition to a new administration. And the storming of the Capitol led directly to the deaths of five people. This all-too-real consequence of online incitement to violence is a factor that platforms must consider more carefully going forward as they evaluate potential threats to public safety.
These are difficult questions that platforms will have to continue to grapple with, especially as misinformation and disinformation continue to flourish in online spaces. Policymakers are already citing the tech companies’ responses to last week’s insurrection as support for arguments that Congress should take immediate action to regulate tech platforms. But the debate over the role of government in ensuring that we protect free expression online while holding tech companies accountable began long before last week, and it was already apparent that we need a thoughtful and deliberate debate on appropriate policy responses to achieve these concurrent goals. The storming of the Capitol and the role of social media in inciting and responding to those events does not actually change the variables of this equation. However, it does raise the stakes, and likely the temperature, of the debate. Now that companies have responded in an attempt to mitigate the immediate threat of incitement to violence through their platforms, policymakers should take the time needed to develop a measured and appropriate response.
Editorial disclosure: New America has received funding from Facebook and from Google, which owns YouTube. More information on our funding can be found at https://www.newamerica.org/our-funding/.