Indirect Swarming and Its Threat to Democracies

A New Frontier in Online Harassment
[Image: A woman running from eyes, hands, and speech bubbles coming from a screen; a representation of indirect swarming.]
Jan. 29, 2024


Internet users may remember “Gamergate” in 2014: What began as an inflammatory blog post by a former partner targeting Zoë Quinn, a video game developer, exploded into a misogynistic online harassment campaign against women gamers. The harassers sent death and rape threats and engaged in “swatting,” the dangerous practice of making hoax emergency calls to dispatch armed police to a target’s home, driving many women out of the industry.

Gamergate set the template for what is now known as “networked harassment,” a phenomenon in which an amplifier, a highly networked account, signals to their followers to harass a target or targets. Under the veneer of trolling (“it was just for fun”), the amplifier denies any intention to harass, using coded language that only their online community understands. Gamergate provided the playbook for creating cracks in a democracy’s foundation at a speed and scale largely unseen in the offline world, where such efforts require far more time and resources to organize.

A decade after Gamergate, many legal systems, including here in the United States, continue to take a perpetrator-centered approach that is ill-suited to networked harassment, where there may be no clearly identifiable perpetrator. Harassment at the speed and volume seen during Gamergate is akin to mob violence in the offline world. Our legal systems are playing catch-up with rapidly advancing technology. And while they work out the right balance between online safety and free expression, social media services must contend with harmful, offensive, socially damaging speech on their platforms. They must navigate varying cultural norms and legal rules about impermissible speech across countries, and they are generally attempting to do so with trust and safety teams and budgets that have contracted since 2020.

These trends come at an alarming time. This election year, at least 70 countries, home to half the planet’s population, are going to the polls. From Bangladesh and Taiwan in January to Ghana and Venezuela in December, these contests comprise one of the largest combined electorates in history. A political candidate in one of these elections who is a woman, identifies as nonbinary, or is a person of color unfortunately faces a disproportionate likelihood of being harassed online. There is thus an urgency to conceptualize and address this issue, because amplifiers who instigate networked harassment can currently act without fear of consequences.

To combat such attacks, we at the Open Technology Institute propose using a public health lens that focuses less on the intent of the harasser and more on the outcomes affecting the victims and society at large. We know that networked harassment during elections is likely to have a chilling or silencing effect on political candidates who are targeted. The literature on this topic includes accounts of “brigading” and “dogpiling” to describe coordinated online attacks instigated by an amplifier that result in a sudden increased volume of attacks on a target.

We call this “indirect swarming,” and there is a real danger of such harassment turning into real-world violence. We propose a new method, which we call the “protective correlate,” to help platforms inoculate users against the risks of indirect swarming. The combination of under-resourced social media companies and the continued lack of legal conceptualization around networked harassment has enormous consequences for platform accountability and our democracies. This policy brief lays out how to deal with this threat to democracy by detailing the fundamental characteristics of indirect swarming, suggesting methods that could mitigate its malign effects, and offering policy recommendations.

The Dangers of Indirect Swarming

Indirect swarming is defined as harassment that increases exponentially and is characterized by the presence of an amplifier whose covert signaling to their followers precipitates a sudden spike in harassing posts and engagement activities, followed by a steep decline. Indirect swarming is also characterized by the amplifier’s use of linguistic innovations to circumvent content moderation on social media platforms: memes, symbols, irony, and phrases like “You know what to do!” that maintain plausible deniability while evading content moderation filters. To our knowledge, this form of networked harassment has not been quantified, nor has the term “amplifier” been explained in detail. This brief and a forthcoming publication I am working on with my research team are the first to do so (Abdul Rahman et al., forthcoming).

The lack of language and legal conceptualization around the phenomenon of indirect swarming has real consequences: First, there is no consensus on the significance of indirect swarming despite its potential mental health and safety implications for targets. A large and ever-growing body of research demonstrates that online harassment has a chilling or silencing effect on speech and can be used to police people’s behavior online and exclude them from the public sphere—both online and offline. Users may simply choose to leave the platform rather than report the abuse.

Second, the risks of indirect swarming are likely to disproportionately impact women, people of color, and those who identify as LGBTQIA+, among other historically marginalized communities.

Third, because amplifiers use coded language and covert signaling, indirect swarming is likely to go undetected or unflagged by platforms, especially those reliant on automated, artificial intelligence (AI)-driven moderation tools. Thus, examining this phenomenon and its empirical patterns is critical if platforms, and eventually laws, are to address it effectively. More accurate reporting by users, and expanded means for them to report, would also allow us to hold platforms to greater accountability and transparency.

Indirect swarming also calls attention to the risks generative AI poses on online platforms during elections, and it underscores the transformative nature of social media, which has generated new modes of social interaction and communication. Generative AI accelerates bad actors’ ability to mislead and influence voters at speed and at scale. What do I mean by this?

First, the development of generative foundation models (GFMs) allows information to be recombined quickly, convincingly, and at scale, in a kind of GFM-driven deception. GFMs enable novel forms of communicating and co-creating, so that political messaging for electoral campaigns can be nuanced and targeted at specific communities.

Second, there is also the issue of generative AI in political advertising. So-called “deepfake” technology allows the voice and likeness of political candidates to be used in nefarious ways. In 2023, Minnesota, Michigan, and Washington enacted legislation to address this, and three more states, Florida, South Carolina, and New Hampshire, are considering legislation that would govern the use of AI in this manner.

Amplifiers, as a category of actors, could instigate the targeting of political candidates. And their followers can harness generative AI to harm these targets at a scale and speed internet users around the world have not seen before. It is a dangerous time for democracies.

Amplifiers as a Category of Actors

In our work (Abdul Rahman et al., forthcoming), my research team and I suggest possible ways to categorize amplifiers. There are “owner amplifiers” like Elon Musk, who owns the platform X, formerly Twitter; those with outsized influence such as Mike Cernovich in the alt-right space; and more traditional influencers in niche communities such as Tim Pool and Lara Logan. We suggest that more case studies are needed to understand the different categories of amplifiers. We quantify amplifiers’ influence in a case study, showing how their posts catalyzed a swarm within minutes of posting. At the peak of the indirect swarming, the women targeted after resigning from Twitter’s Trust and Safety Council were receiving 4,000 notifications every 20 minutes from swarmers’ tweets, quote tweets, replies, and retweets.

The phenomenon of networked harassment in general, and indirect swarming in particular, reveals the limits of the law in policing perpetrators online. However, we argue that understanding how amplifiers contribute to the outcome of indirect swarming is important, as it demonstrates the novel risks and harms emerging from online platforms that warrant the development of new governance approaches—by companies and governments.

Generative AI could add yet another complicating factor to the indirect swarming phenomenon if it is deployed to harass targets. Generative AI poses a real risk that users may not know who or what they are engaging with, believing instead that there is a real user at the other end. Indeed, experts have already seen this phenomenon operate at significant scale and speed with the use of bots to spread disinformation. Generative AI thus threatens to supercharge misinformation and disinformation by making it simpler for amplifiers to create and disseminate, with posts that are more convincing and messaging that is more targeted. Generative AI could therefore be deployed by amplifiers’ followers to scale up their harassment campaigns.

How does this affect other users on the platform? Does the use of generative AI by amplifiers make it more difficult for non-amplifiers to express themselves? This idea tracks with the interests underlying the First Amendment, which protects acts or words that express a message, idea, or viewpoint from restriction by the U.S. government. Would the indirect swarming of political candidates lead to their being silenced or, worse, leaving the platform altogether? Indirect swarming could thus further impinge on their rights to express themselves on such platforms without an unreasonable fear of harassment. In indirect swarming, there are two categories of actors that can be considered vectors of harassment: the amplifier, who singles out the target and covertly signals to a niche audience, and those who engage in the direct swarming online or offline. In the context of elections, it is worth establishing guardrails.

My research team and I thus argue that a First Amendment-protective approach would allow for the regulation of foreign amplifiers and their behaviors to safeguard America’s democracy. We provide two reasons. First, digital social technology has fundamentally changed the foundations of society’s norms and institutions. Second, the repercussions of the technological changes as described above are severe for this democracy. To be clear: We are not advocating for the censorship of political speech, nor curtailing the rights of voters to register legitimate criticisms of their elected officials. Rather, we wish to call attention to the fact that large-scale, intense harassment that has the potential to incite offline violence works against the principles of free and open expression by silencing targets. In other words, the harassment we are concerned with lessens the amount of speech in the public sphere and works as a de facto form of political censorship.

The Protective Correlate

A protective correlate is a term used in biostatistics to denote a measurable marker that is statistically related to a clinical endpoint. My research team and I propose measuring the ratio of engagement activities to the number of posts at a particular point in time as the basis for this protective correlate (Abdul Rahman et al., forthcoming). The method is based on the observation that indirect swarming can be characterized by certain distributions of posts over time, including the composition of posts and engagement activities, as well as their linguistic properties. To detect cases of indirect swarming, we model the distribution of tweets—including their composition and linguistic properties—and determine parameters of these distributions that are indicative of indirect swarming. We do not advocate for a fully automated approach, as online platforms should account for intersectional identities and how they impact users differently, not to mention the specific contexts of harassment and the histories of the players involved.
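To make the intuition concrete, the "sudden spike followed by a steep decline" can be surfaced with a simple volume heuristic: bucket posts into time windows and flag any window whose count far exceeds the trailing baseline. This is only a minimal sketch of the idea, not the protective correlate method itself; the function name, window sizes, and threshold are illustrative assumptions, and the forthcoming paper's model also incorporates post composition and linguistic properties that this sketch omits.

```python
from collections import Counter

def flag_volume_spikes(timestamps, window=300, baseline_windows=12, threshold=5.0):
    """Flag time windows whose post volume spikes far above the trailing baseline.

    timestamps: post times in seconds (e.g., Unix epoch seconds).
    window: bucket size in seconds (300 = 5-minute windows).
    baseline_windows: how many preceding windows form the trailing baseline.
    threshold: multiple of the baseline mean that counts as a spike.
    Returns the start times (in seconds) of flagged windows.
    """
    buckets = Counter(int(t // window) for t in timestamps)
    if not buckets:
        return []
    start, end = min(buckets), max(buckets)
    flagged = []
    for b in range(start + 1, end + 1):
        prior = [buckets.get(b - i, 0) for i in range(1, baseline_windows + 1)]
        baseline = max(sum(prior) / len(prior), 1.0)  # floor at 1 to avoid division by zero
        if buckets.get(b, 0) / baseline >= threshold:
            flagged.append(b * window)
    return flagged
```

A human reviewer would then examine flagged windows for the other markers of indirect swarming, such as an amplifier's post shortly before the spike, consistent with the non-automated approach advocated above.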

A target who has been harassed or a bystander must be able to report indirect swarming to social media platforms. The platform staff can then run the protective correlate and determine if there is indeed a sudden rise in the volume of posts within a short time of an amplifier posting. If so, the platform staff can act to limit or freeze engagement activities on the target’s post so that the swarming activity comes to a halt. One change we propose for platforms that do not already have such a feature is to establish a “circle of friends”: those who run the risk of facing indirect swarming could appoint up to five friends to check their timelines for them and report any incidents of harmful activity. Through this mechanism, a trusted circle who can intervene reduces the disproportionate burden on the target facing indirect swarming. The concept of “close friends” already exists on several platforms, though not for this particular use case; its adoption by users globally suggests that such a feature would be familiar and readily used.
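The delegation logic behind the "circle of friends" idea is simple enough to sketch: a target designates up to five trusted accounts, and the platform accepts harassment reports on the target's behalf only from the target or those delegates. This is a hypothetical illustration of the proposal above, not an existing platform API; the class and method names are my own assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CircleOfFriends:
    """Trusted delegates who may report harassment on a target's behalf."""
    owner: str
    max_size: int = 5  # cap of five friends, per the proposal above
    members: set = field(default_factory=set)

    def add(self, username: str) -> bool:
        """Add a delegate; refuse once the circle is full."""
        if username in self.members:
            return True
        if len(self.members) >= self.max_size:
            return False
        self.members.add(username)
        return True

    def may_report_for(self, username: str) -> bool:
        """Only the owner or a designated delegate may file a report."""
        return username == self.owner or username in self.members
```

A platform's report-intake flow could call `may_report_for` before accepting a swarming report, routing accepted reports to the protective-correlate check described above.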

Future Research

Indirect swarming is a little-understood phenomenon, although it has likely been very widely experienced. Its characteristics and consequences urgently deserve further study. Future research should develop additional case studies to understand the different categories of amplifiers. It may also be worth exploring the relative influence or popularity of a target versus an amplifier in the case of indirect swarming, and how this may impact the level of swarming experienced by the target.


A decade after Gamergate, social media platforms and governments around the world are failing to help targets of networked harassment. Indirect swarming, a new phenomenon within this taxonomy of networked harassment, is one I predict will become salient in this year of elections. The changes on social media and the advent of generative AI warrant serious efforts to address them and better protect users online. If amplifiers have their way on online platforms and can direct their followers to harass political candidates, and if these followers are able to leverage generative AI to make their posts more convincing and their messaging more targeted, many democracies are likely to face significant challenges. The “protective correlate” is a method that my research team and I have proposed to protect users from indirect swarming. Social media platforms should study and adopt this method to better protect users and safeguard our democracy. Researchers and companies should also conduct research on the various categories of amplifiers and their impact on online platforms.


The author would like to thank Stephen Rea, PhD, for his ideas and contribution to this piece.


E. Abdul Rahman, S. Rea, G. Campaioli, S. Di Bartolomeo, S. Mehta, M. Trans, D. Keim, L. Wörner, and B. Ochsner, forthcoming, “The Anatomy of Indirect Swarming and Its Potential Threat to Democracies.”
