The Market vs. Democracy

Feb. 8, 2018

If you spend enough time browsing social media, there is a chance you saw an intriguing story shared and re-shared in recent days about how agents of NATO—a long-standing strategic alliance between the United States, Canada, the United Kingdom, and most of continental Europe west of Kharkiv, Ukraine—sprayed chemicals over Poland to damage the well-being of the local population. The original Polish-language account has been spread far and wide with great certitude. Given that you are reading this Slate piece about internet-based disinformation, you may already suspect the truth: The Poland story is entirely fake. But would you have been so skeptical if you had seen it shared on social media by the people you trust most?

In recent days, researchers have shown that agents of the Russian government have pushed the Poland story—an example of pure disinformation in its most egregious form—on the most visible social media platforms. And though the long-standing chemtrails conspiracy theory has been repeatedly and verifiably debunked, many social media users continue to believe it, making them particularly vulnerable to the false story about chemicals sprayed on an unwitting population. We know that these sorts of conspiracy theories do not necessarily recede with time. Instead, they are often so intelligibly and inflammatorily recounted that they continue to spread, affecting susceptible readers who might not question their veracity or the motivations of their propagators.

These stories aren’t harmless. Consider the foreign policy implications if large numbers of people in Poland (and other countries that sit squarely between the spheres of Western and Russian power) believed this conspiracy theory. Needless to say, widespread belief in the concocted account of a NATO plot to poison swaths of Eastern Europe is hugely beneficial to Russia, which for years has sought to taint the Western alliance’s image and thereby undermine its mission.

The purveyors of disinformation have clearly determined that large-scale social media platforms offer a tremendous opportunity to move people to believe their messaging. Key to their ongoing success is their use of the audience segmentation tools developed by the leading internet advertising platforms. Using these technologies, disinformation operators can target demographic groups that are homogenous across a certain set of characteristics—for instance, groups of strongly liberal marginalized teenagers who live in large American cities and who take an interest in reading about the events that took place last year in Charlottesville, Virginia—with great precision and accuracy.
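
To make the mechanics of that targeting concrete, here is a minimal sketch of how an audience segment can be expressed as a set of criteria and matched against user profiles. The field names and the Segment/Profile structures are hypothetical illustrations, not any real platform’s advertising interface.

```python
# Hypothetical sketch: an audience segment as a bundle of targeting criteria.
# These fields and structures are invented for illustration; they do not
# correspond to any real platform's advertising API.
from dataclasses import dataclass, field

@dataclass
class Profile:
    age: int
    city_population: int
    political_lean: str           # e.g. "strongly_liberal"
    interests: set = field(default_factory=set)

@dataclass
class Segment:
    min_age: int
    max_age: int
    min_city_population: int
    political_lean: str
    required_interests: set

    def matches(self, p: Profile) -> bool:
        # A profile joins the segment only if it satisfies every criterion,
        # which is what makes the resulting audience so narrow and homogeneous.
        return (self.min_age <= p.age <= self.max_age
                and p.city_population >= self.min_city_population
                and p.political_lean == self.political_lean
                and self.required_interests <= p.interests)

# The example segment from the text: liberal teenagers in large U.S. cities
# who follow news about Charlottesville.
segment = Segment(13, 19, 1_000_000, "strongly_liberal", {"charlottesville"})
user = Profile(17, 2_700_000, "strongly_liberal", {"charlottesville", "music"})
print(segment.matches(user))  # True -- this user would receive the targeted content
```

The point of the sketch is that each added criterion narrows the audience further, so a message can be placed in front of exactly the group most receptive to it.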

As my co-author, Ben Scott, and I describe in a recent report on the ways that disinformation operators leverage web technologies, herein lies the fundamental flaw in the market logic underlying the largest internet platforms. (Disclosure: Scott and I are affiliated with New America; New America is a partner with Slate and Arizona State University in Future Tense.) The digital advertising ecosystem has, over the past 15 years, solidly established itself as the de facto economic backbone of the commercial internet. It implicitly aligns the interests of leading internet platforms that own and operate the world’s largest advertising markets with those of advertising clients themselves, whether they are consumer-facing retail companies trying to sell shoes or foreign actors with nefarious intent. That makes tackling the disinformation problem all the more difficult.

To date, the public knows little about what, exactly, the big tech companies are doing to identify and work against the efforts of propagandists. The industry has proposed measures promoting greater transparency, and that is all well and good. Requiring that ads disclose who paid for them could help researchers and journalists, particularly if the advertisers have political motives for their ad campaigns. But I fear that transparency will do very little to limit the effects of disinformation operations.

A more thorough solution must begin by separating the interests of the disinformation agent from those of the internet platform. In the short term, internet companies might decide to limit the activities in which known disinformation agents can engage on their platforms. On Wednesday, for instance, Twitter announced a change to how embedded tweets display on other websites; April Glaser writes on Slate that this may help fight the bot problem by representing the relative popularity of shared content more accurately.

Further down the line, the industry might begin to try solving this problem at scale by developing advanced algorithmic technologies, such as artificial intelligence systems that can detect, flag, or proactively act against suspected attempts to spread disinformation. For example, Facebook, a company that I have worked for, has already begun taking steps to automatically detect and remove fake accounts and interactions from the platform and says it deleted tens of thousands of fake accounts in Germany before the country’s 2017 federal elections.
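
As a rough illustration of what automated detection involves, the sketch below scores a few behavioral signals and flags an account for review when the combined score crosses a threshold. The features, weights, and threshold are invented for this example; real systems rely on far richer behavioral data and machine-learned models rather than hand-written rules.

```python
# Hypothetical sketch of signal-scoring for suspected fake accounts.
# All features, weights, and the threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int
    posts_per_day: float
    followers: int
    following: int
    duplicate_post_ratio: float   # share of posts nearly identical to others

def suspicion_score(s: AccountSignals) -> float:
    score = 0.0
    if s.account_age_days < 30:
        score += 0.3                       # very new accounts are weak evidence on their own
    if s.posts_per_day > 50:
        score += 0.3                       # inhuman posting rates
    if s.following > 0 and s.followers / s.following < 0.01:
        score += 0.2                       # follows thousands, followed by almost no one
    score += 0.2 * s.duplicate_post_ratio  # coordinated accounts often repost the same text
    return score

def flag_for_review(s: AccountSignals, threshold: float = 0.6) -> bool:
    return suspicion_score(s) >= threshold

bot_like = AccountSignals(account_age_days=5, posts_per_day=120,
                          followers=3, following=4000, duplicate_post_ratio=0.9)
print(flag_for_review(bot_like))  # True
```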

Such near-term efforts around transparency and the automated detection of policy-violating content will help. But these types of solutions will likely do little to limit the threat presented by nefarious disinformation operators who closely monitor these changes and constantly devise strategies to work around them. The industry must act on this knowledge and work with government and civil society to counter and eradicate the deep-rooted societal harms wrought through long-term behavioral data collection, digital advertising audience segmentation, and targeted dissemination of sponsored and organic content.

Meanwhile, regulators around the world are already vociferous about the dangers that they believe leading internet platforms pose to society. They rightly argue that we need comprehensive privacy and competition policy reforms to limit the impact of disinformation and other broad concerns surfaced by the leading internet platforms in recent years. What exactly should these reforms look like? I wish I knew, but for now I don’t. These are thorny problems. But acknowledging the fundamental alignment between the goals of the platforms and the disinformation purveyors is the right place to start this inquiry. If we pretend that the digital advertising industry’s business model has nothing to do with the ease with which bad actors can plant false stories, then we are missing something critical.

We have entered a new age defined by the digital technologies we have come to adore. Where the television and telephone once dominated, over-the-top video and social media now pervade. But with these changes comes a new set of challenges in keeping our society safe and equitable. That means prioritizing our democracy over the market.

This article originally appeared in Future Tense, a collaboration among Arizona State University, New America, and Slate.