When Transparency Also Needs Transparency

Weekly Article
Jan. 25, 2018

“If these reports are accurate, we are witnessing an ongoing attack by the Russian government through Kremlin-linked social media actors directly acting to intervene and influence our democratic process.”

Those were the words of Senator Dianne Feinstein and Representative Adam Schiff, in a letter to Twitter and Facebook on Tuesday. The two Democrats are urging the social media giants to investigate—or, in their words, to carry out an “in-depth forensic examination” into—a new propaganda campaign allegedly run by Russia-linked accounts.

The manipulation of major internet platforms by adversaries, most often Russia, came under scrutiny in public and congressional debates at the end of 2017. More than anything, these debates demonstrated how much political discourse in the United States (and in other countries, too) now depends on a handful of powerful platforms: Facebook, Twitter, and Google. At the congressional hearings, politicians chastised the companies for their lack of attention to the activities of Russian operatives. The discussion focused on provocative targeted ads, amplified disinformation, and the mismatch between the companies’ claimed technological superiority and their inability to spot political ads targeting American voters and paid for in Russian rubles.

But one important issue went largely unaddressed: whether armies of trolls tried to manipulate the companies’ content removal processes and silence opposing views by submitting policy violation reports against authentic accounts. To address this concern, the platforms ought to begin disclosing more data in their transparency reports about terms of service enforcement and about the nature of content taken down at governments’ request.

The Trouble with Trolls

Facebook and Twitter reported that, in 2017, they identified and blocked tens of thousands of fake accounts that were involved in spreading election-related content in the United States, Germany, and France. Many of these accounts were linked to the Russian Internet Research Agency, a now-famous troll farm. While the companies demonstrated that they’re finally learning how to distinguish trolls from authentic users, and are starting to pay more attention to the trolls’ activities, they stopped short of revealing whether these trolls tried to influence the companies’ removal procedures—and whether they restricted any content or accounts as a result.

Disinformation thrives and trends when promoted by troll armies. But that isn’t the whole picture: The effect of disinformation is amplified when internet gatekeepers silence alternative views, whether at the request of governments, state-sponsored actors, or private parties.

This isn’t to suggest that the issue is new. Several years ago, for instance, Facebook was already facing allegations that Russian trolls had misused its reporting mechanism. In May 2015, in the midst of the Russo-Ukrainian conflict, thousands of Ukrainians appealed to Mark Zuckerberg to address the problem of massive numbers of fake abuse reports coming from Russia. Even Ukraine’s President Petro Poroshenko tried to get the attention of Facebook’s management. A later petition on Change.org, this time also endorsed by thousands of Russian activists, put it bluntly:

For a long time we have seen the same scenario play over and over again. An army of shills on state payroll have been daily submitting thousands of policy violation reports, targeting popular bloggers who dare to criticize the Russian government. Facebook indiscriminately reacts to these reports by blocking the accounts of prominent Ukrainian public figures and Russian dissenters. Lately, the bans have become so frequent that we can now claim that Facebook has become an efficient tool of the Kremlin.

Facebook dismissed the allegations. “We did the right thing according to our policies in taking down the posts. I support our policies in taking down hate speech,” Zuckerberg said. And in response to the Change.org petition, Thomas Myrup Kristensen, head of Facebook’s office in Brussels, denied that large numbers of complaints influence its decisions: “It doesn’t matter if something is reported once or 100 times, we only remove content that goes against [our] standards.”

But here’s the problem with these denials: There’s no way to verify them. According to the 2017 Ranking Digital Rights Corporate Accountability Index, which evaluates internet companies on disclosure practices that affect users’ freedom of expression and privacy, Facebook publishes no data about its terms of service enforcement. Nor does it disclose whether governments or private entities receive priority consideration for the content they flag. (Full disclosure: I’m a senior research fellow with Ranking Digital Rights.)

Such secrecy is understandable. These platforms are privately owned, and, to a large extent, they operate in their own closed universe according to their own private laws (content policies, terms of service, and community rules, for instance). What we see, or don’t see, on these platforms is decided exclusively by their content moderators, lawyers, and executives. They have the right to control how they enforce those regulations, without any obligation to disclose enforcement data. They aren’t accountable to the public unless they choose to be, or unless they’re summoned to Capitol Hill. Without data, the public is left with a tough choice: to trust Change.org petitions signed by thousands of affected users, or a carefully drafted denial from a corporate executive.

In an ideal world, we could rely on the judgment of Silicon Valley executives about what content should be acceptable online. After all, they’ve all made respectable promises not to be evil, to give everyone the power to share ideas and information, and to protect that freedom.

But history shows that neither humans nor the algorithms they develop are flawless. When faced with public outcry and the risk of bad publicity, the companies do correct their mistakes. However, there ought to be a more robust, transparent, and reliable feedback mechanism—one that would incorporate the collective wisdom of the billions of social media users.

Moving Transparency Forward

How might the companies move in this direction? In their Washington Post op-ed, Wael Ghonim and Jake Rashbass proposed a public interest API model, which would offer a degree of visibility into the contents and origins of materials deleted from social networks. While promising, this idea may take a long time for companies to internalize and implement. So they’d be wise to start by disclosing more data in their transparency reports about how they enforce terms of service, and about the nature of content they ultimately remove at governments’ request. At the end of the day, both the companies and their users would benefit from better transparency.
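The op-ed describes the model only at a high level, so a minimal sketch may help illustrate the kind of data such an API could expose. The TypeScript below is purely illustrative: the record shape, field names, and the removalsByAuthority helper are my own assumptions, not part of Ghonim and Rashbass’s proposal or of any existing platform API.

```typescript
// Hypothetical sketch only: one possible shape for a record exposed by a
// "public interest API". Field names are illustrative assumptions, not part
// of any real platform API or of the original proposal.
interface RemovalRecord {
  contentId: string;            // opaque identifier, not the content itself
  platform: string;             // e.g. "facebook", "twitter", "youtube"
  removedAt: string;            // ISO 8601 timestamp of the takedown
  basis: "terms_of_service" | "government_request" | "court_order";
  policyCited?: string;         // e.g. "hate speech", when ToS-based
  requestingAuthority?: string; // e.g. a regulator, when government-requested
  reportCount?: number;         // number of user reports preceding removal
  reinstatedOnAppeal?: boolean;
}

// With records like these, outside researchers could answer basic questions,
// such as how many removals trace back to each requesting authority.
function removalsByAuthority(records: RemovalRecord[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const r of records) {
    const key = r.requestingAuthority ?? "none";
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}
```

Even aggregate data published in something like this form, on a regular schedule, would let outside observers check claims like Kristensen’s without exposing any user’s private information.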

For one, ramped-up transparency would help ensure a more consistent approach to questionable content across platforms. In today’s social media world, the same content can be cross-posted and cross-referenced on several platforms simultaneously: users can embed a YouTube video in a Facebook post and reference a Facebook post on Twitter. Yet consider how YouTube’s and Facebook’s definitions of “hate speech” differ. Facebook’s definition doesn’t include “age,” while YouTube’s does. Though it’s likely safe to assume that this is a simple omission, and that Facebook wouldn’t tolerate any sort of discrimination or hatred, the difference speaks to how the underlying rules defining hate speech can vary. This, in turn, creates the risk of unintended inconsistency in interpretation and take-down decisions between the two platforms—for the same piece of content.

In addition, and maybe more salient to ongoing political chatter, greater transparency would help to mitigate platform manipulation. Just look at government requests that have demanded the removal of content that allegedly violates local laws. According to Twitter’s transparency reports, more than a third of all such requests received between July 2015 and June 2016 came from the Russian censorship authority Roscomnadzor. Most of the reported content ended up being taken down for violating terms of service, which makes the Russian government probably the most vigilant and trusted overseer of compliance on Twitter’s platform. This raises an important question: Does Twitter process requests from Roscomnadzor with the same level of priority as it does for all other users who complain about bad content?

On top of all that, publishing more data about terminated fake accounts, and about content violation reports submitted against legitimate users, wouldn’t only help the public understand the extent of malicious actors’ activities; it’d also enable civil society groups, academics, and users to participate in the debate on countering foreign propaganda and disinformation. Taking no action and keeping crucial data hermetically sealed would only feed concerns on both sides: The platforms’ user communities might feel that content is being excessively filtered, while legislators might fear that companies aren’t doing enough to fight illegal content.

Merely shaming Facebook, Twitter, or Google for a lack of transparency isn’t constructive; it would do little more than further demonize them. (Remember: These companies set high expectations for transparency and respect for international human rights when they started publishing transparency reports.) And yet, as committed as they are to calling for government transparency, they ought to be equally committed to transparency about their own practices.