The Digital Frontier Promised Us a More Open and Democratic “New Normal.” This Isn’t It.

July 6, 2021

Years before Darnella Frazier filmed the police murder of George Floyd, there was Khaled Saeed, a young Egyptian man who was beaten to death by police in 2010. Photos of the 28-year-old’s bloodied corpse, taken and circulated by Saeed’s brother, helped to galvanize Egypt’s historic 2011 protests in Tahrir Square. In the digital age, new normals come fast. Just ten years ago, the act of filming an incident of police abuse and sharing it with the world in a few clicks was revolutionary. Today, police worldwide expect to be filmed, to the point that they increasingly wear cameras on their own bodies.

But social movements’ reliance on these tools has come at a cost. Activists went from surveilling systems of power and oppression to being surveilled themselves. Online organizing on platforms like Facebook and Twitter exposed citizen-journalists, activists, and human rights defenders to surveillance from state adversaries, both with and without cooperation from tech companies. The 2013 Snowden revelations showed just how vulnerable we all are to state surveillance, especially by the U.S. government and the rest of the Five Eyes.

And governments weren’t content to keep tabs on us through our tech: increasingly, they also asserted control over what we say, read, and watch online. Government-mandated censorship interfered with activists’ work, as did overly broad enforcement of companies’ rules for user-generated content. As platforms morphed from VC-funded startups to corporate behemoths, their products increasingly prioritized engagement metrics, creating an online media ecosystem begging to be exploited by pernicious influence operations.

Ten years later, we’re a far cry from the new normal that the digital frontier seemed to promise. In Egypt, digital surveillance seems more pervasive than ever, Putin is (again, and still) the president of Russia, and Tunisia is the only country to have made it through the Arab Spring with anything resembling a transition to democracy. Post-Trump, even Americans, in our steadfast optimism that technology, innovation, and free markets will save the day, have soured on digital technologies’ democracy-enhancing promise. And the COVID-19 pandemic means that more people are relying on digital technology to work, play, and just live than ever before. Very few of us have the luxury of going off the grid, or opting out of technologies that collect our data, monetize it, and sometimes use it against us.

If we want to preserve our democracies and protect our rights, this new normal is not tenable. We must build something better. Our research group at Ranking Digital Rights is working on one critical piece of this: pushing Big Tech to respect people’s rights.

In her 2012 book Consent of the Networked, Rebecca MacKinnon called on global civil society to hold private companies accountable for their role in violating internet users’ privacy, freedom of expression, and other human rights. The next year, she founded Ranking Digital Rights to distill the expert consensus on business and human rights in the tech sector into a normative methodology that could be used to assess some of the world’s most powerful tech companies’ respect for human rights (or lack thereof) and to rank them against each other. The idea was to encourage a race to the top wherein companies would compete to improve their ranking, and investors committed to social responsibility would use these rankings to decide where to invest.

If MacKinnon’s vision was deceptively simple, executing it was anything but. We spent the better part of three years identifying the core issues where tech and human rights intersect, determining what companies should do to demonstrate respect for human rights, and scouring literature on freedom of expression, privacy, data protection, network shutdowns, state surveillance, and more. We held dozens of stakeholder consultations with experts from academia, civil society organizations, government, and companies themselves to ensure that the standards we were setting would represent the collective best wisdom of the global digital rights community. The inaugural RDR Corporate Accountability Index was published in late 2015.

Five iterations later, we know the RDR Index and our advocacy have an impact. The RDR Index methodology, regularly updated to account for new technological and geopolitical developments, is recognized in our field as the gold standard of corporate norms for tech and human rights.

Basic transparency about government demands for user data and for content removals, and about content moderation on social media platforms, is now completely normal. This is a sea change from 2015, when company representatives confidently told us that our requests were unrealistic and even counterproductive: somehow, the companies argued, transparency about content governance would only enable unidentified "bad actors" to game the system, whatever that means. Companies also used to insist that mass-market messaging apps would never be end-to-end encrypted, even accusing us of being on an “ideological crusade” too out of touch with reality to take seriously. Today, the most popular messaging apps offer at least the option of end-to-end encryption.

There’s a lot of work yet to be done, and there’s room for a wide range of accomplices in this fight for corporate transparency and accountability for human rights in the tech sector. This is especially true because tech companies, motivated by greed and blinded by the convenient fiction that more technology is always better, will continue to roll out new products and services without conducting adequate due diligence on the risks they could pose for human rights. At this rate, we may never get to an idyllic new normal. But we can keep bending the long arc of history further toward justice, so that technology can continue to be leveraged in the public interest, enabling people to speak, seek information freely, and document injustice without fear that the camera is also pointing back at them.

You May Also Like

Will we Govern Tech? Or Will we Let it Govern Us? (Ranking Digital Rights): In this essay, we argue that without more transparency from tech companies, we risk losing their benefits as we try to mitigate their harms.

Moving Fast and Breaking Us All: Big Tech’s Unaccountable Algorithms (Ranking Digital Rights): This piece shows how much of the technology that is driving revenue for the world’s most powerful digital platforms is accountable to no one—not even the companies themselves.

It's Not Just the Content, It's the Business Model: Democracy’s Online Speech Challenge (Ranking Digital Rights): This report articulates the connection between surveillance-based business models and the health of democracy.
