Democracy in the Digital Wild

Can Democracy and Digitization Co-Exist?
May 3, 2023

Nearly 35 years after the World Wide Web's inception, the relationship between the internet, emerging technology, and democracy has never looked more uneven or uncertain. On the one hand, digital connectivity has been a boon for democratic speech and participation, allowing people all around the world to organize, scrutinize governments, and make their views heard. On the other hand, the digital domain appears to be an increasingly fractious and wild frontier where threats to human security and anti-democratic practices are on the rise.

Autocrats may now use digital technologies to surveil and control their populace, as well as repress dissidents. Artificial intelligence is accelerating the risk that discrimination and inequality will become more entrenched. Global data flows are set to triple by 2026, according to the UN, while the connectivity divide between rich and poor countries and communities is growing. Online platforms based on algorithms designed to capture attention and amplify divisive content are fueling political and social polarization, reducing trust in institutions, and causing an information crisis, all of which are destabilizing democracies and contributing to political violence around the world.

The digital domain's novelty, complexity, and rapid innovation make it challenging for governments to keep up with and efficiently regulate emerging technology. As a result, states, organizations, and communities are increasingly divided about the nature of sovereignty, privacy, public goods, and human security. Yet current venues for resolving disputes over tech governance are too limited in scope and ambition to give the multiplicity of stakeholders with a vested interest in outcomes meaningful avenues to shape the digital future.

On aims, there appears to be universal agreement among open societies. Governance should focus on preventing, managing, and resolving digital harms while also preserving human rights and ensuring security. Simultaneously, it should foster innovation, creativity, and open collaboration, ensuring that digitization serves as a driver of human prosperity. Less evident is what types of governance may achieve such goals, especially given the contradictions that exist between them, not to mention the difficult political and technical hurdles of establishing and enabling such institutions amid power disparities, rapid technological change, and a fracturing of the global order. All of the above circumstances leave open to question what it means when technologists, corporations, governments, communities, and citizens say they want “democratic digital governance.”

In January, New America’s Planetary Politics program joined with the Sie Center at the University of Denver’s Korbel School and the Denver Democracy Initiative to explore this question and the challenges of preventing, mitigating, and managing digital harms. Two dozen top academics and civil society leaders with experience in digital rights and governance from North America, Latin America, South Asia, and Europe attended the workshop. Our conversation began with a debate about what “democratic governance” means in the context of the digital domain. We landed on a working definition that consists of two mutually reinforcing parts. “Governance” refers to the systems and processes that affect how applications and technologies work. “Democratic” refers to participatory oversight of those systems, not to a political system with elected representatives.

We then considered three models for achieving democratic governance. The first was the "gatekeeper" concept, which focuses on empowering institutions to police digital space—that is, to establish and enforce norms, regulations, and other safeguards to prevent harm and defend rights. Governments, particularly democratic ones, are traditionally regarded as the most legitimate gatekeepers. The second model, by contrast, stresses increased public participation as a means of achieving democratic governance goals. Policies, institutions, and the design of online spaces, in this vision, would attempt to increase individuals' ability to view themselves as part of a polity and then act in its interests. This concept is based on the pragmatist intellectual tradition, in which the American educator and philosopher John Dewey defined democracy as a way of life rather than a set of institutions. The third model, balance of power, seeks democratic outcomes by checking concentrations of power, for example through antitrust regulation that uses public authority to diminish private dominance.

Our conversations focused on how these three governance models have played out since Sir Tim Berners-Lee first issued his historic 1989 proposal on the management of distributed information systems. What we agreed on is that digital governance is now an urgent global priority, but there is no one-size-fits-all solution or set of solutions. Governance of the digital wild can’t and won’t happen in a vacuum. We need to keep in mind that neither governments nor tech corporations are monoliths; conflict between states and technology firms over regulations is as much a reflection of internal tensions as it is of external pressures. Recent history has lessons for us when it comes to rule-making, norm-setting, and harm prevention, mitigation, and management. However, analogs from the physical world—like maps with hard geographic borders and boundaries—are likely to gain the greatest traction fastest with most stakeholders, yet they may not make sense when applied in a domain where power is distributed and boundless. Most importantly, pragmatism and a sense of perspective can go a long way toward improved digital governance. Tensions are inevitable; we just need to remember that democracy is a generational process, not an idyllic destination.

Digital Governance Was Always Hard, But Now It’s Getting Harder

Though the internet and then later social media applications such as Facebook and Twitter were initially celebrated as instruments of democracy, the last decade has revealed the darker possibilities of digital technologies, as harms ranging from election interference to financial crime to mass violence have proliferated. In Myanmar, Facebook’s algorithms amplified disinformation and hateful content that incited violence against Rohingya Muslims, contributing to a brutal ethnic cleansing campaign that displaced more than 700,000 people and left as many as 10,000 dead. In a sign of the growing sophistication and impact of cybercrime, a 2021 ransomware attack forced the temporary shutdown of the 5,500-mile Colonial Pipeline, which supplies the U.S. East Coast, causing gasoline and jet fuel shortages that triggered a rise in gas prices.

World leaders like President Joe Biden, UN Secretary-General António Guterres, and European Commission President Ursula von der Leyen have all called for global digital governance based on human rights and democratic principles. Intergovernmental bodies, such as the United Nations, the OECD, UNESCO, and the G20, have digital governance agendas and are working to establish governing frameworks. Add to this dozens of multistakeholder and civil society initiatives, as well as those by tech companies, to bring some semblance of democratic governance to digital spaces.

Yet governing digital technology presents novel and complex challenges. Identifying these is an obvious first step toward better understanding the democratic governance solutions likely to help manage them. For one, “digital technology” encompasses a broad and complex array of applications, tools, and use cases, the impacts of which differ from one geographic or socioeconomic context to another. Another, related challenge is the scale and pace of change. The World Wide Web has been around for only three-plus decades, but it has upended human ideas of what is sacred, what is right, what is wrong, what is just, what is true, and, more and more, what is real in many parts of the world.

Adding to the difficulty are the many types of stakeholders involved in digital governance. Though born from a U.S. Department of Defense initiative, the internet developed as a loose collaboration between companies, universities, nonprofits, and individuals. The private sector has been critical from the start, and today much of the digital world’s critical infrastructure and data is owned, operated, and kept safe by companies. This means that businesses especially, but also NGOs and other actors, end up, either intentionally or by default, playing a large role in digital governance.

Extreme concentrations of power in the digital domain exacerbate the challenges above. Around the world, governments or state-owned enterprises control much of the physical infrastructure of the internet, a situation that enables those with jurisdictional or territorial control to weaponize it or shut it off. Companies and even individuals also wield immense power. Decisions made by the entrepreneur Elon Musk regarding how and whether to provide Starlink satellite internet service to the Ukrainian military in its defense against the Russian invasion have had a significant impact, at first in supporting Ukraine and later in hampering it. Owing to the logic of network effects, the platforms and content on the internet are largely controlled by a handful of firms. In 2020, the top five tech titans accounted for 20 percent of the U.S. stock market’s total worth, and almost 3 billion people used Meta-owned social media applications.

Expecting governments to regulate the digital domain according to democratic principles has proven to be wishful thinking. Even excluding autocracies, which frequently use digital tools to surveil, control, and abuse their populations, many governments with democratic political systems also act undemocratically in digital space. In democracies ranging from Brazil to India to the United States, government leaders and their organizations have used social media to spread disinformation, organize anti-democracy protesters, and undermine democratic processes.

Even when they are well-meaning, government representatives often struggle to understand digital technology. Witness the questioning of Meta CEO Mark Zuckerberg by befuddled U.S. Senators, or Supreme Court Justice Elena Kagan’s quip during oral arguments in the content moderation case Gonzalez v. Google that “[Supreme Court Justices] are not the nine greatest experts on the internet.”

The problem goes deeper, however. In her book Recoding America: Why Government Is Failing in the Digital Age and How We Can Do Better, Jennifer Pahlka, former U.S. Deputy Chief Technology Officer and founder of the U.S. Digital Service in the Obama White House, notes that U.S. government agencies are unable to attract staff—such as programmers and technologists—who understand the cutting edge of digital technologies. The exception is the U.S. Defense Department, whose budget is sizable enough to compete with private companies when recruiting such personnel.

Lack of digital literacy is also an issue for citizenries. We all eagerly embrace digital tools but often understand little about how they operate, the data they generate, and how this data is used. This means that people often facilitate undemocratic practices—such as by unwittingly spreading Russian-created disinformation in the 2016 U.S. presidential election. Democracy depends on citizens' ability to hold the government, corporations, and other power centers to account. But if the public doesn’t understand how exactly a government or tech company is violating their rights or abusing power in the digital domain or with digital tools, then it will have no basis on which to act.

Artificial intelligence, or AI, stands to compound this problem. Many AI systems are “black boxes” in which the training data, inputs, or operations aren’t visible to users or outside parties, such as regulators or civil society watchdogs. The problem doesn’t end there. As AI becomes more advanced, it may be not just difficult but next to impossible to make sense of what the machine is doing.

A final challenge is the novel properties of digital technology, and the political economy that springs from that novelty. It is tempting to apply analogs from the physical world and to repurpose them for governance purposes. Existing regulatory standards and concepts that draw on common law or civil law traditions may not neatly apply. Two examples are intermediary liability and the consumer welfare standard for antitrust regulation. Moreover, many governing models and frameworks are predicated on material, territory-bound concepts, such as sovereignty, national identity, and citizenship. The digital domain is a new landscape detached from physical geography. Are legal and economic frameworks based on existing conceptions of sovereignty, ownership, and citizenship applicable in such a non-physical space where borders are porous or nonexistent and people can adopt multiple identities?

How Governing the Digital Wild Is Getting Done Today

With these challenges in mind, workshop participants examined different instruments—whether initiatives, platform designs, networks, legislation, voluntary agreements, organizations, or others—that employ democratic means to govern digital technology and spaces. We identified three approaches to democratic governance—the gatekeeper model, the public participation model, and the balance-of-power model—and we examined how each informs the others.

Governments in Open Societies

Democratically elected governments have become more active in regulating the activity of commercial actors in and on the internet over the last decade. This is true at the subnational, national, and international levels. Generally speaking, governments take two approaches: regulation (hard law) and creating voluntary standards (soft law). When it comes to the former, many jurisdictions are enacting regulations in line with the gatekeeper model of democratic governance.

The most common form involves imposing some form of liability on internet companies for the content that appears on their platforms or websites. Several national governments have passed legislation to this effect, including Germany, the United Kingdom, and India. Subnational governments have also introduced measures. California passed a law requiring social media companies to publicly post their content moderation policies and report enforcement data to the state’s attorney general. The most far-reaching and comprehensive of these is the European Union’s Digital Services Act, which establishes obligations for commercial actors to enhance transparency, curb illegal content, and regulate advertising to create responsible platforms free from manipulative design.

Another hard-law approach is based on the balance of power model of democratic governance. Antitrust or competition regulation, which aims to use the power of government to diminish concentrations of private power, is an example of this. Again, the European Union has gone the furthest, using the size and heft of the European market to check anti-competitive and monopolistic practices by tech companies. The Digital Markets Act, adopted in 2022 alongside the Digital Services Act, aims to explicitly reduce the gatekeeping power of large online platform companies.

Using soft law approaches, governments create guidelines, codes of conduct, and other standards for behavior that tech companies voluntarily agree to uphold. In a sense, such approaches could be considered in line with the participatory model of democratic governance, whereby companies are viewed as citizens in a larger polity and invited to act in a way that furthers the greater interests of the whole. By way of example, the UN Guiding Principles on Business and Human Rights (UNGPs) and UN Special Rapporteur David Kaye’s guidelines on content regulation offer companies frameworks with which to inform their decision making.

Similarly, in 2018, the EU introduced the Code of Practice on Disinformation, voluntary commitments to reduce online disinformation among signatory firms. The Code, last updated in 2022, has 34 signatories, including major digital platform companies such as Google, Meta, and Twitter. In 2022, the White House Office of Science and Technology Policy issued an AI Bill of Rights, identifying guidance for the design and use of AI systems. At a minimum, these guidelines instruct companies on how to act in more rights-respecting and democratic ways. The Global Network Initiative, for instance, through its multi-stakeholder processes, has successfully pushed many social media companies to create human rights teams and integrate human rights principles consistent with the UNGPs into their decision-making. But the problem with soft law mechanisms is that some rely on the good faith of company signatories, which often operate under the perverse incentive structures created by market competition. And as some commentators have argued, companies cannot be expected to meaningfully regulate themselves—especially in ways that cut into profits or that undermine core business models.

The hard regulatory approach also has downsides. Workshop participants noted that it can unnecessarily consume company time and resources and stifle innovation. It can have other unintended side effects as well, such as undermining more flexible and agile forms of self-regulation better able to respond to the rapid pace of change in the digital realm. And uncoordinated laws by different governments can create a global labyrinth difficult to navigate, particularly for smaller transnational companies.

Companies in a Globalized World

There are two main ways that tech companies exercise some semblance of democratic governance over the digital domain. The first is by creating governing bodies or mechanisms that approximate and borrow democratic principles and practices, and the second is in the design of platforms and applications themselves. For example, the Facebook Oversight Board, a panel of former political leaders, activists, and journalists that is paid by an independent nonprofit foundation, rules on appeals of content moderation decisions made by Facebook’s parent company, Meta.

The Oversight Board operates in an environment where it strives to be responsive to billions of users all over the world. It uses the UNGPs and UN Special Rapporteur David Kaye’s guidelines on content regulation as the framework for content moderation decisions. It applies established tests to determine the applicability and fairness of content moderation standards: a legality test (is the ruling clear and compatible with rulings by such bodies as the UN Human Rights Council?), a legitimacy test (does the ruling reflect the public interest, not a corporate interest?), and a necessity test (is it using the least intrusive tools?). As such, the Oversight Board offers redress through more democratic governance mechanisms than opaque or inconsistently justified procedures at other platform companies, such as at Twitter.

At a glance, the Oversight Board is a typical democratic gatekeeper. But as workshop participant Swati Srivastava has noted, instead of elected officials who act with the consent of the governed, the gatekeeper is selected by an unelected, relatively unaccountable corporation. Moreover, it makes decisions about the limits of free speech without delegated authority from national sovereigns. In its exercise of authority over global populations, the Facebook Oversight Board engages in what Srivastava calls “private polity-making.”

Companies also affect democratic processes through the choices they make about the design of their platforms. In many cases, companies exercise governance by algorithm, by which algorithmic parameters shape the user’s experience. For democratic governance, the central question is, does the architecture of the platform gather people in such a way that they understand themselves to be part of a community in which they have opportunities for meaningful participation?

Two examples illustrate the contrast between design that enables such participation and design that does not. First is Facebook, an app with an individual-centered interface that discourages users from seeing themselves as part of a democratic public. The structure of the Facebook social network is dyadic, meaning the user connects with friends one by one, building a community out of discrete links. Groups are closed and private. The algorithms driving each user’s News Feed select for homophily, not diversity. The space itself is also homogenous, designed by Facebook with no opportunity for users to shape it. Overall, the user on Facebook is encouraged to see herself as an atomized individual.
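The stakes of that design choice can be sketched with a toy ranking example (entirely hypothetical; the function names, similarity scores, and logic are ours, not any platform's actual algorithm). One ranker orders posts purely by similarity to the user's existing interests, selecting for homophily; the other reserves every third slot for the least-familiar remaining topic:

```python
# Toy illustration only: the functions, scores, and logic here are ours,
# not any platform's real ranking code.

def rank_homophily(posts, similarity):
    """Order posts purely by similarity to the user's existing interests."""
    return sorted(posts, key=lambda p: similarity[p], reverse=True)

def rank_with_diversity(posts, similarity, explore_every=3):
    """Same ordering, except every explore_every-th slot goes to the
    least-familiar remaining post, surfacing content outside the bubble."""
    remaining = sorted(posts, key=lambda p: similarity[p], reverse=True)
    feed = []
    while remaining:
        if (len(feed) + 1) % explore_every == 0:
            feed.append(remaining.pop())   # least similar remaining
        else:
            feed.append(remaining.pop(0))  # most similar remaining
    return feed

# Hypothetical similarity scores between one user's interests and four topics.
similarity = {"politics": 0.9, "sports": 0.7, "gardening": 0.2, "opera": 0.1}
posts = list(similarity)

print(rank_homophily(posts, similarity))       # ['politics', 'sports', 'gardening', 'opera']
print(rank_with_diversity(posts, similarity))  # ['politics', 'sports', 'opera', 'gardening']
```

Even this crude diversity rule changes what the user encounters; the point is that such parameters are design decisions with civic consequences, not neutral defaults.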

A contrasting example is Reddit, an online platform that starts with the community. A user on Reddit does not connect to other individual profiles, but instead joins open, topical communities called “subreddits” where strangers share information and opinions related to a particular topic—politics, architecture, professional basketball, houseplant care, and countless others. Each subreddit is governed according to norms that are created and enforced by volunteer moderators who themselves are users. Anyone can provide input and engage in deliberation, discussion, and decision-making. Users have even been able to influence the corporate policies of the company itself. The overall experience is one that encourages participatory democratic engagement among users.

Civil Society Amid Democratic Backsliding

Not all governance is about making and enforcing rules. Championing issues, shaping agendas, educating stakeholders, and monitoring commitments are all part of governance processes and express different democratic commitments. It is in these ways that civil society organizations contribute to democratic digital governance, both by encouraging public participation and by drawing citizens together to influence governments and companies alike. Increasingly, however, they are doing so amid democratic backsliding and the use of technology to crack down on challenges to the elite status quo.

There are dozens if not hundreds of civil society initiatives committed to fostering an open, secure, innovative digital domain. Ranking Digital Rights evaluates the practices of the world’s most powerful tech and telecom companies and their effects on human rights. Their research and scorecards provide important tools for monitoring company behavior on the basis of democratic norms. The Public Interest Technology project encourages digital literacy, enabling young people especially to play a greater role in ensuring technology serves the public interest. C Minds promotes digital policies, ethical frameworks, and initiatives in Latin America. The Digital Impact and Governance Initiative works to catalyze solutions for digital public infrastructure. Their identification of people-friendly protocols and standards fosters innovations that increase participation and also uphold democratic norms. Citizen Lab shines a light on digital surveillance and repression, generating capacity to counter abuses of power. ICT4Peace undertakes policy research and develops guidance tools to shed light on challenges posed by ICTs as well as their use for peaceful purposes. The Open Technology Institute connects researchers, organizers, and innovators across a broad array of concerns to foster equitable access to open and secure digital technology, thus enhancing participatory and norm-respecting action.

Civil society organizations can also help address global power imbalances. Rich nations in the Global North dominate the governance of digital technology. They have the most influence in international governing bodies, and they tend to define the frameworks and goals of digital governance. Poorer countries in the Global South, as well as historically marginalized and disenfranchised populations, often have less voice and influence. They also face different impacts from technology and might have different governance goals. Civil society organizations that represent Global South concerns can bring greater attention to priorities and risks such as environmental sustainability, accessibility, and various forms of bias.

Multi-Stakeholder Initiatives in a Multilateral Environment

Lastly, multi-stakeholder initiatives (MSIs) bring together governments, companies, and civil society organizations for digital governance. These emerged as the primary governance form for the internet, given its genesis as a decentralized collaboration, in which all these actors played a role. Perhaps the best-known and most consequential MSI is the Internet Corporation for Assigned Names and Numbers (ICANN), a nonprofit whose mission is “to ensure the stable and secure operation of the Internet's unique identifier systems,” such as the domain name system (DNS), without which the internet as we know it would cease to function.
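To make concrete what those identifier systems do, the sketch below (using only Python's standard library; `resolve` is our own helper name, not a standard API) asks the operating system's resolver, which ultimately depends on the DNS root that ICANN stewards, what addresses a hostname currently maps to:

```python
import socket

def resolve(hostname):
    """Return the sorted, de-duplicated IP addresses a hostname maps to,
    as reported by the system's DNS resolver."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # Each entry is (family, type, proto, canonname, sockaddr); the
    # address string is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

# Loopback resolves locally; a public name like "icann.org" would
# return its public IPv4/IPv6 addresses via the global DNS.
print(resolve("localhost"))
```

Every such lookup silently relies on the stable, consensus-managed namespace ICANN coordinates; if that coordination failed, names and addresses would stop agreeing across networks.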

The structure of ICANN has shifted over time. It was established in 1998, when the U.S. Department of Commerce delegated responsibility to it for managing the DNS and keeping the internet running smoothly in a “bottom-up, consensus-driven, democratic manner.” In the wake of the Snowden revelations around the U.S. National Security Agency’s surveillance activities, the Obama administration allowed U.S. government supervision of ICANN to expire. Governments still participate as stakeholders, but ICANN is now incorporated as a financially independent California-based nonprofit that includes businesses, nongovernmental organizations, and academics working alongside governments.

Though many workshop participants agreed that ICANN has been successful, there was less consensus about its democratic qualities. Some thought ICANN’s capacity to keep the internet functioning and its narrow focus on technical issues minimized its susceptibility to politicization, even though governments are participants. Others pointed out that the ICANN we see now is very different from the body established in the 1990s, and that those who shared the libertarian spirit behind ICANN’s creation might cringe at the role it now plays in the profits of large multinational corporations. ICANN’s success may be less attributable to its abstract qualities and more to its pragmatic capacity to adapt. Its shifts and adjustments have continued to generate deference and legitimacy but also critiques, and its operations do not clearly reflect any of the democratic models we outline.

Another MSI of note is the Global Network Initiative (GNI), an organization composed of a broad range of tech companies, civil society organizations, investors, and academics that seeks to influence governance on the internet. Since 2008, the GNI has developed and published normative frameworks, based in large part on the UNGPs, such as the GNI Principles on Freedom of Expression and Privacy and the Implementation Guidelines that aim to define responsible corporate responses. As such, the GNI plays a "gatekeeper" role, influencing those who police the digital domain. At the same time, its members seek to encourage public participation in and knowledge of digital governance. Workshop participants argued that its success has hinged on how it has evolved in response to regulatory developments—similar to ICANN. Also important has been its ability to focus on relevant issues and the willingness of funders (both members and non-members) to value the space it creates for information sharing, trust building, and collaboration.

A similar but narrower initiative is the Digital Trust and Safety Partnership (DTSP), which focuses on advancing content-agnostic best practices in trust and safety. It takes inspiration from industry frameworks and international management standards, such as those developed through the International Organization for Standardization (ISO). It also relies on a gatekeeper logic of democracy seeking to both align stakeholders around practices that manage digital harms and bring policies in line with international human rights law. It engages stakeholders through participatory processes to generate a framework for avoiding conflicting regulatory regimes that pose risks to both innovation and human rights.

Other initiatives aim to affect governance on the internet by fostering democratic deliberation. These include the UN’s Internet Governance Forum (IGF) and the Geneva Internet Platform (GIP). These broad initiatives are helpful in bringing together stakeholders and facilitating open discussions. They thus contribute to participatory engagement that may lead to governance initiatives by others.

Takeaways and Implications

Several key insights from the workshop carry implications for advancing democratic principles, approaches, and practices in the digital realm.

It is useful to draw on pre-existing frameworks and analogs from the physical world, but buyers should beware. Though the novelty of the digital world makes it tempting to start from scratch, effective governance is often rooted in existing legal and human rights frameworks and norms. As the scholars Martha Finnemore and Duncan Hollis note, norms are “social creatures that grow out of specific contexts via social processes and interactions among particular groups of actors.” Norms for cybersecurity, algorithmic decision-making, and other aspects of the digital domain are more likely to gain traction if they emerge from frameworks that are already there. Still, there are some models that simply won’t map neatly onto a rapidly changing domain that has few physical boundaries and that consists of distributed systems. That means we’ll have to think more deeply about what we mean when we’re talking about sovereignty, citizenship, and identity in the digital realm.

Digital democratic governance will benefit from greater interaction among different instruments. Just as the internet is a network of networks, so too should the global regime governing it be networked. We can learn from how existing democratic governance instruments already interact and may be well served by identifying opportunities for greater connection and learning among them. One scholarly analysis of the highly fragmented AI global governance regime found that greater centralization would make the system more efficient and politically powerful, but that locking in a centralized architecture that was inadequate would be even worse than fragmentation.

When it comes to regulation, one size does not fit all and blended, process-based approaches are often more productive. Sensitivity to context is important for digital governance. Rigid, blanket regulations can create unintended consequences. A focus on processes rather than output-based prescriptions can be more productive. For instance, when it comes to content moderation decisions, specifying procedures and principles is more nuanced and effective than blanket restrictions on certain types of content. Similarly, soft law can generate buy-in and then can be hardened using available instruments in different locales. When the EU brought the Digital Services Act into force, for instance, some of its requirements were already present in the voluntary Code of Practice on Disinformation, so many companies were already aware of their obligations under the regulation.

Neither private companies nor states are unitary actors. Large institutions, both private and public, are not monoliths. In a scholarly analysis of Facebook (now Meta), Chinmayi Arun examined the ways in which the company engaged differently with different external actors and how teams in the company had divergent, sometimes competing priorities. Those who regulate or otherwise govern large institutions should be aware of how internal politics can shape adoption and implementation of regulations and practices. Multi-stakeholder initiatives can capitalize on these differences to create alliances across subgroups in governments and companies more attentive to democratic practices.

Tensions are inevitable. The major objectives of digital governance—security, innovation, access, and human rights—can be in conflict with one another. Protecting human rights might curb innovation; strengthening security can limit access; and so on. And the gatekeeping, participation, and power-balancing approaches can also conflict. Policing democratic norms may weaken opportunities for participation, for instance. Though unsettling, such tensions are unavoidable. Charting processes and mechanisms that help mediate these tensions in particular settings will be important for identifying paths forward.