AI for the People, By the People

Article In The Thread
[Image: A handshake between a human hand and a robot hand on a teal background. Credit: New America / SvetaZi on Shutterstock]
Oct. 10, 2023

Americans find themselves surrounded by artificial intelligence (AI) across all facets of life: AI serves as a customer service representative, writes emails, generates new personas, and more. As the adoption of AI grows, so too do the concern and excitement around the controversial technology. AI raises complex ethical, economic, and privacy questions that force us to ask how this family of emerging technologies should be governed.

If we want AI to be anchored in the public interest — and designed to serve all residents — we’re doing it wrong, as Kat Knight argues. Instead of a focus on what AI can do, we should be asking what humans actually need it to do, and govern it accordingly. That will require more than rules set by private companies and existing public agencies. It needs to bring the people who will be affected by AI into the debate through participatory structures that engage residents, civil society, government, and philanthropy. In the AI revolution, the “missing dimension,” as Harvard computer scientist Barbara Grosz argues, is people and society.

Despite the interdisciplinary nature of AI and emerging technologies, governance decisions have often remained in the hands of private companies or governmental bodies. More recently, labor unions and professional associations — including those that protect actors and writers, pilots, and doctors — are fighting to be heard in discussions around how AI is implemented in their industries. There is currently no universally accepted model for regulating AI, and the careless adoption of AI, with little regard for its potential harms, is connected to the lack of civic participation in its design and deployment. We already have tools and successful models for engaging residents in decision-making. Given the well-documented substantive harm from AI, including racial bias in facial recognition software and the proliferation of deepfakes, the current moment is ripe to deploy these models to address the most urgent questions related to AI and its governance.

One emerging model for governance of this technological revolution that would involve communities is citizens’ assemblies, in which a randomly selected representative subset of the relevant population assists in decision-making for the community. Forms of this collaborative governance model have already popped up across the United States and are a well-established method internationally. In Petaluma, California, a citizens’ assembly brought together 36 lottery-selected citizens over a three-month span to determine the future of a piece of the city’s public land.

Similar frameworks have already been tested in an AI context, providing citizens with the opportunity to gain a clear understanding of the technology’s impacts and benefits. In Colombia, the government adopted a collaborative governance approach to design their Ethical Framework for Artificial Intelligence. The process involved hosting several roundtables around the first draft of the document, allowing diverse sectors of society to share their recommendations. International NGOs, students, industry representatives, and more were represented in these discussions — and their conclusions were implemented into the framework.

Similarly, participatory budgeting is a global practice in community-level decision-making, which started in Porto Alegre, Brazil, and now has spread across the globe. In the United States, many cities are leveraging participatory budgeting, including with federal dollars and through work with the Participatory Budgeting Project. In late 2022, for example, Mayor Justin M. Bibb of Cleveland, Ohio, announced the city’s priority of creating a Civic Participation Fund for community residents to have a voice in how millions of dollars are spent throughout the city.

One of the key elements of participatory budgeting is that it provides hands-on civic learning for residents on making real tradeoffs between policy decisions. This is why participatory budgeting could serve as a paradigmatic tool for AI governance. For example, one could imagine a participatory process for allocating funds for AI implementation in democratic life — one that requires tough tradeoff decisions while also offering a genuine channel for building civic voice and civic power.

The questions surrounding AI today are human questions, not just technology questions. This is why Hélène Landemore, Andrew Sorota, and Audrey Tang envision a Global North-South citizen assembly to govern AI. By incorporating citizen voices into the process, this new model of governance becomes “not only key to reining in AI — it will also set an important precedent for how to manage other twenty-first-century issues such as climate justice.”

Overall, these structures have a few key components that make them well suited for engagement around critical AI governance challenges. First, they give community members — including those traditionally underserved — a genuine seat at the table. Second, they involve real policy tradeoffs and offer opportunities for citizens to learn how policy gets made and how different stakeholders might be impacted. Finally, they are tied to decision-making and policy outcomes. It is not enough to simply bring people together for the sake of discussion. Instead, we need processes that feed residents’ input directly into decision-making loops.

Given the large-scale impact of AI across every domain of our private and public lives, now is the moment to engage in the type of participatory and collaborative governance structures that will enable more legitimate, inclusive, and equitable decision-making. By engaging the public in these processes, we have the opportunity to build civic knowledge about the genuine tradeoffs, elevate the public narrative around AI, and mobilize a diverse group of stakeholders from across sectors. Given the rapid pace of new developments in AI, it is crucial that we mount a civic-led response so that AI works for the people rather than against them.

You May Also Like

Governing the Digital Future (Future Frontlines, 2023): A new report analyzes the global dynamics of power and governance in the digital domain.

In the AI Age, Data Literacy Should Be a Human Right (Planetary Politics, 2023): We must prioritize literacy as a basic cornerstone of functioning in human society, as we navigate our digital realities and leverage emerging technologies.

Revitalizing Civic Engagement through Collaborative Governance (Political Reform, 2022): This report provides in-depth case studies of co-governance models from local organizations and city governments around the U.S.


Follow The Thread! Subscribe to The Thread monthly newsletter to get the latest in policy, equity, and culture in your inbox the first Tuesday of each month.