In China, Planning Towards AI Policy Paralysis

How government plans, combined with political tightening, form a barrier to AI governance
Blog Post
Jan. 21, 2020

This article was published as part of the Stanford-New America DigiChina Project's first special report, AI Policy and China: Realities of State-Led Development. Read this article and the others in PDF here.

As President Xi Jinping and the Chinese Communist Party (CCP) exert more centralized and ideological control over legal institutions, the challenges of AI deployment across multiple industries and throughout society demand flexible and innovative responses. However, while the government’s top-level plans for AI advancement call for policy adaptations and taking a lead in global regulation, the same plans appear to be shrinking the space for policy innovation even further.

AI, with its broad applications and vague definitions, is proving challenging for legal regimes across the world. In China, the complex dynamics of regulating AI coincide with the CCP’s increased institutional and ideological control over legal institutions and the private sector. This combination is already undermining the Chinese legal system’s capacity to develop institutionally and, in turn, to respond to and regulate AI.

National Ambitions for AI Amidst Increasingly Centralized Governance

When it comes to understanding China’s bold AI-related declarations and actions, it is important to put them into this institutional context—to look beyond China’s stated ambitions into the more nuanced reality of how “AI” is being described and used within China’s political and legal institutions.

The Chinese system is defined in part by its political and legal centralization. Provincial and local governments, for example, do not pass legislation but rather “implement” laws passed by the National People’s Congress and regulations issued by the State Council. The CCP also utilizes a political ideology that emphasizes its own singular legitimacy and wisdom to govern China. This ideology does not exist only at the top, but rather spreads throughout the various bureaucratic and legal institutions across China. One routine characteristic of this system is the use of overarching plans to drive industrial and other important policy goals, and the headline-catching 2017 New Generation Artificial Intelligence Development Plan (AIDP) and its various local iterations continue this longstanding governance model.

The Chinese system has grown even more centralized in recent years, as the CCP has worked to entrench its formal powers over state institutions, including government agencies and courts, as well as over greater Chinese society. For example, a growing number of Party, regulatory, and court documents emphasize the “absolute leadership of the Party.” The CCP has also established a number of new Party organizations that are outside the formal state hierarchy and are therefore effectively “extra-legal,” or beyond the control or supervision of law, including the National Security Commission, which reports directly to Xi. The Cyberspace Administration of China, for its part, reports directly to the Party Central Committee.

Private companies, particularly tech companies, are also facing increased CCP interference and control. Tactics from buying company shares to requiring the establishment of Party Committees have, in the view of some analysts, allowed the CCP to “quasi-nationalize” private tech companies, transforming them into “state-overseen” enterprises.

This environment of increasing extralegal powers and personalized authority has exacerbated a bureaucratic paradigm that prioritizes political performance and loyalty, even over efficiency. Government agencies have reportedly responded to increased centralization and ideological control with fear and paralysis; when the correct way forward is unclear, sometimes it seems safer to do nothing at all.

China’s plans and stated ambitions for the future of AI are far from exempt from these trends of centralization and political discipline.

Chinese Governance of AI and Its Effect on Politics and Development

At first glance, China’s approach to the governance of AI appears similar to that of other countries. Other nations, as well as the OECD, have released similar “AI plans” and documents discussing the importance of ethics and principles when it comes to developing and deploying AI.

The difference is in the institutional details. Currently, the real substance of how AI is being governed across the world is not as much in the grand plans and pronouncements but rather in the particulars of how institutions and individuals affect the role of AI in their lives and communities. It is in this context that AI appears to be revealing and potentially exacerbating shortcomings within China’s political and legal institutions.

For example, while China’s 2017 AI plan is no more vague than any other national document discussing AI, it signals not only intent, but political control. It is as much an announcement to the world that China will lead in AI as it is to domestic institutions that the Party will rule AI and the future it is to power.

Combine the Party’s assertion of control over AI with its tightened ideological control overall, as well as the indeterminate breadth of AI as a concept, and such signaling could well exacerbate problems within government institutions. Overemphasis on “controlling” AI or “winning” the “AI race” could put further pressure on China’s institutions and reduce their regulatory flexibility.

Old-fashioned bureaucratic infighting could also stifle government innovation. At least fifteen central government agencies were involved in drafting the AIDP, and Chinese government agencies have a long history of infighting and competition. Given the complexity of AI as a legal concept and the political impetus to “win” at governing it, how are those institutions supposed to cooperate? Assigning responsibility to committees does not automatically lead to institution building.

Recourse to existing rules won’t cut it, either. There are some laws on the books that govern the use of algorithms, but many of them include idealistic, politically correct language that is difficult to implement. The 2017 Cybersecurity Law, for example, requires that network operators “respect social morality… and bear social responsibility.” As the People’s Daily has described, such language has done little to shape behavior in practice.

There is also a general dearth of regulation in a number of industries in which AI is being deployed. In transportation, for example, no national law regulates safety or other key issues related to autonomous vehicles (though there are notices requiring licenses for smart maps in such vehicles and for “internet enabled” cars, which appear to be cars with some internet-accessing features). Local regulations, too, are lacking (especially compared with the United States, where 40 states have enacted legislation and/or executive orders). “Smart” medical products and mobile medical apps are also largely unregulated so far.

Paths Not Taken and Not Available for AI Governance in China

It is, of course, impossible to say definitively why there is a dearth of laws. It is possible, however, that the complexity of AI demands regulatory flexibility and institution building, and that the CCP is currently placing heavy emphasis on bureaucratic and ideological control at the expense of that flexibility. The desire for more control does not automatically translate into institution building. In the current climate of institutional paralysis, government actors may lack incentives to attempt potentially risky legal innovations and may instead continue to stagnate.

The CCP is currently closing legal spaces across the board while simultaneously emphasizing the importance of governing AI. There is some evidence that political signaling in the AI space is taking precedence over realistic institutional creativity.

While local governments in China have a history of innovation in certain contexts, this dynamic appears to be declining. One unsigned commentary laments that, while information technology was supposed to make life easier for bureaucrats, it appears to instead have added more hurdles and vectors for political risk. AI, being both complex in a way that requires innovation and politically important in a way that requires signaling, is straining individuals within China’s bureaucracies.

One route taken in many other countries is less open to China. Because legislatures across the world have generally been slow to respond to the advent of AI, civil society organizations have played a large role in the nascent development of AI governance, acting as a spontaneous check on both government and private sector power. The AI Now Institute, for instance, has published several reports on different uses of algorithms and their impact on society. But President Xi has overseen a broad crackdown on civil society organizations, particularly law-oriented ones. As a result, civil society organizations within China lack the ability to fill the governance and public interest gap left by a bureaucracy lagging behind the technology’s development.

Courts within China have recently been institutionally innovative, but these innovations generally serve to insulate courts from criticism and political risk rather than to increase their authority or capacity to address complex issues. Given AI’s breadth and political import, Chinese courts might not only face unprecedented legal challenges in cases that involve AI; they might also face unprecedented political pressure to avoid any chance of hindering the CCP’s plans for AI. The courts are still worth watching, however: The Supreme People’s Court might issue guidance for lower courts on handling cases that involve algorithms. If such guidance is issued, it will be important to pay attention to how local courts apply it in cases involving algorithmic decision making.

AI Governance Dilemmas Have Broader Political Effects

The complexity and novelty of governing AI require space for regulatory flexibility and experimentation: the space to make mistakes and learn from them. As noted above, that is precisely the space the CCP is closing off.

More national drives for local governments to fund AI winners mean more money out the door, and many local governments in China already face potentially unsustainable levels of debt. AI projects are far from guaranteed to break this cycle, as many of the government-funded “AI startups” do not make much use of AI, and many suffer from unsustainable business models.

To understand China’s future AI governance and technological development, one must go beyond the stated principles and ambitions and observe the development (or lack thereof) of institutions. There are signs that the CCP recognizes problems of bureaucratic inaction and is looking for ways to improve local institutional capacity. Whether efforts to meet these challenges will prove successful remains an open question.