Four Specialists Describe Their Diverse Approaches to China's AI Development

From All Sides, With Jeffrey Ding, Maya Wang, Paul Scharre, and Danit Gal
Blog Post
Jan. 30, 2020

This article was published as part of the Stanford-New America DigiChina Project's first special report, AI Policy and China: Realities of State-Led Development. Read this article and the others in PDF here.

Like “artificial intelligence,” a broad concept that engages numerous existing and so-far imagined technological, industrial, and social phenomena, the extended community of people around the world who study AI and the Chinese context is diverse. Specialists from a wide range of fields, whether previously focused on China or not, have found important developments at play in the country’s experience with machine learning, advanced automation, and data-driven technology.

In order to illuminate this diversity of interests at play, DigiChina reached out to four researchers who have engaged with China, policy, and AI from different perspectives. Jeffrey Ding has spent countless hours scouring, translating, and analyzing Chinese writings on AI, examining the intersection of nationality, technological reality, and politics. Maya Wang has published some of the world’s most illuminating documentation of the ways AI can be employed in authoritarian politics and surveillance technology. Paul Scharre engages from a military and national security perspective, in which Chinese and U.S. military modernization efforts raise tough bilateral, international, and ethical dilemmas. Finally, Danit Gal is deeply engaged with Chinese and East Asian efforts to think through the ethics and governance of AI, observing how national and institutional factors play out in public and industry discourse.

The writer and scholar Johanna Costigan interviewed all four, and their conversations were edited for length and readability. Affiliations are as of October 2019. –Ed.

JEFFREY DING, University of Oxford

Jeffrey Ding is a Rhodes Scholar and Ph.D. candidate in International Relations at Oxford, but might be better known for his “(sometimes) weekly” ChinAI Newsletter featuring translations of Chinese-language tech- and AI-centered texts. See his DigiChina contributions.

Describe your research focus related to China and AI.

I have always been interested in U.S.–China relations. I was born in Shanghai, moved to Iowa City when I was three, and got interested in the relationship through high school debate and IR issues. At Oxford, I kept coming across documents in Chinese media that were just not being picked up on. I realized that there’s a huge gap in translating documents; DigiChina is trying to fill that gap, but the gap also extends into informal analysis. And who are the DigiChina equivalents in China?

To what extent do you think market forces in China will continue to push forward AI in a way they couldn’t with biotech, and what consequences does that have for the CCP?

There are some areas where it might be justified to at least consider government intervention. Europe has no social media giants, because U.S. tech firms just dominated, and it’s very hard to displace the cumulative gains of the leader. In that sense you could say the Great Firewall’s economic protectionism was actually key to allowing China to have a competitive social media industry. You don’t want the market to completely dominate, because especially in the global market, the existing leaders will continue to prevail.

Do you think other researchers or commentators sometimes miss the data privacy debate within China? For example, we’ve heard a lot of people saying Chinese people don’t care about privacy.

In the two years I’ve been covering it, there’s definitely been a trend of more discussion about data privacy. Polls have come out saying that the great majority of Chinese internet users are concerned about AI and privacy. Some of it is lost in translation in the sense that the concept of privacy is pretty malleable and can mean different things. In the Chinese context, privacy protections are viewed solely in terms of data security, meaning companies don’t lose your data. I do think there’s a tendency in the West, because China is a place where censorship reigns and you have an authoritarian government, to just think that Chinese people are willing conspirators. There’s a dehumanizing component to this rhetoric.

How does China’s AI approach compare to other leaders in the field?

My core argument is that no one does AI evaluation well, because national AI capability is such a fuzzy concept. A slice of Beijing where it’s super high-tech might be much more advanced than a slice of Iowa. I looked at input and output—patents, publications, talent numbers. We also have to look at different aspects of the AI value chain. Sometimes we only see the sexy product applications, but there’s also the technology layer and the foundational layer; it’s Google and Facebook building fundamental architectures. When comparing different countries’ AI abilities, it’s probably more useful to clearly specify what you’re trying to compare.

Are there any particular Chinese texts that people should pay attention to?

Probably 50% of what I translate comes from these new science and tech media platforms that mostly push their articles out on WeChat. There are about 10 of these that cover AI as one of their main areas of focus, so that’s definitely a trend that I think more people should be aware of.

MAYA WANG, Human Rights Watch

Maya Wang is a China Senior Researcher at Human Rights Watch, where she investigates issues including China’s social credit system, protests, surveillance, and more. She is currently based in Hong Kong.

How would you describe your research focus?

I cover a range of human rights issues, from the use of torture to Xinjiang to Hong Kong to mass surveillance. We focus on different areas in response to the situation on the ground. Three or four years ago, activists started informing me about the social credit system. That threat remains in the background, and I’m still interested in it, but other means of surveillance were also present in Xinjiang, which formed my interest in mass surveillance.

Would you say the extreme methods described in your HRW report “China’s Algorithms of Repression” are indicative of the CCP’s paranoia about losing control of China's people, a particular bias directed at Uyghurs based on a combination of discrimination and reactions to the riots in 2009, or evidence that the government hopes to expand these practices beyond Xinjiang and is using Uyghurs there as a particularly extreme test case?

A bit of a combination. The use of low-tech mass surveillance has been part of the CCP since its founding as a party. They set up systems like hukou [tying people’s privileges and obligations to a hard-to-change locality of registration], danwei [work units that can shape far more than a person’s job], and the dang’an political file system. These were old-fashioned ways of controlling people, and when the party transitioned to a market-based economy in 1979, it quickly realized this posed a problem, because people were working in private companies over which it had no control.

So the rhetoric became more extreme after that point, and they started using technology to augment control. The mechanisms were built over time, but the motivation was very similar. In 2000 the Chinese government enlisted foreigners’ help to launch the Golden Shield project [also known as the Great Firewall].

Western critics often describe the crisis in Xinjiang as an instance in which the Chinese government is using surveillance tactics “against its own people.” What nuance would you add to that assessment given the ethnic and cultural distinctiveness of Uyghur people? And what are some troubling examples of surveillance technologies targeted at all Chinese citizens regardless of ethnicity?

First of all, Xinjiang is an important example of how the human future could possibly look. It’s not limited to that part of China or even China itself. You already see the collection of biometrics being used in other countries, including and in particular in the United States, where laws have not caught up with the technology. That collection is being centralized and used in violation of human rights, particularly those of the most vulnerable populations. Recent news would suggest the Trump administration wants to target immigrants via the collection of DNA and, through big data, to very invasively trace people’s movements.

In Xinjiang, the targeting of minorities and then the spread of these methodologies to the majority is concerning for all of us. The way the CCP is targeting Xinjiang offers a new model of social control. It is not the one-size-fits-all model that the CCP and other oppressive governments have tended to use. The collection of biometrics and real-time monitoring, while allowing some life to happen, ensures that there is a greater system of punishment and reward, making sure that those who are thinking undesirable thoughts against the government are controlled in a more extreme manner.

In the rest of China, we have documented the “police cloud,” which has some similarities to the [Xinjiang-focused] Integrated Joint Operations Platform, though it is less intrusive. The police cloud also tracks and predicts dissent, and involves the mass collection of DNA and other biometrics from ordinary people not connected to crimes.

Is there any validity in Chinese officials’ demands that the United States should stop “interfering with the internal affairs of China”? Even if we could get it back, would a highly involved America be the best approach to curb these behaviors?

International human rights standards are for all governments and all people everywhere. The Chinese Constitution protects religious freedom and expression. The Chinese government’s argument that criticism of human rights is interference in domestic policy is one of those convenient arguments used to silence the world’s criticisms. It has no validity at all, and what’s more concerning is that no governments are taking severe actions against what’s happening there. The U.S. recently blacklisted 28 entities that are contributing to human rights abuses, but it needs to do more through implementation of the Magnitsky Act. We have too little leadership in the world today standing up for these rights.

Are there any particular Chinese texts or sources that people should be paying attention to?

For my own research, I read a lot of police accounts and technology companies’ accounts on Weibo or WeChat. I read a lot of government documents as well, and there is a lot of material they just put online; a lot of information is publicly available.

PAUL SCHARRE, Center for a New American Security

Paul Scharre is a Senior Fellow and Director of the Technology and National Security Program at the Center for a New American Security. Previously, he worked in the Office of the Secretary of Defense, where he played a key role in establishing policies on autonomous systems and emerging weapons technologies. He is the author of Army of None: Autonomous Weapons and the Future of War.

Talk a little bit about your career path and how you got interested in AI.

I got interested in military robotics when I was overseas in Iraq. I remember a very clear moment when I came across an IED. We discovered it before it exploded, and we had an explosives team come out to take care of it. I was expecting someone in a big bomb suit like you’d see in the movie “The Hurt Locker,” and instead it was a robot. I worked on military robotics issues at the Pentagon after I left the Army. And one question that kept coming up was the role of autonomy in weapons systems.

In your view, is the “centaur” approach (the ability to “successfully marry human and machine intelligence into joint cognitive systems” as you define it in a Fall 2018 Foreign Policy piece) ideal? What is the best possible outcome of developing automated weapons?

It’s an optimal way to combine the benefits of both human and machine decision-making, which have different attributes and advantages in different settings. One of the challenges is how do you build joint cognitive architectures that combine the benefits of the speed and reliability and precision of machines with the broad and more flexible capacities of humans? Humans can apply judgment and context, which machines can’t do today.

There are advantages to using machine intelligence in warfare, from the standpoint of reducing humanitarian harm and complying with standards.

There’s often an assumption that fully autonomous weapons with no human involvement would be better from a military effectiveness standpoint. But the most effective militaries will be those that combine human and machine decision-making on the battlefield. The challenge is that the optimal blend of the two is going to change over time. We do not appear to be anywhere near sci-fi levels of AI capability at this time. There are good reasons to think we want humans involved in these decisions for quite some time.

In the same article, you write about the potential catastrophe of humans “ceding effective control over what happens in war” and compare it to the power of algorithms controlling the stock market. It seems clear that practitioners in the field of finance have been willing to take AI risks. Given the “arms race in speed” you point out, is your assessment that defense and military officials will be as willing?

I think stock trading is an interesting comparison: it’s competitive, there’s an advantage in speed, and adversaries are not going to trade intel on how their algorithms function. It’s an important cautionary tale as militaries look at this technology. Automation introduces risk in novel ways because of its scaling effects, in the number of incidents that might occur as well as their speed. You can have an accident that spirals badly out of control and has widespread effects in ways that are not possible with people. Human traders would not have been able to make all those mistakes as quickly.

I really do think that defense organizations underestimate the risks of accidents with their own systems and are not adequately prepared for thinking about emerging technologies that might have very dangerous consequences.

If machine learning requires environments that are more stable than war zones, how can we give machines the chance to learn? Should we?

As machine learning systems overall come out of research labs into society, there are all these incidents where they don’t function well in the real world because the training data is not robust enough or doesn’t accurately reflect the situations they are put in. Thankfully, war is very rare. This means we don’t have extensive data sets on what war looks like. For militaries, it’s like training a sports team to play a game once in a generation where the rules are constantly changing and the consequences are life and death.

Giving up on machine learning altogether would mean giving up significant advantages in both reducing civilian harm and military effectiveness. Machines can make some decisions in the real world and we anticipate some failures, but the autonomy needs to be bounded so that those failures are not catastrophic. The military is an inherently hazardous environment.

In a recent Foreign Affairs article, you point out that China has already begun developing a system of digital authoritarianism, via facial recognition, predictive policing, and other methods. What is the connection between digital authoritarianism and autonomous weaponry? Could one beget or normalize the other?

It’s conceivable that the technologies that would be matured through widespread surveillance like facial recognition could have dual-use applications in military settings, and that’s of course troubling as well. But I'm far more concerned with how authoritarian regimes are directly using the technology, including the lack of a system of checks and balances in place to manage that use.

What other current events are relevant to China and AI development?

China released two position papers at the UN meeting on lethal autonomous weapons and they basically said they endorse a treaty prohibiting the use of lethal autonomous weapons, although not their research and development, which is a significant loophole.

The really interesting thing about it was that they laid out these five key attributes that describe what constitutes an autonomous weapon; there’s nothing that would meet these definitions. There’s been a lot of speculation about whether this is a genuine olive branch toward some kind of arms control or an exercise in lawfare—a strategy to use international law to constrain other actors. Of course, while China is doing this they're engaging in systematic human rights abuses using AI technology. So there is certainly a disconnect.

Are there any particular AI texts that people should pay attention to?

I think that Elsa Kania’s report Battlefield Singularity regarding China and AI is the best thing to read on Chinese developments in military AI.

DANIT GAL, Keio University

Danit Gal is from Israel, but these days she doesn’t stay anywhere for too long. Danit reads, writes, and speaks about topics relating to AI and ethics, particularly in China and the rest of East Asia. She has degrees from the University of Oxford and Yenching Academy, and travels around the world engaging with people on tough questions surrounding responsible AI development and implementation.

Talk about how you got here, your path toward China and AI research, and your focus now.

I was headed towards a DPhil in cybersecurity at Oxford, and narrowly escaped it—packed my bags and moved to China for the Yenching Academy at Peking University. I learned Mandarin Chinese at Oxford during my master’s at the OII [Oxford Internet Institute], but it was very clean Mandarin. It took me a good while after getting to Beijing to understand that the er [a key sound in the capital’s local accent –Ed.] worked like magic.

I was involved with Tencent from the start, and in Beijing I started mingling with other companies like Baidu and Alibaba. Being affiliated with both Peking and Tsinghua universities was a valuable asset that allowed me to reach out to and engage many companies. AI was the natural trajectory since everyone was so excited about it, and 2016 was a good time to dive straight into it before the market exploded with hype. Right now, I’m focused on understanding how AI ethics and governance play out at the national level among key AI actors, and to that end I’m trying to better familiarize myself with the complex landscape of countries I consider key for the future of AI.

Your work centers on the relationship between a country’s cultural context and AI. In the case of China, what is one cultural misunderstanding you’ve observed that leads to an inaccurate assessment of AI policy or planning?

When people read the New Generation Artificial Intelligence Development Plan (AIDP), they have a reaction like “this is so terrifying, China wants to control the world.” This is often linked up with Putin’s mis-contextualized comment on how those who control AI will control the world. Maybe—maybe not. In some ways I can see how this policy is perceived as threatening, but the Chinese researchers and practitioners I’ve engaged with have a very different perspective. Some of them seek to develop consciousness simply because they think AI without consciousness is more dangerous, because it doesn’t care about humans, not because they believe this will make them superior to others. People tend to take China’s government policy at face value with a devilish spin because it serves their interests.

In your recent paper “Perspectives and Approaches in AI Ethics: East Asia” you put China’s attitude toward AI, particularly robots, in context with its East Asian neighbors. (AI as a partner in Japan, as a tool in South Korea, and China in between.) What distinguishes these attitudes, and what role do traditions play in forming them?

An example of the “tool” categorization would be Google Maps. It’s purely functional. The partner view leans more toward AI that has a humanoid voice or appearance, or assumes some kind of human attribute that makes it more accessible and approachable. This ends up blurring that line. You could call Siri functional AI if it weren’t the most sexually harassed “woman” in the world. That’s why this spectrum is important: we need to understand how people use AI that was designed to be used as a tool but with the interface of a partner.

South Korea has an interesting approach which I think is spot-on: divide responsibilities among users, developers, and providers. Everyone gets a share of rights and obligations—it’s a human team effort. Japan has a long-standing heritage of partnership with technology, which informs society; the Japanese government’s concept of Society 5.0 promotes co-development and co-existence with AI. China is in the middle of that spectrum.

Can you talk about how Buddhism has contributed to perceptions of AI?

Both China and Japan have a long-standing Buddhist heritage. Both countries have Buddhist robot-monks that help worshipers engage on a deeper level. The key contribution Buddhist beliefs have made to the idea of AI as a partner is the belief that everything, living or not, has the potential to become the Buddha by being cultivated to reach enlightenment. It’s one of the most ancient instances of techno-animistic beliefs. This lends inanimate objects a special place in society, and the popularity of robots or AI systems designed to resemble humans or animals benefits greatly from that.

On the tool end of the spectrum you have South Korea, with a very strong Confucian heritage. You have a very clear idea of where you belong and what you should aspire towards. This resonates very strongly with the idea of anti-social development—technology that does not interfere with society.

In Japan, Society 5.0 promotes co-development and co-existence. Its Shinto heritage also contributes to deeply-embedded techno-animistic views. When you talk to people about what inspired them, everyone will refer to the same religious influences and popular culture undercurrents. It’s not that every single person will say, “I dream about having a partner in AI,” but when you talk to them, they say, “Why not? I grew up with these ideas and they seem nice.”

Do you believe AI development is an issue that requires global cooperation? Should there be global standards on research and implementation of AI?

There are global actors shaping up to do exactly that, but they have not been successful so far. We have so many actors on the scene claiming that they’re starting international AI policy but not actually doing it. Even if they’re inclusive in name, developed countries have the time and capacity to participate, while developing countries are still focused on electricity and running water. Most of the time, these global actors have a hard time getting international representation, and even when they do, implementing a wish-list of principles locally proves much more challenging than reaching consensus among 40-plus people. Standards can be a tried-and-tested path for that, but they take time to develop on an international level. Part of my work is doing exactly that with IEEE P7009, and it’s very challenging.

Are there any particular Chinese texts or sources that people should be paying attention to?

Both the East and the West have movies and shows depicting robots as potential love interests. I’d recommend those to observe how culture interacts with policy and practice. People tend to underestimate the importance of culture, but even policymakers are affected by it. If you grow up thinking AI is going to go well for humanity, you tend to be more optimistic about it – it’s that simple. I’d also recommend paying attention to the gender dynamics. If you look at current development trends, you’ll see that every guy gets a robot girl but not every girl gets a robot boy. Humanizing AI doesn’t only make it accessible—and to some of us, scary—it also makes it inherently unequal.