What We Can Learn From China’s Proposed AI Regulations

Article/Op-Ed in VentureBeat
Oct. 3, 2021

Spandana Singh wrote for VentureBeat about how the draft artificial intelligence regulations the Cyberspace Administration of China proposed in August can be considered a mixed bag of pros and cons, promoting more user privacy controls and transparency in algorithmic recommender systems while also paving the way for increased government control of online speech.

The CAC’s proposal does contain numerous provisions that reflect widely supported principles in the algorithmic accountability space, many of which my organization, the Open Technology Institute, has promoted. For example, the guidelines would require companies to provide users with more transparency around how their recommendation algorithms operate, including information on when a company’s recommender systems are being used and the core “principles, intentions, and operation mechanisms” of those systems. Companies would also need to regularly audit their algorithms, including the models, training data, and outputs. In terms of user rights, companies would have to allow users to determine whether and how their data is used to develop and operate recommender systems. Additionally, companies would have to give users the option to turn off algorithmic recommendations or opt out of receiving profile-based recommendations. Further, if a Chinese user believes that a platform’s recommender algorithm has had a profound impact on their rights, they can request that the platform explain its decision, and they can demand that the company make improvements to the algorithm. However, it is unclear how these provisions would be enforced in practice.
Although the CAC’s proposal contains some positive provisions, it also includes components that would expand the Chinese government’s control over how platforms design their algorithms, which is extremely problematic. The draft guidelines state that companies deploying recommender algorithms must comply with an ethical business code, which would require companies to uphold “mainstream values” and use their recommender systems to “cultivate positive energy.” Over the past several months, the Chinese government has waged a campaign against the country’s “chaotic” online fan club culture, asserting that the country needs to create a “healthy,” “masculine,” and “people-oriented” culture. The ethical business code could therefore be used to influence, and perhaps restrict, which values and metrics platform recommender systems can prioritize, helping the government reshape online culture through its lens of censorship.