Translation: Chinese AI Alliance Drafts Self-Discipline 'Joint Pledge'

Emphasizing AI ethics, safety, standardization, and engagement, including internationally
Blog Post
June 17, 2019

China's Artificial Intelligence Industry Alliance (AIIA), launched in 2017 and boasting a who's who of top Chinese tech firms and universities as members, released a draft "joint pledge" (公约) on self-discipline in the artificial intelligence (AI) industry on May 31.

The draft, which is open for comment from alliance members and the general public until June 30, could be an important common starting point for Chinese efforts across the broad range of activities understood as AI development. Although its contents are relatively general, the inclusion of phrases such as "secure/safe and controllable" (安全可控) and "self-discipline" positions the document to mesh with broader trends in Chinese digital governance.

That alignment is strengthened by the fact that the AIIA appears to be a pseudo-official organization. According to its website, AIIA was launched in October 2017 by a group of institutions led by the Ministry of Industry and Information Technology (MIIT)'s China Academy of Information and Communications Technology (CAICT, see DigiChina profile). The new alliance plays host to working groups on a variety of issues, drawing membership from research institutions and the private sector.

AIIA's membership is divided into regular members (more than 250), council members (more than 125), and council vice-chairs (29). The final category includes leading tech companies such as Baidu, Alibaba, Tencent, Huawei, ZTE, and 360, as well as Tsinghua University, Zhejiang University, and the Harbin Institute of Technology.

It is not clear from this initial release whether all AIIA members would sign on to a final version of this document, but it is certainly one of the most prominent statements on AI ethics that has emerged from China since the 2017 release of the New Generation Artificial Intelligence Development Plan (AIDP, see DigiChina translation), which called for significant attention to policy and ethical aspects of AI development.

[Chinese-language original]

TRANSLATION

Joint Pledge on Artificial Intelligence Industry Self-Discipline (Draft for Comment)

Artificial intelligence is an important driving force for a new round of scientific and technological revolution and industrial transformation, which will bring revolutionary changes to people's production methods and lifestyles. It is necessary to establish a correct view of artificial intelligence development; clarify the basic principles and operational guidelines for the development and use of artificial intelligence; help build an inclusive and shared, fair and orderly development environment; and form a sustainable development model that is safe/secure, trustworthy, rational, and responsible.

As enterprises, universities, research institutes, and industry organizations that research, design, manufacture, operate, and service artificial intelligence, and in order to promote ethics and self-discipline in China's artificial intelligence industry and to guide and standardize the behavior of practitioners, we make the following commitments:

Chapter I: General Provisions

Article 1: Human-oriented. The development of artificial intelligence should uphold basic rights such as human freedom and dignity, follow the principle of human-centeredness, and prevent artificial intelligence from weakening and replacing humanity's position.

Article 2: Enhance well-being. The development of artificial intelligence should advance the progress of society and human civilization, create more intelligent modes of working and lifestyles, and enhance people's livelihood and well-being.

Article 3: Fair and just. The development of artificial intelligence should ensure fairness and justice, avoid bias or discrimination against specific groups or individuals, and avoid placing disadvantaged people in an even more unfavorable position.

Article 4: Avoid harm. The development of artificial intelligence should avoid harming the interests of society and the public; existing dangers should not be aggravated, nor new dangers caused, through the abuse of artificial intelligence.

Chapter II: Principles

Article 5: Secure/safe and controllable. Ensure that AI systems operate securely/safely, reliably, and controllably throughout their lifecycle. Evaluate system security/safety and potential risks, and continuously improve system maturity, robustness, and anti-tampering capabilities. Ensure that the system can be supervised and promptly taken over by humans to avoid the negative effects of loss of system control.

Article 6: Transparent and explainable. Continuously improve the transparency of artificial intelligence systems. Be capable of accurately describing, monitoring, and reproducing system decision-making processes, data structures, and the intent of system developers and technology implementers; and realize explainability, predictability, traceability, and verifiability (可解释、可预测、可追溯和可验证) for algorithmic logic, system decisions, and action outcomes.

Article 7: Protect privacy. Adhere to the principles of legality, legitimacy, and necessity when collecting and using personal information. Strengthen privacy protection for special data subjects such as minors. Strengthen technical methods, ensure data security, and be on guard against risks such as data leaks.

Article 8: Clarify responsibilities. Make clear the rights and obligations at each stage in artificial intelligence research and development (R&D), design, manufacturing, operation, and services, etc., to be able to determine the responsible party promptly when harm occurs. Advocate for relevant enterprises and organizations to innovate in insurance mechanisms under the existing legal framework, to distribute the social risks brought about by development of the artificial intelligence industry.

Article 9: Diversity and inclusivity. Promote the inclusiveness, diversity, and universality (普惠性) of artificial intelligence systems. Strengthen cross-domain, interdisciplinary, and cross-border cooperation and exchange, and solidify an artificial intelligence governance consensus. Strive to achieve diversification of R&D personnel and comprehensive training data for artificial intelligence systems. Continually test and validate algorithms, so that they do not discriminate against users based on race, gender, nationality, age, religious beliefs, etc.

Chapter III: Activities

Article 10: Self-discipline and self-governance. Strengthen awareness of corporate social responsibility, integrate ethical principles into all aspects of artificial intelligence–related activities, and implement ethical reviews. Promote industry self-governance, formulate norms of behavior for practitioners, and progressively build and strengthen industry supervision mechanisms.

Article 11: Formulate standards. Actively participate in the formulation of international, national, industry, and organizational standards related to artificial intelligence. Enhance the measurability of ethical principles such as security and controllability, transparency and explainability, privacy protection, and diversity and inclusiveness; and simultaneously build corresponding assessment capabilities.

Article 12: Promote sharing. Encourage open source and open resources such as platforms, tools, data, and science and education; share artificial intelligence development dividends and governance experience; strive to break down data silos and platform monopolies; continuously narrow the intelligence gap; and advance the deep integration of artificial intelligence and the real economy.

Article 13: Universal education. Actively participate in universal education on artificial intelligence for the public, morals and ethics education for relevant practitioners, and digital labor skills retraining for personnel whose jobs have been replaced; alleviate public concerns about artificial intelligence technology; raise public awareness about safety and prevention; and actively respond to questions about current and future workforce challenges.

Article 14: Continually push forward. In the process of implementing this Joint Pledge, continually strengthen research on the potential risks of artificial intelligence development, adapt to industry development requirements, and continue to improve.

Chapter IV: Supplementary Provisions

Article 15: The signatory units of this Joint Pledge undertake to abide by laws and regulations and relevant state provisions in carrying out activities related to artificial intelligence development.

Article 16: Encourage enterprises, universities, research institutes, industry organizations, individuals, etc., to jointly practice the contents of the Joint Pledge and accept society's supervision.

Article 17: Support relevant units or organizations to formulate self-discipline norms based on this Joint Pledge for various industries and fields.

Article 18: The China Artificial Intelligence Industry Alliance is responsible for formulating, revising, and interpreting this Joint Pledge.