Translation: Key Chinese Think Tank's "AI Security White Paper" (Excerpts)

Chinese policy thinking on AI's challenges ranges from cybersecurity to social stability
Blog Post
CAICT / Translated by DigiChina
Feb. 21, 2019

DigiChina has translated excerpts from this "Artificial Intelligence Security White Paper," published by the China Academy for Information and Communications Technology (CAICT), an influential research group under the Ministry of Industry and Information Technology (MIIT) that DigiChina has previously profiled.

Publication of this white paper is accompanied by DigiChina's first online symposium, in which we asked colleagues in the DigiChina network, across the New America Cybersecurity Initiative, and in outside institutions to assess this Chinese vision for AI and security in the context of global discussions on these important issues.

An important note on translation: For the term we translate as "AI security," the paper uses the term 人工智能安全 (réngōng zhìnéng ānquán) throughout. This term could also be translated as "AI safety," because ānquán carries both "security" and "safety" meanings. We chose "security" to reflect the broad scope of the subject matter discussed, ranging from national security to social stability, but the paper should be understood as also addressing those challenges often discussed under the banner of "AI safety" in the United States.

Translation: Artificial Intelligence Security White Paper [excerpts]

[Complete Chinese-language original]

I. The Security Implications and Systems Architecture of Artificial Intelligence

(1) Basic Theory and Development Process of AI


(2) Security Implications of AI

Because AI can simulate human intelligence and substitute for the human brain, in each wave of AI development, and especially when a technology is emergent, people have been deeply concerned about the security issues and ethical implications of AI. From 1942, when Asimov proposed the three laws of robotics, to 2017, when Stephen Hawking and Elon Musk participated in the release of the 23 Asilomar Principles, how to make AI more secure and ethical has been a long-standing and constantly deepening question. Currently, with the rapid development of AI technology and explosive industry growth, AI security is receiving more and more attention. On the one hand, the immaturity of AI technology at this stage leads to security risks, including technical limitations such as algorithmic inexplicability and strong data dependencies, as well as malicious applications by humans, which may pose security risks to cyberspace and to the nation and society. On the other hand, AI technologies can be applied in the fields of cybersecurity and public security to provide perception, prediction, and early warning for information infrastructure and important social and economic operations, as well as active decision-making and response, improving cyber/network protection capabilities and social governance capabilities.

Based on the above analysis, the project team believes that the content of AI security includes: (1) reducing AI immaturity and the security risks malicious applications pose to cyberspace and national society; (2) promoting the deep application of AI in the fields of cybersecurity and public safety; and (3) establishing an AI security management system to ensure the safe and steady development of AI.

(3) An Architecture for AI Security

Based on the understanding of the security implications of AI, the project team proposed an AI security architecture covering three dimensions of security: risks, applications, and management. The three dimensions in the architecture are independent and interdependent. Among them, security risks are the negative impact of AI technology and industry on cyberspace security and national societal security; security applications explore the specific application directions of AI technology in the field of cyber and network information security and social and public security; and security management builds an AI security management system to effectively control AI security risks and actively promote the application of AI technology in the security field.

[Figure: AI Security System Framework from the CAICT White Paper]

1. Security Risks of AI

As a strategic and transformative information technology, AI has introduced new uncertainties into cyberspace security. AI cyberspace security risks include: cybersecurity risks, data security risks, algorithmic security risks, and information security risks.

Cybersecurity risks involve vulnerabilities in network infrastructure and learning frameworks, backdoor security issues, and systemic cybersecurity risks caused by malicious applications of AI technologies.

Data security risks include training data bias in AI systems, unauthorized tampering, and security risks such as the disclosure of private data caused by AI.

Algorithmic security risks correspond to algorithm design and decision-related security issues in the technical layer, as well as security risks such as black-box algorithms and algorithmic model defects.

Information security risks mainly include security issues arising from AI technology applied to information dissemination, as well as information content security issues for smart products and applications.

Considering the deep integration of AI and the real economy, its security risks in cyberspace will be transmitted more directly to society, the economy, and national politics. Therefore, considered holistically, AI security risks also involve societal security risks and national security risks.

Societal security risks refer to structural unemployment brought about by the application and industrialization of AI, serious impacts on ethics and morality, and even possible damage to personal safety.

National security risks refer to the risks to national military security and political system security brought about by risks and hidden dangers from the application of AI in military operations, public opinion, and other fields.

2. Security Applications of AI

Because of its outstanding data analysis, knowledge extraction, autonomous learning, intelligent decision-making, automatic control, and other capabilities, AI can have many innovative applications in network information security and societal public security fields including network protection, data management, information censorship, intelligent security, financial risk control, and public opinion monitoring.

Network protection (网络防护) applications refer to the research and development of technologies and products to use AI algorithms for intrusion detection, malware detection, security situational awareness, and threat early warning, etc.

Data management applications refer to the use of AI technologies to achieve data protection objectives such as hierarchical classification, leak prevention, and leak traceability.

Information censorship applications refer to the use of AI technology to assist humans in undertaking rapid review of various forms of expression and a large volume of harmful network content.

Smart security applications refer to the use of AI technology to upgrade the security field from passive defense toward intelligent operations, developing active judgment and timely early warning.

Financial risk control applications refer to the use of AI technology to improve the efficiency and accuracy of credit evaluation, risk control, etc., and assisting government departments in the regulation of financial transactions.

Public opinion monitoring applications refer to the use of AI technology to strengthen national online public opinion monitoring capabilities, improve social governance capabilities, and ensure national security.

3. AI Security Management

Combining AI’s security risks and its applications in the field of cyberspace security, the project team proposed AI security management ideas covering six aspects: laws, regulations, and policies; standards and specifications; technological methods; security assessments; the talent corps; and a controllable ecology. The overall objectives are to achieve effective control over AI security risks and to actively promote the application of AI technology in the security domain.

With regard to regulations and policies, establish and strengthen corresponding safety management laws and regulations and management policies for key application domains of AI and prominent security risks.

With regard to standards and specifications, complete the formulation of international, domestic, and industry standards for AI security requirements and security assessments and evaluations.

With regard to technological methods, build technological support capabilities for security management, such as AI security risk monitoring and early warning, situational awareness, and emergency response.

With regard to security assessment, accelerate the research and development of indicators, methods, tools, and platforms for the evaluation of AI security assessments, and build third-party security assessment and evaluation capabilities.

With regard to the talent corps, increase the education and training of AI talent, form a stable talent supply and a sufficient talent pool, and promote the secure and sustainable development of AI.

With regard to controllable ecology, strengthen research and inputs at bottlenecks in the AI industrial ecology, enhance the self-guiding capability of the industrial ecology, and guarantee the secure and controllable development of AI.


V. Suggestions for AI Security Development

At present, China's national security and international competitive situation is increasingly complicated, and China must take a global orientation: address AI security at the national strategic level; engage in systematic layout and active planning; and insist on accelerating technology and application innovation as the main line, and on improving legal and ethical standards as a safeguard. With regulatory norms acting as traction, vigorously promote standards construction, industry coordination, personnel training, international exchanges, publicity and education, etc. Comprehensively enhance China's AI security capabilities; firmly grasp the strategic initiative in international competition in the new stage of AI development; foster new competitive advantages in the development of this new space; and effectively guarantee China's cyberspace security and stable economic and social development.

(1) Strengthen Indigenous [or “Independent”] Innovation and Achieve Breakthroughs on General Key Technologies

First, with a base of indigenous innovation, increase the introduction and absorption of technology, and achieve breakthroughs in the key basic technologies of AI. China's AI industry currently has an inverted triangle structure—heavy on applications, light on foundations—bringing many uncertainties for AI development. Therefore, it is necessary to start from research in key general technologies such as cloud computing, big data, and machine learning to resolve basic security risks. Stand independently and implement major technical research projects with the goal of secure and controllable development of key technologies such as sensors, smart chips, and basic algorithms. At the same time, increase technology introduction and conduct external technical cooperation with an open and pragmatic attitude, to achieve technology digestion, assimilation, and re-innovation. Relying on a development model combining indigenous innovation and technology introduction, formulate a secure and controllable development roadmap for key AI technologies, and solve the “stranglehold” problem in the foundational links of AI.

Second, increase research on AI security technology, and improve AI security protection capabilities. In view of the current situation, in which AI security research lags behind application research: aim at AI security issues and risk pain points; guide many parties to increase investment; vigorously support research institutes, AI enterprises, and cybersecurity enterprises to deepen AI security attack and defense technology research and build AI security attack and defense drill platforms; and design an all-round, integrated security protection technology architecture with independent intellectual property rights across the AI base layer, technology layer, and application layer. Accelerate the development of security protection products, and explore and promote security best practices in key applications. Ensure the simultaneous advancement of AI security technology research and the application industrialization process.

(2) Improve Laws and Regulations; Formulate Ethics and Norms

First, establish and improve existing laws and regulations to deal with the issues of privacy security risks and subject liability brought about by AI. First, promote the construction of laws and regulations for the protection of personal information. At present, some of the existing laws and regulations in China already involve personal privacy protection, but the provisions are dispersed and do not form a complete system. It is necessary to speed up unified legislation and draw on the relevant provisions and practical experiences of the European Union’s General Data Protection Regulation to: promote formulation of the Personal Information Protection Law of the People’s Republic of China; clarify the scope of personal information; protect a user’s right to know; strengthen the responsibilities of data processors; handle the relationship between open data use and personal privacy protection in accordance with the law; ensure reasonable requirements for AI data resources are met; and prevent excessive use of personal information. Second, improve the current laws and regulations to clarify the issue of subject liability. Current laws and regulations lack constraints on the application of AI products and systems, and the liabilities and obligations in the design, production, sale, and use of artificial intelligence products and systems are not clearly defined. With regard to fairness and justice problems and accidents, potential violations of laws and regulations, and the resulting property damage, personal injury, and social harm that may be brought about by AI, there is a need for strengthened research, forward-looking legislation, and further clarification of subject constraints and the division of responsibilities at the legal level.

Second, study and formulate ethical and moral norms to adapt to the social behavior model of human-computer symbiosis in the intelligent age. Guided by the government, relevant universities, research institutions, enterprises, etc.: establish AI ethics research institutions; strengthen the overall planning of AI ethics research; track the impact of AI on ethics, morals, and security risks; and build systematic ethical and moral norms to constrain AI research goals and directions, as well as the behavior of system designers and developers. Advocate and strengthen algorithmic ethics, improve the consistency of AI with human values, avoid an AI arms race between countries, ensure that the entire society can share in the economic prosperity created by AI, and advance the healthy development of human society.

(3) Improve the Supervision System and Guide Industry Toward Healthy Development

First, improve the government's supervision system, and optimize the administrative framework. The increasing integration of AI with traditional industries will result in a large number of new forms and new models, and the security risks of related industries have become intertwined and complicated. Supervision work involves multiple government departments, and the government supervision system should be optimized and improved according to actual developments. Enhance the support capabilities of regulatory technology. Conduct safety supervision pilots for pioneering areas of AI application such as intelligent recommendation, autonomous driving, intelligent service robots, and smart homes. At the same time, promote the timely adjustment of the governance structure of administrative bodies, ensuring that the impact of new technology development on industry and society remains within a controllable range, while giving intelligent industrial technology and industrial innovation space to mature.

Second, constrain the market behavior of enterprises, and strengthen corporate self-disciplinary responsibilities. In the era of big data, AI enterprises (especially large Internet platforms) can access massive amounts of data. Learning from and using data involves multiple levels including personal privacy, public safety, and social governance. Therefore, enterprise platforms are important carriers of data, and the regulatory compliance and legality of their behavior is more important. The security of training data and algorithmic decision-making will directly affect user rights and interests and societal security. It is necessary to: increase the supervision of enterprises; define the boundaries of government and enterprise responsibility; guide enterprises to emphasize their own economic benefit while at the same time strengthening their sense of social responsibility; strengthen self-discipline and self-government; ensure the legality and security of data collection, storage, and circulation; and properly apply AI technology.

(4) Strengthen Standards as Guidance; Build a Security Assessment System.

First, formulate standards related to AI security to make up for existing gaps. Research institutes, technology companies, and third-party assessment agencies should jointly promote the development of national standards, industry standards, and alliance standards related to AI security, focusing on research into technical security requirements for AI training algorithms, decision models, and other related technologies. Form a series of security standards for the cybersecurity, data security, algorithm security, and application security of intelligent products and systems, as a unified reference for the security design and test verification of AI products, enhancing the security and reliability of AI products.

Second, guided by security standards, carry out security assessments and evaluation capacity-building. Guided by AI security standards, research institutions and technology companies should jointly tackle security assessment and evaluation techniques for artificial intelligence products, applications, and services, and gradually accumulate knowledge resources such as security test sample libraries and knowledge libraries to form shared data sets. Develop a set of R&D test tools, build a public service platform for AI security testing and certification, establish an evaluation expert database and evaluation mechanisms, and realize AI security assessment and evaluation capabilities. With technical means as a support, pragmatically avoid the defects and security risks of AI products and applications.

(5) Promote Industry Collaboration; Promote Technology Security Applications.

First, promote collaboration between AI enterprises and cybersecurity enterprises to improve the depth of technology application and product maturity. AI enterprises have accumulated technology related to machine learning algorithms, and cybersecurity enterprises have data resources and security protection application scenarios, such as vulnerability databases and incident databases. Promote deeper cooperation between AI enterprises and cybersecurity enterprises: leverage existing cybersecurity knowledge base resources for data analysis and feature learning to improve the self-defense capabilities of cybersecurity protection products, such as vulnerability discovery, threat warning, and attack detection; iteratively upgrade and optimize product maturity in application scenarios; and jointly promote the deep application of AI technology in the field of cyber and information security.

Second, promote cooperation between AI enterprises and public security enterprises, expand the application of technology, and enhance social governance capabilities. AI is gradually developing into a new universal technology, which promotes the transformation of traditional industries through automation and intelligentization. In the field of public security, mature general technologies in AI, such as computer vision, speech recognition and synthesis, and natural language processing, have started to be applied to fields such as security monitoring, data investigation, and public opinion management. Vigorously promote coordinated development between AI enterprises and traditional public security enterprises; jointly explore the needs of integration; aim at the profession's development pain points; form integrated solutions; accelerate on-the-ground application; and promote the widespread application of AI in public security fields such as smart public security, intelligent transportation, and smart finance, to improve the level of intelligentization of national social governance.

(6) Increase Personnel Training; Improve the Job Skills of Personnel

First, strengthen the construction of a talent corps in AI technology and industry, and reduce the risks of talent shortage for the development of the industry. First, based on school education, vigorously implement the relevant documents of the Ministry of Education such as the "Artificial Intelligence Innovation Action Plan for Higher Educational Institutions." Add AI-related majors in qualified universities, enlarge recruitment quotas, strengthen professional education and vocational education, and provide personnel with AI thinking, skills, and human-machine collaborative operation capabilities. Fund key university development labs and innovation centers to increase research talent training. Second, increase enterprise training, and aim at the current shortage of AI talent, encouraging AI technology enterprises to establish training institutions or jointly build laboratories with schools, to conduct technical and applied research, and to cultivate available talent in practice. Third, strengthen the introduction of foreign talent, formulate talent policies to introduce special talent, support universities or enterprises to introduce world-class leading talent, directly set up R&D centers abroad, and absorb local talent there for our own use. Encourage industry acquisitions and enterprise use of capital to retain or acquire teams from foreign companies with core technologies.

Second, optimize the personnel training system, improve the job skills of personnel, and reduce the unemployment risks caused by AI. For social changes in employment caused by the development of the AI industry: first, the specializations of universities and vocational schools, etc., should be dynamically adjusted, and the recruitment quotas for majors whose associated occupations can be replaced should be gradually reduced or even eliminated, to ensure that students can apply what they have learned and to prevent “graduation into unemployment.” Second, encourage currently working people to establish a lifelong learning outlook; improve on-the-job training and re-employment training systems; update the employment skills of currently working people through multifarious training; promote higher-quality employment for currently working people; and reduce the social impact of the unemployment risk caused by AI.

(7) Strengthen International Exchanges; Address Common Security Risks

First, strengthen technical research cooperation, resolve the current stage’s bottlenecks in AI technology, and promote the mature development of AI. For current deep learning technology bottlenecks, such as poor robustness against adversarial samples, lack of explainability, incomplete information handling, and weak adaptability to uncertain environments, international cooperation in technical research can be carried out by setting up research centers abroad and organizing international technical exchanges. Track the latest technological achievements, jointly strengthen research on new technologies such as transfer learning and brain-like learning, solve security hazards and regulatory problems such as algorithmic black boxes and algorithmic discrimination, enhance the robustness and security of AI decision-making, and promote a move from specialized intelligence toward general intelligence in AI.

Second, actively participate in the formulation of standards to jointly address the security issues and ethical impacts of AI. Actively participate in the ISO/IEC JTC1 SC27 and SC42 standards development work on data security, privacy protection, problem responsibility determination, and trustworthy AI. Closely track the IEEE P7000 series of AI security- and ethics-related standards; strengthen exchange with the major world standardization organizations ISO, IEC, ITU, ETSI, NIST, etc.; establish exchange mechanisms with advanced countries and leading enterprises; share governance experience; and promote the continued and secure development of AI in China. At the same time, the governments of the world's major countries should establish an AI development exchange and dialogue mechanism, seek cooperation and win-win outcomes amidst competition, formulate AI ethics and moral standards that are generally observed by the international community, avoid the malicious application of AI technology, and effectively guarantee that AI truly benefits humanity.

(8) Increase Social Propaganda and Scientifically Handle Security Issues

First, carry out propaganda and education to strengthen the awareness of security protection. AI technology is inherently neutral, but people could potentially apply it maliciously. For example, machine learning can be used for personal information mining to quickly obtain private information, and information content synthesis technology can make online scams more varied and more deceptive. Therefore, for new security incidents exploiting AI technology, it is necessary to strengthen publicity, publicize the causes, carry out education on security measures for all people, cultivate citizen awareness of privacy protection and fraud risks, and reduce the personal property losses and adverse social effects caused by malicious AI applications.

Second, strengthen the guidance of public opinion and establish a proper development concept. Although the development of AI technology has achieved remarkable results at present, there are still some common problems. Many industrial applications, such as autonomous driving and intelligent robots, are in the stage of exploration and experimentation; they are immature and may lead to security incidents. In view of the security incidents exposed in the industrial application of current AI technology, we should strengthen proper public opinion guidance, reduce social anxiety, guide people to view security issues in the development of new technologies properly, and create a relaxed and open social environment for the development of AI technology and industrial advancement.