Prem M. Trivedi
Director, Open Technology Institute, New America
Last week was defined by big-ticket U.S. government activity on artificial intelligence (AI). On October 30, President Biden issued an executive order on the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” The sweeping, hundred-plus-page document, Executive Order 14110 (“the EO”), directs federal agencies to pursue policy objectives central to the responsible development and use of AI, but its effects will be felt throughout our broader governance ecosystem, including government, the private sector, academia, and civil society. Among other objectives, the order instructs agencies to ensure AI’s safety and security, promote responsible innovation and competition, support American workers, advance equity and civil rights, protect consumer interests, safeguard privacy and civil liberties, and promote global cooperation on AI governance.
Shortly after President Biden signed the order, the Office of Management and Budget issued a draft of its implementing guidance (“OMB guidance”) for public review. Administration officials have emphasized that these steps constitute the “most significant” action on AI that any government has undertaken. Whether or not one agrees with this assertion, the comprehensive and ambitious nature of the Biden administration’s effort to alter the national and global governance landscape is hardly up for debate.
What should we take away from last week’s developments? This analysis outlines key elements of the executive order without attempting to be exhaustive. In particular, we focus on requirements related to safety and security, protecting civil rights and civil liberties, and mitigating harms to people.
Section 4 establishes a range of safety and security requirements and is the part of the order that deals most comprehensively with managing product safety risks. It directs the National Institute of Standards and Technology (NIST) to develop guidelines and best practices “with the aim of promoting consensus industry standards” for trustworthy AI systems. This guidance will include a specific resource on generative AI that will accompany the AI Risk Management Framework. Importantly, the EO directs the Department of Commerce to create a reporting framework for companies developing dual-use foundation models that could pose security risks. The Department of Commerce is also required to assess the risks posed by synthetic content and to develop guidance for watermarking and authenticating U.S. government digital content. Additionally, Section 4 prioritizes managing AI-specific risks to critical infrastructure and cybersecurity, as well as the intersection of AI and chemical, biological, radiological, and nuclear threats.
Multiple sections of the EO take a people-centric approach to discussing potential harms from AI systems. Section 8 focuses on protecting “consumers, patients, passengers, and students” from a range of potential harms that arise from AI, including fraud, discrimination, and threats to privacy. It directs agencies to address these threats across various sectors of the economy, including healthcare, transportation, and communications networks. Section 6 reflects the Biden administration’s focus on supporting workers and ensuring employees’ wellbeing through the significant economic shifts that AI will engender.
The EO places considerable emphasis on the need to center civil rights and civil liberties in an AI governance regime. Section 7’s focus on advancing equity and civil rights builds on the White House’s Blueprint for an AI Bill of Rights, issued last October. It directs the Attorney General to address civil rights violations and discrimination related to AI, with a focus on the use of AI in the criminal justice system. Section 7 also directs agencies to prevent and remedy discrimination and other harms that could occur when AI is used in federal programs and to administer benefits. Several agencies are directed to take actions to strengthen civil rights enforcement in various sectors of the economy, including housing, financial services, and federal hiring practices.
Section 9 focuses specifically on mitigating privacy risks that arise from large-scale data collection and AI models’ inferences about people. In a welcome move, the EO directs federal agencies to invest in privacy-enhancing technologies and methods—such as differential privacy—to ensure that we can reap the benefits of advanced analytics while limiting the risks to people’s privacy. Additionally, the White House’s messaging—including its accompanying fact sheet for the EO—included a push from the President to Congress to “pass bipartisan federal privacy legislation to protect all Americans.”
The accompanying OMB guidance applies to all federal agencies’ use of AI except for national security systems. The guidance provides further detail on the EO’s requirements that each agency designate a Chief Artificial Intelligence Officer, remove barriers to responsibly using AI, and submit AI use case inventories to OMB, among other stipulations. Perhaps most significantly, the OMB guidance establishes two important categories: “safety-impacting AI” and “rights-impacting AI.” Each designation is accompanied by its own requirements and minimum practices, which include:
The guidance goes on to establish additional minimum practices for rights-impacting AI:
Stepping back, here are a few broader observations on the impact of the EO and the draft OMB guidance.