Nov. 16, 2023
About a year after “Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People” was released by the White House Office of Science and Technology Policy, the Biden administration rolled out an executive order on the safe, secure, and trustworthy development and use of artificial intelligence. The initiative is a wide-ranging framework for shielding consumers, workers, communities, and national security from potential AI-induced hazards. The executive order shows promise and a willingness by the federal government to take a role in protecting human and civil rights in the age of AI. But there is still a lot of work ahead, and academia has a key role to play.
What Does the Executive Order Say?
At its core, the executive order seeks to establish foundational guidelines for AI’s future trajectory. For companies developing AI tools, transparency in safety testing before system deployment might soon become a standard requirement. This approach, which is gaining resonance globally, provides the foundation for structured regulatory systems as AI technologies become more commonplace in all aspects of life, from education to health care, to systems of justice and public safety.
The executive order has a profound and widespread impact across the federal government and sets in motion a series of targeted actions and responsibilities. At the forefront is the Justice Department’s Civil Rights Division, which is one of the federal agencies tackling the complex issue of algorithmic bias. This involves investigating instances where AI systems perpetuate or amplify bias based on race, color, gender, or other protected status in an effort to apply current anti-discrimination laws to new forms of civil rights harm. The Civil Rights Division’s Employment Litigation Section, for example, works closely with the Equal Employment Opportunity Commission in the enforcement of Title VII of the Civil Rights Act of 1964 that prohibits unlawful discrimination in employment practices, an ever more critical role as AI becomes increasingly integrated into recruitment and hiring processes.
The Civil Rights Division of the U.S. Department of Justice is also working closely with the Consumer Financial Protection Bureau (CFPB) and the Federal Trade Commission (FTC) to examine what federal enforcement tools are available to protect against AI-related risks and challenges, such as algorithmically derived discrimination and consumer harms. The CFPB takes on the significant role of safeguarding consumers in the financial marketplace, ensuring that AI and automated systems are not used in ways that could, for example, lead to discriminatory practices in lending, home appraisals, credit scoring, and other contexts. Similarly, the FTC is exploring how its "unfair or deceptive acts or practices" authority under Section 5 of the Federal Trade Commission Act can be applied to algorithmic harms.
The National Institute of Standards and Technology (NIST) within the Department of Commerce has provided guidance on how entities can conduct effective AI research and testing to assess AI risk. Through multiple publications, including the NIST AI Risk Management Framework, NIST has issued industry-wide standards for researching and deploying trustworthy and responsible AI. In today’s era, where disinformation campaigns are rampant, the use of emerging technologies to manipulate and undermine democratic institutions and the rule of law is a real threat. Against this backdrop, NIST has been proactive. NIST’s 2022 report, “Identifying and Managing Bias in AI,” complements the AI Risk Management Framework by urging us to assess AI bias comprehensively.
Also in the economic sector, the Council of Economic Advisers and the Department of Labor are charged with dissecting potential job market shifts due to AI and seeking solutions to the challenges posed, including labor displacement and discriminatory hiring practices. Agencies across the federal government are designating chief AI officers to oversee these transitions under a new framework set by the Office of Management and Budget. The executive order's pronounced focus on upholding civil rights and ensuring equity propels us into deeper contemplation of how to uphold and integrate fundamental American values within the burgeoning domain of AI.
A Pivotal Opportunity for Colleges and Universities
A broad spectrum of stakeholders is needed to address the complex challenges of regulating AI, and the academic community has a pivotal role to play. Researchers and scholars are primed to offer their research expertise, bridging the gap between the federal government and the diverse communities that universities partner with and serve. Additionally, universities are essential to cultivating a diverse generation of public interest technologists who can fill the many new jobs and leadership positions opening up in the federal government.
Academic institutions also have important new opportunities to engage with AI research through federally funded programs such as the National AI Research Resource Pilot that will be spearheaded by the director of the National Science Foundation. This initiative will offer AI researchers and students access to new AI resources and data, coupled with possible expanded grants for AI investigations in health care, environmental sciences, and other critical sectors through the NSF directorates.
The executive order on AI marks a significant milestone in the journey toward responsible and equitable AI development and use. As we embark on this path, the role of academia cannot be overstated. From guiding policy decisions with evidence-based research to training the next generation of AI professionals, educational institutions are crucial partners in this endeavor. With coordinated efforts across government, academia, civil society and think tanks, and industry, a key research inquiry will be how best to harness the transformative power of AI while safeguarding democratic values and ensuring a future where technology serves the common good.
About the authors:
Margaret Hu is Taylor Reveley Research Professor and Professor of Law and Director of the Digital Democracy Lab at William & Mary Law School. Her research focuses on the intersection of civil rights, national security, cybersurveillance, and AI. She is the author of several notable works, including Biometric Cyberintelligence and the Posse Comitatus Act, Algorithmic Jim Crow, and Biometrics and an AI Bill of Rights. She is editor of Pandemic Surveillance: Privacy, Security, and Ethics (Elgar Publishing 2022).
Alberto Rodriguez-Alvarez is Senior Program Manager for Public Interest Technology at New America, where he manages internal capacity-building and external partnership development. His public interest technology work focuses on technology and policy, public innovation, and digital government.