Public Interest Technologists React to Executive Order on AI

Blog Post
Nov. 16, 2023

During the past year’s rapid proliferation of publicly available artificial intelligence technologies, public interest technologists from academia, civil society, government and industry have played a vital role in centering civil rights, justice and communities in public discussions about how government should regulate AI.

On Oct. 30, 2023, the Biden-Harris Administration released an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, a sprawling 20,000-word set of guidelines and directives for federal agencies.

Here, four leading public interest technologists respond to some of the Order's core components, highlighting signs of progress and areas for improvement in ensuring that AI technologies serve the public interest.

Cybersecurity

Afua Bruce, AnB Advisors, former Executive Director of the National Science and Technology Council at the White House.

The Executive Order takes seriously AI’s many implications for cybersecurity, addressing the need for both offensive and defensive AI-related cybersecurity measures. The EO’s clear direction to “Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy” outlines actions agencies will take to evaluate and audit cybersecurity risks in AI systems. It pushes agencies to actively consider and adopt ways to use AI to protect critical federal systems. The Departments of Defense and Homeland Security, for example, are tasked with piloting projects that use AI to detect and fix cyber vulnerabilities in government systems.

Cybersecurity attacks against individuals and organizations continue, resulting in financial losses, business disruption, and threats to safety. Bad actors exploit vulnerabilities in systems to carry out these attacks. Given the rush to market for AI-enabled tools, and the additional complexity AI tools insert into systems, the EO’s emphasis on cybersecurity is necessary. As with technology in general, tools that can be used for harm can also be used in more positive ways. Some cyber professionals today combat cybersecurity threats by using AI to monitor technical systems for behavior patterns and to predict the likely impact of unusual activity. In addition to the EO’s calls for AI security standards and AI response plans, the Federal government should also increase incentives for research inside and outside of government on cyber-specific AI tools, including tools trained on events across multiple organizations and industries.
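To make the monitoring idea concrete, here is a minimal sketch, assuming Python with scikit-learn and invented per-session features, of how an anomaly detector trained on normal activity can flag sessions that deviate from learned behavior patterns. It is an illustration of the technique, not a production security tool.

```python
# Minimal sketch: anomaly detection over system-activity features.
# The feature set and numbers below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Hypothetical per-session features:
# [requests per minute, failed logins, megabytes uploaded]
normal_sessions = rng.normal(
    loc=[30, 0.5, 5], scale=[5, 0.7, 2], size=(500, 3)
).clip(min=0)

# A session with a request burst, repeated failed logins,
# and an exfiltration-sized upload
suspicious_session = np.array([[300.0, 12.0, 80.0]])

# Train only on normal traffic; flag deviations from it
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_sessions)

print(detector.predict(suspicious_session))        # -1 means "anomalous"
print(detector.score_samples(suspicious_session))  # lower = more unusual
```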

Also encouraging in the Executive Order are the references to cybersecurity embedded throughout the EO – not only in the Safety and Security section. To support the protection of consumers and patients, the EO encourages standards to address AI-enhanced cybersecurity threats for personally identifiable data, often held and shared by healthcare organizations and businesses. And to build AI workforce capacity within the government, the Order explicitly mentions the National Cyber Director as an important collaborator. By recognizing cybersecurity as an overarching concern of AI tools, the EO takes strides in protecting AI tools – and the communities, government agencies, and businesses that use them.

Advancing Equity & Civil Rights

Charlton McIlwain, Vice Provost and Professor of Media, Culture, and Communication, New York University.

The Executive Order’s directive to protect civil rights and advance equity is strikingly clear and straightforward. It communicates one message and repeats it in different ways: don’t use AI to discriminate; don’t enable discrimination through unchecked algorithms; teach people how not to discriminate when using algorithmic or machine learning systems and tools; and investigate and prosecute those who fail to comply.

The audience to whom the Order speaks is also clear: actors in housing, commerce, social services and criminal justice – sectors in which the federal government mediates or moderates outcomes to protect individual rights. The Executive Order’s clarity speaks to the egregious abuses (in both the private and public sectors), already too numerous to count, in which algorithmic systems have directly or indirectly caused significant harm by foreclosing citizens’ rights – primarily those of protected groups.

The EO’s specificity and concreteness make it clear that anyone whose automated or AI-powered systems directly or indirectly violate civil rights is officially on the federal government’s radar. Its investigative and prosecutorial mandates give it some teeth that will hopefully embolden law enforcement at every level to use its powers to protect civil rights, maximize safety and prevent harm.

While the EO is long on advancing civil rights, it is short on advancing equity. In fact, it uses the term “equity” without defining it, and we are far from having any common understanding of what tech equity really means. We should not conflate equity with harm reduction – as this section inadvertently does by not concretely defining the term. I understand that it is easier to focus on harm reduction; but equity has to mean more than this, even if it is difficult to codify in the language of executive orders, or, frankly, too difficult for policymakers, technologists and everyone in between to undertake.

Advancing equity requires that we simultaneously grapple with both our past and our future. Advancing equity requires undoing and dismantling the infrastructures we have built over the past 60-plus years that make algorithmic systems prone to discrimination. Advancing equity requires that we invest in imagining different futures through systems design, research and development, and rigorous testing. And advancing equity requires that we fundamentally restructure, reposition and invest in a different cast of characters to lead the work of imagining those futures. If we want an AI future that both protects civil rights and advances equity, then we have to shift the power to imagine, design and create different futures to those who have historically borne the disproportionate brunt of civil rights harms.

Ensuring Responsible and Effective Government Use of AI

Beth Simone Noveck, Professor and Director of Burnes Center for Social Change, Northeastern University.

While there’s a lot to like in the Executive Order, we have to ask: are we focusing so much on the risks that we are failing to invest in and maximize the potential for AI to do good?

The federal government’s AI pronouncement stands in stark contrast to the “responsible experimentation approach” adopted by the City of Boston – the first policy of its kind in the US – which encourages public servants to “try these tools for yourselves to understand their potential.”

AI can simplify complex governmental processes, making them more accessible to the average person in plain English or other languages, and in oral formats for those with low literacy. In communities plagued by transit issues, urban planners have traditionally relied on intermittent surveys. Now, AI can combine data from traffic cameras, ticketing systems, and GPS to detect disparities in transport resources between rich and poor neighborhoods and enhance urban planning.
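As a toy illustration of that kind of analysis, here is a sketch assuming pandas and fabricated sample data; the neighborhoods, figures, and column names are invented, and a real pipeline would ingest the camera, ticketing, and GPS feeds directly.

```python
# Illustrative sketch: join service levels (derived from transit feeds)
# to neighborhood income data to surface disparities. All data fabricated.
import pandas as pd

service = pd.DataFrame({
    "neighborhood": ["Eastside", "Riverton", "Hillcrest", "Midtown"],
    "buses_per_hour": [3, 4, 12, 10],  # e.g., derived from GPS traces
})

income = pd.DataFrame({
    "neighborhood": ["Eastside", "Riverton", "Hillcrest", "Midtown"],
    "median_income": [38_000, 41_000, 95_000, 88_000],
})

merged = service.merge(income, on="neighborhood")
merged["income_bracket"] = pd.cut(
    merged["median_income"],
    bins=[0, 50_000, float("inf")],
    labels=["lower-income", "higher-income"],
)

# Average service level per income bracket exposes the disparity
print(merged.groupby("income_bracket", observed=True)["buses_per_hour"].mean())
```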

The Executive Order is regrettably silent on any mandate to study and promote how AI can help advance – and mitigate the risks to – democracy. AI could make it easier for governments to listen to their citizens. Instead of voluminous comments that no one has time to read, generative AI could make it easier to categorize and summarize citizen input. At the Massachusetts Institute of Technology, Professor Deb Roy uses AI to create a “digital hearth” that analyzes and extracts learning from resident conversations. In 2022, the City of Cambridge used Roy’s Cortico technology to run a series of issue-based community conversations designed to gather resident feedback on the choice of the next City Manager.
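For a sense of what that could look like in practice, here is a minimal sketch, assuming the OpenAI Python client, an API key in the environment, and an illustrative model name. It is a generic example of using a generative model to categorize and summarize comments, not the Cortico system mentioned above.

```python
# Minimal sketch: asking a generative model to theme and summarize
# public comments. Model name and comments are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

comments = [
    "The new bus route skips our neighborhood entirely.",
    "Please add evening service; shift workers can't get home.",
    "Fares are fine, but the schedule app is unreliable.",
]

prompt = (
    "Group the following public comments into themes and summarize "
    "each theme in one sentence:\n\n"
    + "\n".join(f"- {c}" for c in comments)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat-capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```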

While it's crucial to be wary of AI's risks, it's equally important to embrace its positive capabilities. As the federal government moves forward to create policy on how federal public servants use AI, it would do well to learn from Boston, resisting fear-mongering in favor of approaches geared toward learning how to use these powerful new technologies for public good.

Promoting Innovation & Competition

Suresh Venkatasubramanian, Professor of Computer Science at Brown University, Co-Author of the Blueprint for an AI Bill of Rights.

The EO is surprisingly bold in the Promoting Innovation and Competition section. Broadly, the actions it describes can be broken down into three buckets:

  1. Making it easier to draw in AI and STEM talent from other countries, and making it easier for them to stay here.
  2. Expanding resources that support innovation, such as the National AI Research Resource (a kind of public AI cloud system); training programs that draw researchers into AI; clearer copyright and IP protections; and innovation initiatives in health care, energy infrastructure, and AI for science.
  3. Encouraging more competition by empowering the FTC to counter the concentration of power in the tech industry, and by providing more support for small businesses.

While I see these actions as sensible and concrete, I do think they are incomplete.

First, there’s a huge focus on drawing talent from outside the U.S. But there’s a broad base of talent right here, especially in communities historically underrepresented in tech; investing in those communities would spur innovation while serving the broader goal of advancing equity.

Secondly, the laser focus on STEM is understandable, but it misses the degree to which effective AI work is sociotechnical rather than purely technical. This requires more STEM education for students and researchers in the social sciences and humanities, and a greater diversity of perspectives across the field of AI.

Lastly, while the EO’s optimism about using foundation models like large language models for health care, energy infrastructure and science is an important reframing of AI’s potential, it must be balanced by an emphasis on responsible practices to make sure the insights gleaned from these models are validated scientifically.

This article is part of the November 2023 PIT UNiverse newsletter on Defending Democracy, from the Public Interest Technology University Network.