FAT Approaches Governments Can Implement
Governments are best suited to implement two approaches to promoting FAT around high-risk algorithmic systems: enforcement mechanisms and procurement guidelines. This section provides an overview of both mechanisms, discusses their strengths and limitations, and explains how each contributes to overall efforts to promote FAT around high-risk algorithmic systems. The enforcement mechanisms discussed include establishing a regulatory body or agency dedicated to algorithmic accountability issues, taking executive action, and passing legislation. The section also outlines how governments can use the public procurement process to incentivize the development of algorithmic systems that reflect adequate FAT.
Enforcement Mechanisms
Enforcement is a critical component of the FAT process, as it provides accountability and can promote transparency. Policymakers across the globe, including certain U.S. legislators and governments in the EU, have been exploring regulatory and legislative action around algorithmic systems. As governments begin exploring how to hold relevant actors accountable for harms caused by AI systems, it is important that they consider the geographic scope (and limitations) of any proposed efforts, recognizing that algorithmic systems are often deployed across national borders and may operate beyond the reach of any single country's enforcement authority. Accounting for this is critical to ensuring that any legislative methods for promoting FAT are effective.1
In addition, should governments choose to pursue a regulatory model for high-risk algorithms that are deployed by internet platforms and government agencies, they must ensure that any regulatory action considers the different components and actors of the AI-system life cycle, operates using a clear, consensus-based definition of high-risk algorithmic systems, and assigns responsibility and liability for damages to the actors that are best suited to address potential risks.2 Depending on the algorithmic system, regulatory action could target the developer, the entity or individual deploying the product, or other actors, such as the distributors or importers, service providers, or end users.3
Some methods of enforcement and regulation that governments, researchers, and civil society organizations have proposed include:
- Forming a regulatory body that imposes binding regulation on entities that develop or deploy high-risk algorithmic systems. Experts argue that because algorithmic systems are opaque and can generate small but severe long-term harms across industries, they should be subject to additional regulatory scrutiny.4 Such a regulator could require that private companies and government agencies obtain pre-market approval from the agency before deploying an algorithmic system and could conditionally approve the use of an algorithm in limited circumstances. If an entity deploys an algorithm outside of the agreed upon circumstances, the regulator could subject the entity to legal consequences.5 The regulator could also allow injured parties to obtain damages.6
While this approach could provide accountability in situations where high-risk algorithmic systems generate significant harm, it is limited by the fact that algorithmic systems are continuously evolving based on new data and inputs.7 As a result, it is difficult to predict what exact outputs an algorithm would produce in a given situation and how to cleanly draw lines between high-risk algorithmic systems and systems that pose less of a threat.8
Alternatively, the European Parliamentary Research Service has suggested that a central regulatory body for algorithms be empowered to focus on three critical characteristics of algorithmic systems: complexity, opacity, and dangerousness. Experts argue that a central regulatory agency (sometimes discussed as an “FDA for algorithms”)9 would be better equipped than individual agencies to tackle incidents of algorithmic harm, as it would be able to centralize talent and develop more comprehensive best practices and review procedures.10 Although establishing a regulatory body could help promote FAT around the development and deployment of algorithmic systems, the process of establishing such a body would be lengthy, require significant investment of resources and talent, and would not address concerns around FAT in the short term.
- Creating an agency responsible for certifying the safety of an algorithmic system. Under this proposal, the certifying agency would rely on a legal liability framework that subjects developers and vendors of these certified systems to tort liability.11 Algorithmic systems that are uncertified and commercially sold or used would be subject to stricter liability. Accordingly, courts would be responsible for deciding whether an algorithmic system is within the scope of the agency certification process and for assigning responsibility to relevant actors when a system produces tortious harm. As some experts have noted, this type of regime would encourage developers to think more critically about the costs associated with algorithmic system harms. It would also enable victims of any harm to seek compensation.12
- Relying on consumer protection authorities, such as the Federal Trade Commission (FTC), to apply consumer protection regulations to user agreements, thereby generating greater accountability for operators.13 In addition, the FTC could push developers and deploying entities to provide greater transparency around their algorithmic systems, which could include algorithmic audits, to facilitate the FTC’s own evaluations. The FTC could also help generate accountability by requiring regular reporting to the agency. Reporting that is not made public could help address company claims that providing too much transparency around their algorithmic systems would amount to giving away trade secrets, while still enabling oversight and mitigation of potential harms caused by these systems.14 This level of transparency, however, does not increase public insight into how these systems work and what impacts they may generate.
- Establishing a government-led incentive or penalty-based system in which entities receive subsidies or funding if they adopt certain FAT practices, or are taxed, fined, or made to pay fees if they do not. In other industries, governments have created tax incentives to encourage responsible corporate behavior,15 such as those used to promote the adoption of environmentally friendly technologies, including electric cars and solar energy.16 A similar structure could be used to encourage internet platforms to implement FAT measures, including voluntary labeling or certification schemes and algorithmic audits (a simple illustration of what such an audit might check appears after this list). This approach could be useful for addressing lower-risk algorithmic systems that do not necessarily require mandatory guidelines around how they are used and what FAT mechanisms need to be implemented in order to offset their risks.17 However, given the vast potential harms that could arise from internet platforms’ development and deployment of high-risk algorithmic systems, companies should pursue measures to promote FAT regardless of whether such incentives exist. Taxes, fines, and fees, by contrast, would make the most sense if applied to high-risk algorithmic systems. The policies that determine if and when an entity is fined must be detailed and clear, and must carefully account for the potential impact on smaller companies and, by extension, on market competition. Any efforts to impose penalties on companies operating algorithmic systems must also take care not to produce unintended harms to fundamental rights.18
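Several of the mechanisms above refer to algorithmic audits as a FAT measure that regulators could require or incentivize. The sketch below is a minimal, hypothetical illustration of one check such an audit might include: comparing a model's favorable-outcome rates across demographic groups (a disparate impact ratio). It assumes a simple tabular log of model decisions; actual audits would examine many more metrics, the training data, and the deployment context.

```python
# Minimal, hypothetical sketch of one check an algorithmic audit might run:
# comparing favorable-outcome rates across demographic groups.
from collections import defaultdict

def disparate_impact_ratio(decisions, protected_group, reference_group):
    """decisions: list of (group, outcome) pairs, where outcome is 1 (favorable) or 0."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += outcome
    protected_rate = favorable[protected_group] / totals[protected_group]
    reference_rate = favorable[reference_group] / totals[reference_group]
    return protected_rate / reference_rate

# Example threshold check: the 0.8 ("four-fifths") cutoff is borrowed from U.S.
# employment guidance and used here only for illustration.
log = [("group_a", 1), ("group_a", 0), ("group_a", 1), ("group_b", 1),
       ("group_b", 0), ("group_b", 0), ("group_b", 0), ("group_a", 1)]
ratio = disparate_impact_ratio(log, protected_group="group_b", reference_group="group_a")
print(f"Disparate impact ratio: {ratio:.2f} (flag for review if below 0.80)")
```

An actual audit regime would still need to specify which metrics apply to which systems and at what thresholds; those are precisely the decisions the enforcement mechanisms above would have to settle.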
In April 2021, the European Commission released a draft of its proposed AI regulation, which seeks to rein in and prohibit certain uses of high-risk algorithmic systems.19 While the proposal makes some notable strides, its provisions do not clearly and broadly implicate internet platform use of these systems. In comparison, the EU’s draft of the Digital Services Act (DSA),20 released in December 2020, better addresses internet platforms’ use of algorithmic systems that could generate harms, including targeted advertising and recommendation systems. The most recent draft of the DSA includes provisions that would require internet platforms to provide greater transparency and user control around such algorithmic systems.21
The European Commission’s draft AI regulation also includes provisions that impact government use of high-risk algorithmic systems. For example, it introduces prohibitions on certain uses of AI-based social scoring systems by public authorities and on the use of “real-time” remote biometric identification systems by law enforcement in public spaces.22
The U.S. government has also taken some high-level steps to promote FAT around algorithmic systems. In 2019, the government issued an Executive Order (EO) that aims to establish and maintain U.S. leadership in AI.23 While the EO does not offer granular guidance on mechanisms for promoting FAT, it does encourage the government to “train current and future generations of American workers with the skills to develop and apply AI technologies.”24 One of the barriers to establishing processes and implementing mechanisms for promoting FAT around algorithmic systems at the government level is that government agencies often lack the necessary technical talent. The EO provides valuable high-level guidance that the government should institute mechanisms and programs for recruiting and upskilling technical talent and staff in government agencies. In addition, in response to the EO, the National Institute of Standards and Technology (NIST) released a plan calling for the federal government to engage in long-term AI standards development activities. NIST has also stated that it will collaborate with the private sector and academia to establish AI standards that address broader societal, governance, and privacy issues.25 These standards are still in development, but they could offer valuable long-term guidance for internet platforms and government agencies as they seek to promote FAT around their development and use of algorithmic systems.
In 2020, the U.S. government issued another EO, which aims to promote the use of trustworthy AI in the federal government.26 This second EO establishes a set of high-level principles to which federal government use of AI must adhere, including reliability, safety, security, resiliency, transparency, and accountability.27
Going forward, the U.S. government should promote greater FAT by issuing an EO that requires both internet platforms and government agencies to evaluate any high-risk algorithmic system before it is deployed. The EO should require that these systems be subject to ongoing, periodic reviews to account for changes in the systems, how they operate, and what risks they produce. In addition, the U.S. government should supplement this EO with clear rules that require companies and government agencies to review their algorithmic systems—particularly their high-risk algorithmic systems—before they are deployed and to mitigate any identified harms. If these entities fail to do so, they could be held liable by a regulator.
In addition, the U.S. government should pass comprehensive federal privacy legislation that includes specific references to the development and use of algorithmic systems. Such privacy legislation should also require transparency, impact assessments, and regular audits from internet platforms to prevent algorithmic tools from being used in ways that disparately impact disadvantaged communities.28 These rules should also empower the FTC or a new Data Protection Authority to enforce requirements and develop regulations. As previously noted, federal privacy legislation may still need to be supplemented with additional policy measures, such as a standalone bill that requires FAT mechanisms like algorithmic audits and impact assessments to prevent abusive uses of algorithmic tools and mitigate discriminatory harms.29
Procurement
The public procurement process presents an opportunity for governments at all levels—local, state, and federal—to incentivize the development of AI-enabled technologies and services that are fair, accountable, and transparent.30 In fiscal year 2019 alone, the U.S. government spent over $20.7 billion on information technologies, computer software, and engineering-related services, including AI-powered technologies.31 Because it can be difficult for the government to develop algorithmic systems internally—due to high costs and a shortage of skilled in-house technologists—government agencies often acquire AI-enabled tools through public procurement processes. According to a recent report commissioned by the Administrative Conference of the United States (ACUS), 47 percent of the AI systems in use across the federal government were developed externally.
On a basic level, the public procurement process consists of a government agency identifying the need for a good or service, issuing a request for proposal (RFP), seeking responses from companies until a closing date, and entering into a contract with the lowest bidder.32 In the United States, companies that seek to win an agency’s contract must meet the basic quality standards required by law, in addition to the context-specific safety and performance requirements indicated in the RFP. Unfortunately, current procurement standards are outdated and insufficient for regulating emerging technologies, but policymakers could update these standards and outline clear requirements for promoting FAT around algorithmic systems within them.33 By requiring FAT-promoting mechanisms such as audits of algorithmic models, algorithmic impact assessments, and disclosure of training and testing data in an RFP, the public procurement process could effectively promote FAT around the government’s algorithmic systems.
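As a purely illustrative example, the sketch below shows how FAT requirements like those described above might be captured as a structured checklist that agency reviewers could apply uniformly to every vendor proposal received under an RFP. The specific field names and requirements are hypothetical and are not drawn from any existing procurement standard.

```python
# Hypothetical sketch: FAT requirements for an RFP expressed as a structured
# checklist that a reviewer could apply consistently to each vendor proposal.
from dataclasses import dataclass

@dataclass
class FATProposalReview:
    vendor: str
    independent_audit_provided: bool   # audit of the algorithmic model
    impact_assessment_provided: bool   # algorithmic impact assessment
    training_data_disclosed: bool      # documentation of training/testing data
    monitoring_plan_provided: bool     # plan for post-deployment review

    def missing_requirements(self):
        """Return the FAT requirements this proposal does not yet satisfy."""
        checks = {
            "independent audit": self.independent_audit_provided,
            "impact assessment": self.impact_assessment_provided,
            "training data disclosure": self.training_data_disclosed,
            "post-deployment monitoring plan": self.monitoring_plan_provided,
        }
        return [name for name, met in checks.items() if not met]

proposal = FATProposalReview(
    vendor="ExampleVendor",
    independent_audit_provided=True,
    impact_assessment_provided=False,
    training_data_disclosed=True,
    monitoring_plan_provided=False,
)
print("Missing FAT requirements:", proposal.missing_requirements())
```

Even a simple, standardized structure of this kind would let agencies compare proposals on FAT criteria rather than on price alone, though actual procurement rules would need far more detail and legal precision.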
The European Commission’s High Level Expert Group on Artificial Intelligence recommended the strategic use of public procurement to fund innovation and develop trustworthy AI by ensuring that governments identify, assess, and appropriately address potential risks, and add eligibility and selection criteria for algorithmic systems.34 In addition, the City of Amsterdam has drafted “standard clauses for municipalities for fair use of algorithmic systems” that seek to operationalize ethical AI principles intended to be included in the procurement contract of any government acquisition involving AI technologies.35
Although procurement processes present a powerful demand-side opportunity for the government to incentivize the private development of FAT around AI, this mechanism also has its limitations. First, public procurement only directly impacts the government’s use of algorithmic systems and does not guarantee that private sector development and deployment of algorithmic systems outside of a government context will also promote more FAT around AI. Second, for “soft/custom” goods and services that undergo the procurement process, like tailored AI tools (e.g., an algorithm that attempts to assign students in a city to schools in a way that makes each school’s population geographically and racially diverse), achieving a high-quality product requires specialized skills and adequate knowledge of the deployment context. Contracted engineers from the private sector may not possess a nuanced understanding of the problems an algorithmic system is intended to address, including the complex legal, regulatory, and organizational environment in which the tool will be deployed. Where agencies must contract out the development of an AI tool, in-house agency personnel who have expertise on the problem they seek to address should work closely with the contracted private sector experts to provide the necessary contextual understanding.
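To make the school-assignment example concrete, the sketch below shows a deliberately oversimplified greedy heuristic; it is entirely hypothetical and does not reflect any jurisdiction's actual method. It balances only a single attribute and ignores transportation, sibling placement, capacity rules, and legal constraints—exactly the kind of contextual detail that in-house agency experts would need to supply to a contractor.

```python
# Hypothetical, oversimplified sketch of a school-assignment heuristic that
# balances one attribute across schools with fixed capacities.
def assign_students(students, schools):
    """students: list of (student_id, group); schools: dict of school -> capacity.
    Greedily place each student at the school where their group is currently
    least represented and a seat remains."""
    rosters = {school: [] for school in schools}

    def group_share(school, group):
        roster = rosters[school]
        if not roster:
            return 0.0
        return sum(1 for _, g in roster if g == group) / len(roster)

    for student_id, group in students:
        open_schools = [s for s in schools if len(rosters[s]) < schools[s]]
        target = min(open_schools, key=lambda s: group_share(s, group))
        rosters[target].append((student_id, group))
    return rosters

students = [("s1", "east"), ("s2", "east"), ("s3", "west"),
            ("s4", "west"), ("s5", "east"), ("s6", "west")]
schools = {"Lincoln": 3, "Roosevelt": 3}
print(assign_students(students, schools))
```

Even this toy version embeds policy judgments—which attribute to balance, how ties are broken, what counts as a fair outcome—that a contractor could not resolve responsibly without agency input.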
Overall, public procurement is a critical tool for promoting FAT around AI use. However, public procurement only directly impacts the government’s own use of algorithmic systems, and the development of these systems should ideally happen in-house to allow for greater quality control and auditing. That said, as long as the government contracts with private vendors, procurement standards should be updated to regulate emerging technologies as much as possible. Further, as policymakers consider the role government agencies can play in overseeing the use of algorithmic systems, it is critical that these entities increase their in-house technical expertise so they can adequately carry out these functions.36
Internet platforms also procure AI tools and services, but they are largely free to formulate their own bidding processes, which typically results in much less transparency in private procurement practices.37 Because the private procurement process is much less regulated than public procurement, it presents a far less effective opportunity to promote FAT around algorithmic systems at scale.
Citations
- European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, February 19, 2020, source.
- European Commission, White Paper.
- European Commission, White Paper.
- European Parliamentary Research Service, A Governance.
- European Parliamentary Research Service, A Governance.
- Engstrom et al., Government by Algorithm.
- European Parliamentary Research Service, A Governance.
- Engstrom et al., Government by Algorithm.
- Andrew Tutt, "An FDA for Algorithms," Administrative Law Review 69 (2017): 83, source.
- European Parliamentary Research Service, A Governance.
- Matthew U. Scherer, "Regulating Artificial Intelligence Systems: Risks, Challenges, Competencies, and Strategies," Harvard Journal of Law & Technology 29, no. 2 (Spring 2016): source.
- European Commission, White Paper.
- In the EU, this approach would also require coordination with relevant data protection authorities.
- European Parliamentary Research Service, A Governance.
- European Parliamentary Research Service, A Governance.
- "Solar Investment Tax Credit (ITC)," Solar Energy Industries Association, source. Congressional Research Service, The Renewable Electricity Production Tax Credit: In Brief, April 29, 2020, source.
- European Parliamentary Research Service, A Governance.
- “Germany: Flawed Social Media Law,” Human Rights Watch, February 14, 2018, source.
- European Commission, Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts, April 21, 2021, source.
- European Commission, Proposal for a Regulation.
- Spandana Singh, "The EU's Digital Services Act Makes a Positive Step Towards Transparency and Accountability, But Also Raises Some Serious Questions," New America's Open Technology Institute, last modified January 21, 2021, source.
- Spandana Singh, "Breaking Down the World's First Proposal for Regulating Artificial Intelligence," New America's Open Technology Institute, last modified June 10, 2021, source.
- Exec. Order No. 13859 Fed. Reg. (Feb. 14, 2019). source.
- Exec. Order No. 13859.
- "AI Standards: Federal Engagement," National Institute of Standards and Technology, March 14, 2019, source.
- Exec. Order No. 13960 Fed. Reg. (Dec. 8, 2020). source.
- Exec. Order No. 13960.
- Bannan and Blase, Automated Intrusion.
- Bannan and Blase, Automated Intrusion.
- Leila Doty and Lauren Sarkesian, "To Ensure More Trustworthy AI, Use an Old Government Tool: Public Procurement," Issues in Science and Technology, February 9, 2021. source.
- “A Snapshot of Government-wide Contracting for FY 2019 (infographic),” U.S. Government Accountability Office WatchBlog, May 26, 2020, source.
- “Government Procurement: What is Government Procurement?,” FindRFP, source. Congressional Research Service, Defense Primer: Lowest Price Technically Acceptable Contracts, January 22, 2021, source.
- Doty and Sarkesian, "To Ensure".
- “Policy and Investment Recommendations for Trustworthy Artificial Intelligence,” European Commission, June 26, 2019, source.
- “Grip on Algorithms,” City of Amsterdam, source.
- Engstrom et al., Government by Algorithm.
- For instance, private companies may withhold information that is not necessary for bidding suppliers, they may choose which vendors they request proposals from, and they are not required to publish their contract awards in the same way that public entities are. "Private Vs. Public Sector Procurement Practices," Concord, last modified April 3, 2019, source. "Private vs. Public Sector Bidding Process," Handex, last modified January 8, 2019, source.