In the Absence of Federal Regulation, State and Local Movements are Pushing for Algorithmic Accountability

Blog Post
New America
June 8, 2022

Earlier this month, the Equal Employment Opportunity Commission (EEOC) and the U.S. Justice Department’s Civil Rights Division (DOJ) warned employers that artificial intelligence and other technologies used during the hiring process can generate discriminatory outcomes for individuals with disabilities. Over the past few years, policymakers around the globe, like the EEOC and DOJ, have recognized the potentially discriminatory and harmful outcomes that can result from the use of certain algorithmic systems, including those used to target online ads, surveil individuals, and approve mortgages. In response, some federal lawmakers in the United States have introduced a slate of bills seeking to rein in algorithmic systems deployed by corporations and the government by promoting fairness, accountability, and transparency around their development and use. Some of the most promising of these proposals address critical issues including privacy, transparency, bias and discrimination, and the need for independent evaluations of algorithms. However, none of these bills has advanced far in the legislative process.

More recently, lawmakers at the local and state level have attempted to address the need for algorithmic accountability and fill the gap left by federal inaction. Additionally, numerous local task forces have emerged, seeking to study and provide recommendations for companies and governments using AI at the local level. While federal regulation related to algorithmic systems should be a priority for lawmakers, these local bills and efforts are valuable mechanisms for safeguarding consumers and citizens in the short term. Further, while compliance with a patchwork of laws and recommendations will likely be challenging for companies and government entities, these localized efforts can also provide federal lawmakers with valuable lessons on what regulatory approaches work best.

Washington, D.C.: The Stop Discrimination by Algorithms Act of 2021

In December 2021, D.C. Attorney General Karl A. Racine introduced the Stop Discrimination by Algorithms Act of 2021 (SDAA) in the Council of the District of Columbia. The SDAA is the first comprehensive bill of its kind in the country, and OTI has endorsed the legislation. The bill would strengthen civil rights protections for residents of the District of Columbia by prohibiting companies and institutions from using algorithms that generate biased or discriminatory results and that block access to critical opportunities such as employment, insurance, and credit. The bill would also enhance transparency around how consumer data is collected and used, and around why an algorithm made a specific decision.

Under the bill, it would be illegal for businesses and organizations to use harmful algorithms in four areas of life: education, employment, housing, and public accommodations and services (including credit, health care, insurance, and more). The bill would also require companies to audit their algorithms for discrimination once a year and to document the process they used when developing their algorithms. Overall, the SDAA would help tackle bias and discrimination in algorithmic systems in the District by encouraging greater evaluation of these systems, promoting transparency and user rights, and ensuring strong enforcement action against violating entities.

Vermont: An Act Relating to the Use and Oversight of Artificial Intelligence in State Government

In early 2021, Vermont state legislators introduced the Act Relating to the Use and Oversight of Artificial Intelligence in State Government, which focuses on oversight of the state’s use of automated decision-making systems. The Act would require the state to create an inventory of all automated decision-making systems it develops and uses, including via public procurement processes. The inventory would have to include information on the data points systems rely on to make decisions, whether a system should be permitted to carry out independent decision making, and whether and how systems have been evaluated.

The Act would also establish a commission tasked with proposing a state code of ethics for AI that would be updated annually. The commission would also be responsible for providing guidance on when certain automated decision-making systems can be deployed, whether human oversight over decisions is necessary, and how an impacted individual can appeal a system’s decisions. The commission would additionally provide state policymakers with recommendations on laws and policies related to artificial intelligence. Overall, the bill incorporates important elements such as notice, human oversight, explainability, and appeals, which would help promote greater transparency and accountability around the state of Vermont’s use of AI. The bill has passed the House and Senate, and Vermont lawmakers are currently discussing amendments to it.

New York City: An Automated Decision System Task Force and a Hiring Algorithm Law

In 2018, New York City Mayor Bill de Blasio announced the creation of an Automated Decision System Task Force after the local council voted to study whether the algorithms the city uses to make decisions result in biased outcomes. The task force, comprising 16 members, issued its final report in November 2019, resulting in the creation of a new City Hall position responsible for setting policies on the use of algorithms. In December 2021, New York City passed a law prohibiting businesses from using artificial intelligence or AI-based tools to make hiring decisions about New York City residents unless those tools have been audited for bias within the past year.

However, the bill has some concerning flaws. First, it only requires employers to conduct bias audits to identify disparate impact with regard to race, sex, and ethnicity. Employers do not have to audit for disparate impact related to disability, age, sexual orientation, gender identity, and other protected characteristics, which leaves room for discrimination against certain categories of workers. Additionally, the final version of the bill only targets bias and discrimination during the hiring and promotion segments of the employment lifecycle; it does not touch on critical areas such as compensation and working conditions. Finally, the bill lacks adequate notice provisions and strong enforcement mechanisms.

Other Local Efforts

Over the past several years, many municipalities across the country have instituted bans on the use of facial recognition. San Francisco became the first city to do so in May 2019, banning law enforcement and other city agencies from using the technology. Since then, several other municipalities have followed suit, including Berkeley, California in October 2019, Portland, Oregon in September 2020, and Baltimore, Maryland with a ban on private sector use of facial recognition in September 2021. Currently, there are approximately 15 local-level bans on facial recognition in place in the United States.

In the short term, state and local efforts to promote algorithmic fairness, accountability, and transparency serve as valuable mechanisms for protecting individuals from potentially harmful systems and for placing critical responsibilities on the corporate and government entities that develop and deploy these systems. It is therefore critical to monitor where and how these laws are being implemented. In the long run, however, a patchwork of different regulations will make both compliance with and enforcement of these laws difficult, leaving many individuals vulnerable to the harmful impacts of biased algorithmic systems. Federal lawmakers must therefore pass meaningful legislation on algorithmic accountability.
