Sept. 17, 2020
In June, amid a movement for Black lives that would rank among the largest civil rights protests in history, tech companies, like so many American institutions, faced a reckoning over racial equity. And their responses were revealing.
Tech companies had, even very recently, lobbied hard against bans on facial recognition technology, but their tone quickly changed in response to the widespread protests. IBM was first to announce that it would no longer offer or develop facial recognition technology, writing to Congress that it would oppose uses “of any technology… for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values…”. Not long after, Microsoft and Amazon followed suit, announcing that they would not sell the technology to state and local law enforcement. Regardless of whether these companies’ motives were focused on public relations or genuinely aimed at mitigating racial inequality, these moves should have been a wake-up call. Instead, Congress has remained largely silent on facial recognition technology.
Numerous studies have shown that facial recognition technology contains alarming inaccuracies, particularly for certain groups, including women and people with darker skin. In fact, multiple cases of facial recognition misidentifications leading to wrongful police action have come to light in the past few months—all involving Black men. Robert Williams and Michael Oliver were both wrongfully arrested and imprisoned by Detroit police based on facial recognition mismatches, adding to the already compelling evidence that the tech is biased and dangerous in the hands of law enforcement. Detroit’s police chief has even acknowledged that the software misidentifies people the overwhelming majority of the time. Yet, police continue to use it.
Significantly, the technology is dangerous even when accurate. In our imperfect nation, surveillance technologies, including facial recognition, are disproportionately deployed in Black and Brown communities by law enforcement agencies that may be biased or may misuse the tech, rendering these communities far more vulnerable to the technology’s harms.
Further, the faces of at least half of all U.S. adults are already included in police facial-recognition databases, resulting in a “perpetual line-up” and perpetual suspicion from police. And, thanks to state DMVs that have been sharing our driver’s license photos with the FBI, and companies like Clearview AI that scrape our photos off social media to create databases for their police clients, police are amassing ever larger sets of data. That means every time even a minor crime happens in our neighborhoods, nearby camera footage—however unreliable—may be tested against our old Facebook or driver’s license photos so that algorithms can deem us suspects.
Police use of facial recognition technology also invades privacy through its omnipresence and chills speech. Law enforcement has a long history of surveilling civil rights protests. And facial recognition technology, one of the most powerful surveillance tools imaginable, can identify thousands of protesters from a single CCTV camera. So while such surveillance is not new, tracking at this scale is, and it undermines a foundational principle of our democracy—our right to free speech. For example, recent reports confirm that police in many jurisdictions have been using facial recognition technology to monitor Black Lives Matter protesters, even as tech companies decry its use and halt sales.
A handful of members of Congress have taken a serious look at the technology and offered meaningful legislation. Most notably, Senators Markey (D-Mass.) and Merkley (D-Ore.), and Representatives Jayapal (D-Wash.) and Pressley (D-Mass.) introduced the Facial Recognition and Biometric Technology Moratorium Act this summer. That bill would halt government use of facial recognition and other biometric surveillance tools at the federal level, and ban federal funds from being used by state and local law enforcement to purchase such tech. However, this and other narrower facial recognition bills have gained little traction and are unlikely to receive consideration in this Congress, for a number of reasons, not the least of which is that any bill attempting to rein in police power is extremely politically fraught.
With Congress unwilling or unable to act on the issue, states and localities are fortunately stepping up. In May 2019, San Francisco became the first city to ban facial recognition use by police and other agencies. Thirteen other cities across California, Massachusetts, and Maine have since banned government use of the technology. And, just last week, Portland, Oregon became the first city to ban both government and commercial use of facial recognition—the strongest regulation of the technology yet.
Other local campaigns for transparency and oversight laws have also made progress in reining in police surveillance technologies writ large. Many advocates, including New America’s Open Technology Institute, are pushing for Community Control Over Police Surveillance (CCOPS) laws, which would establish democratic processes surrounding the acquisition and use of surveillance tech. While, generally, these ordinances would not ban facial recognition or other technologies outright, they would require transparency into what police technologies are in use, allow opportunities for community input before deployment, and offer strong oversight mechanisms to the local legislature and community. In a crucial win for advocates, New York City recently became the fourteenth city to enact such a law. Similar ordinances have been put in place in Seattle, Nashville, Madison, San Francisco, and nine other cities nationwide. OTI is working with a coalition of advocates to pass similar legislation in Washington, D.C., where police quietly use facial recognition and a plethora of other surveillance technologies.
The evidence that facial recognition technology infringes on our rights has never been clearer. We have numerous examples of wrongful police action based on misidentifications. The largest companies affiliated with the technology have acknowledged that it perpetuates racial injustice, and have all but invited Congress to regulate them. Now should be the moment for national policymakers to rein in law enforcement use of this technology. Yet, with Congress tied up with pandemic aid and elections, the only near-term hope for action is at the state and local levels, where efforts to scale back police technologies have proven most effective. Whether through community oversight ordinances or narrower efforts to address individual technologies, local efforts offer hope for more accountability and transparency nationwide, one city at a time.