For Marginalized Communities, the Stakes Are High
The idea of privacy as it is often discussed and understood has little value when it excludes certain communities. For hundreds of years, many people in the United States have not been granted true anonymity and autonomy.1 Lantern laws in the eighteenth century dictated that Black, mixed-race, and Indigenous enslaved people had to carry candle lanterns with them after sunset.2 Many enslaved people were tracked and documented meticulously, and slave branding served as a precursor to modern biometric identification.3
Today, many people still lack true anonymity and autonomy. Most public transportation is equipped with surveillance devices,4 and individuals have to give up a certain amount of data to participate in social safety net programs.5 While these trends impact more white people in terms of sheer numbers, Black and Indigenous people are disproportionately impacted per capita.6 In any number of ways, details about your life that may seem innocuous become a handful of carefully chosen factors that can determine whether you are more likely to be surveilled by government or corporations—details like whether you use an Android phone or iPhone, attend public or private school, are incarcerated or in contact with a loved one who is incarcerated, or live in a neighborhood where the police are overly present. These details are also factors that are inextricably and tacitly linked to race, class, and cultural identity.
Without bright-line, twenty-first-century civil rights protections, the discriminatory data practices adopted by online companies will continue to have long-lasting, severe consequences. These practices disproportionately harm people of color—especially Black and Brown communities—women, immigrants, religious minorities, members of the LGBTQ+ community, low-income individuals, and other marginalized communities. Centering these communities in this work helps us understand exactly how high the stakes are, and underscores the need for solutions that directly mitigate these harms.
Our panel discussion covered five particular areas that exemplified the high stakes: employment discrimination, housing discrimination, increased surveillance, socioeconomic inequalities, and personal safety.
Data Practices Can Facilitate Employment Discrimination
Companies can use data in ways that facilitate gender- and race-based employment discrimination.7 Many hiring processes have moved online; not only do companies list ads for job openings online, but humans or computers (through algorithms built and programmed by humans) increasingly decide who sees the openings.8 Algorithms may also screen job applicants’ resumes, analyze job interviews, and more.9 Employers are now trying to leverage new hiring tools with the stated goals of creating efficiencies and improving personalization, such as reducing the cost per hire and optimizing the quality of hire.10 However, is “personalization” actually just “discrimination,” as Bogen argued? As companies attempt to maximize efficiency and personalization in the hiring process, they may actually just be discriminating against people of color, women, and other underrepresented groups.
Advertising platforms, for instance, can enable employment discrimination by choosing who sees particular job ads, with serious implications for job opportunities. When recruiters advertise positions, they often pay for digital advertising on large online platforms to maximize their reach and effectiveness.11 These platforms in turn leverage the troves of user data they have amassed and give employers the ability to dictate audience parameters to target ad delivery. Facebook, for instance, used to allow targeting based on demographic data, and still allows targeting based on categories that may be highly correlated with race and other protected classes, such as inferred interests.12 Though employers set the initial targeting parameters, the ad platforms may ultimately determine who within a target audience sees an ad, based on their own predictions of how likely each user is to engage with the ad or apply for the position.13
Online employment ads have been delivered in discriminatory ways. Numerous studies have established that Facebook’s ad delivery algorithm has discriminated based on race and gender, and in response the platform has adopted several policies to prevent discrimination in certain types of ads, including eliminating age-, gender-, and zip code-based targeting options.14 In 2018, ProPublica found that Facebook allowed employers to advertise jobs selectively to users of specific genders.15 Fifteen employers, including Uber, advertised jobs exclusively to men, and a community health center in Idaho advertised job openings for nurses and medical assistants only to women.16 Another recent study found that this discrimination persists in job listings even when advertisers do not target specific demographics and are trying to reach a broad audience.17 Advertisers can also still exclude users based on interests that are highly correlated with race by using custom audiences or location.18 As Bogen explained on the panel, “All the data still exists in the infrastructure, they just took away some of the tools [used for explicit discrimination], but those algorithms look at historical information being pushed forward into the future.”19
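Bogen’s point about historical data being pushed into the future can be made concrete with a toy simulation. The sketch below is a deliberately simplified, hypothetical model, not Facebook’s actual delivery system; it assumes only that a platform scores users on engagement predictions learned from historical clicks and then serves a broadly targeted ad to the highest scorers.

```python
# Toy simulation of engagement-optimized ad delivery (hypothetical model,
# not any real platform's system).
import random

random.seed(1)

# Two user groups; suppose group A historically clicked tech-job ads more
# often, so the platform's learned click-through predictions differ.
PREDICTED_CTR = {"A": 0.08, "B": 0.05}
users = ["A" if random.random() < 0.5 else "B" for _ in range(100_000)]

# The advertiser targets everyone, but the platform ranks users by
# predicted engagement and serves the ad only to the top 20 percent.
scored = sorted(
    users,
    key=lambda group: PREDICTED_CTR[group] + random.gauss(0, 0.02),
    reverse=True,
)
shown = scored[: len(scored) // 5]

share_a = sum(group == "A" for group in shown) / len(shown)
print(f"Group A is ~50% of users but {share_a:.0%} of the delivered audience")
```

Even with no demographic targeting at all, the group with the higher historical click rate ends up dramatically overrepresented in the delivered audience; past engagement differences quietly become a demographic filter.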
Predictive technologies also discriminate in hiring. Increasingly, predictive technologies that rely on data are used in recruiting, screening, interviewing, and selecting candidates. Without clear transparency and oversight, these technologies can lead to bias, thereby undermining equity in hiring.20 In particular, Bogen highlighted that discrimination can occur as recruiters attempt to leverage the power of data to streamline candidate screening: “Amazon’s racist hiring AI was saying women were probably not good candidates because they didn’t resemble the candidates that the company had seen in the past. These are tangible outcomes that weren’t in the conversation before that are now becoming more clear.”21 These outcomes, according to Bogen, make it obvious that machines, too, can be racist, as they are often a reflection of society and can perpetuate and exacerbate biases.22 Yet the public has little visibility into most of these practices.
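The Amazon episode follows the same logic, and a minimal synthetic sketch can illustrate it. Everything below is invented for illustration and is not Amazon’s tool or data: the model is never shown applicants’ gender, yet a gender-correlated resume keyword inherits the penalty baked into biased historical hiring decisions.

```python
# Synthetic sketch of proxy discrimination in a resume screener
# (hypothetical data and model, not Amazon's system).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

skill = rng.normal(size=n)        # what *should* drive hiring decisions
is_woman = rng.random(n) < 0.5    # protected attribute (never shown to model)
# A resume keyword (say, "women's chess club captain") that merely
# correlates with gender.
keyword = (is_woman & (rng.random(n) < 0.8)).astype(float)

# Biased historical labels: past hiring tracked skill but systematically
# penalized women.
hired = skill - 1.0 * is_woman + rng.normal(0, 0.5, n) > 0

# Train a "gender-blind" screener on skill and the keyword only.
X = np.column_stack([skill, keyword])
model = LogisticRegression().fit(X, hired)

print(f"weight on skill:   {model.coef_[0][0]:+.2f}")  # comes out positive
print(f"weight on keyword: {model.coef_[0][1]:+.2f}")  # comes out negative
```

The keyword’s learned weight is negative: the historical penalty against women resurfaces through a proxy, even though gender was never in the training data.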
Data Practices Can Facilitate Housing Discrimination
Access to housing has been recognized as a basic human right, one that is a necessary condition for other economic, social, and cultural rights. The process for securing housing has moved increasingly online. Whereas home buyers in the 1980s typically perused newspaper ads to find a home, 44 percent of home buyers in 2016 looked for properties online first.23 This process is also vulnerable to discrimination.
Housing discrimination is not new. The Fair Housing Act prohibits certain types of housing discrimination. But online means of housing discrimination have supplemented offline means. Historically, Shields relayed on the panel, marginalized communities have engaged with the legal system to gain rights to material conditions like housing. “As our lives have become increasingly digitized,” she said, “discrimination from before has mutated… [An] example could be racial covenants and redlining in housing. This has mutated into Facebook allowing housing providers to select racial categories of who’s seeing ads for housing.”24
Targeting online housing ads is already a civil rights issue. In 2016, ProPublica found that Facebook’s platform allowed advertisers to exclude Black, Hispanic, and other groups called “ethnic affinities” from seeing housing ads.25 A recent study found that, in the same way that gender- and race-based discrimination persists in job listings, discrimination occurs in the targeting of housing ads even when advertisers do not opt to target specific demographics.26 Further, advertisers can still discriminate by selecting audience parameters that are highly correlated with race, such as inferred interests or location.27
Secondary, nonconsensual data collection on these large platforms can also lead to further housing discrimination, with some companies collecting personal information without clearly disclosing what is being collected and what it might be used for. In previous cases, this secondary use has allowed companies to repurpose user data, without consent, into tools that make segregation worse. As Laroia explained on the panel, “It is fundamentally immoral that companies are using information about your ordinary life online, and that information is being turned around and used to build tools that facilitate segregation—to fundamentally undo the important rights and progress we’ve made to build a more equitable society.”28 Facebook, for instance, originally collected users’ phone numbers for two-factor authentication, but also used that information to deliver targeted advertising.29 With a phone number, one can find a wide range of personal information, including names, education and career histories, and locations—all of which can serve as proxies for facilitating housing discrimination through targeted ad delivery.30
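The mechanics of such secondary use are mundane. The sketch below uses entirely fabricated tables and values to show how little engineering it takes for an identifier collected for security purposes to become a join key into far richer records.

```python
# Fabricated example: a phone number collected for two-factor
# authentication doubles as a lookup key into a data broker's records.
security_db = {
    "+15555550100": {"collected_for": "two-factor authentication"},
}
data_broker = {
    "+15555550100": {
        "name": "J. Doe",
        "zip_code": "60629",
        "employer": "Acme Corp",
        "renter": True,
    },
}

def build_ad_profile(phone: str) -> dict:
    """Nothing technical stops reuse of the same key for ad targeting."""
    profile = dict(data_broker.get(phone, {}))
    profile["reachable"] = phone in security_db
    return profile

# The user consented to a security feature and got an ad profile instead,
# including zip code and housing status, both proxies for race and class.
print(build_ad_profile("+15555550100"))
```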
Data Practices Facilitate Increased Surveillance of Marginalized Communities, Especially Communities of Color
Digital surveillance disproportionately violates the privacy of those communities already marginalized and unreasonably suspected. The history of surveillance of marginalized communities in the United States dates back to colonial times. As media activist Malkia Cyril said at the 2017 Color of Surveillance conference, “Surveillance technologies have been used to separate the citizen from the slave, to protect the citizen from the slave.”31 Records of slaves in plantation ledgers served as proto-biometric databases and, combined with slave passes, slave patrols, and fugitive slave posters, were the precursors to modern policing and tracking.32 New technology and commercial data practices have continued this legacy of surveillance with a disparate impact on marginalized communities, especially immigrants and communities of color. For instance, commercial databases have been accessed by government surveillance programs and law enforcement agencies, without a warrant or probable cause, and used to target people of color.33 Social media data can also be used for racial profiling.34
Companies are utilizing commercial data practices that rely on privacy intrusions, leading to increased surveillance. Mijente, which has historically focused on immigrant rights, learned that Immigration and Customs Enforcement (ICE) used Palantir’s services to track and detain undocumented immigrants.35 ICE built profiles of immigrant children and their family members, logging relatives and guardians who showed up to claim unaccompanied minors in Palantir’s investigative case management system.36 According to González, federal agencies and local law enforcement across the country were able to share surveillance data to locate, detain, and deport immigrants using Palantir’s software, thereby circumventing sanctuary policies that prohibit cooperation between ICE and local police in certain cities, and exacerbating the family separation crisis. As González noted, “Immigrants, people of color, vulnerable communities usually serve as laboratories for testing new technologies,” as companies leverage new ways of commodifying data, often without regard for civil and human rights.37
There is a history of using personal information to surveil minorities in the United States.38 In the aftermath of the September 11 attacks, the Bush administration enacted the National Security Entry-Exit Registration System (NSEERS) to register non-citizen visa holders, primarily from Muslim-majority countries.39 While this system is now defunct, new practices have cropped up that continue to subject members of marginalized communities to undue surveillance. In June 2019, the U.S. State Department began collecting and reviewing the social media accounts and identities of visa applicants seeking to enter the country.40 This practice “only further illustrates the extent to which social media is now being weaponized against immigrant and non-immigrant minority populations.”41 Individuals’ personal data feeds into database screening and watchlists that determine who can work, vote, fly, and more—which in turn can enable discrimination and profiling, bearing significant risks to the civil rights and liberties of marginalized communities.42
Data practices also disproportionately harm Black and Brown communities. Shields noted that long-standing, Black- and Brown-led movements and research efforts have documented the discriminatory impact of data, citing examples like Our Data Bodies and the Stop LAPD Spying Coalition. Our Data Bodies (ODB) has exposed how conviction and incarceration data can be used as a barrier to employment, services, and housing, and how that practice disproportionately affects Black residents.43 The Stop LAPD Spying Coalition has publicized surveillance techniques and resources used by the Los Angeles Police Department to discriminate against people of color, including how the department’s use of predictive policing has criminalized spaces in Los Angeles mostly occupied by Brown and Black residents.44
Exploitation of Personal Data Perpetuates Socioeconomic Inequity
Discriminatory data practices also perpetuate, and even escalate, socioeconomic inequities. Companies may use information about people in ways that entrench existing wealth disparities, such as using data on bill payments and credit scores to reject requests for bank loans.45
Exploitation of data in these ways leads to socioeconomic harm for specific groups of people who have historically been subject to discrimination. Ochillo gave the example of companies using data on customers’ income levels and zip codes to “personalize” prices and products for people living in specific neighborhoods. A person living in a wealthier neighborhood may have access to better prices or products than someone living in a poorer neighborhood. As Ochillo said, “Data that is originally collected with good intentions can easily be repurposed to discriminate or over-police in communities of color.”46 Surveillance and violations of privacy also perpetuate digital inequality and the socioeconomic divide. Valentin pointed to Mary Madden’s writings on how poor people “experience these two extremes: hypervisibility and invisibility,”47 lacking both the agency and resources to challenge undue harms.48 Low-income individuals are subject to greater suspicion and monitoring when applying for government benefits, and they live in heavily policed neighborhoods where predictive policing tools can unfairly target them. At the same time, if low-income individuals are not visible enough online, they can lose out on education and job opportunities. Low-income communities are also the most likely to suffer from surveillance and violations of their privacy and civil liberties: low-income internet users are significantly less likely than those in high-income households to use privacy settings to limit who can see what they post online (57 percent vs. 67 percent).49
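A toy pricing rule shows how quickly “personalization” keyed to neighborhood data shades into discrimination. The zip codes, income figures, and markup logic below are all invented for illustration and describe no particular company’s practice.

```python
# Hypothetical pricing rule with invented data -- not any real company's
# logic. "Personalizing" on zip code alone reprices whole neighborhoods.
MEDIAN_INCOME_BY_ZIP = {
    "60629": 52_000,   # lower-income neighborhood (invented figure)
    "60611": 123_000,  # wealthier neighborhood (invented figure)
}

def personalized_price(base_price: float, zip_code: str) -> float:
    """Apply a higher markup where less comparison shopping is expected."""
    income = MEDIAN_INCOME_BY_ZIP.get(zip_code, 75_000)
    markup = 1.15 if income < 60_000 else 1.00
    return round(base_price * markup, 2)

print(personalized_price(100.0, "60629"))  # 115.0: the poorer area pays more
print(personalized_price(100.0, "60611"))  # 100.0
```

Because zip code correlates strongly with race and income, a rule like this never needs to mention either in order to reproduce both disparities.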
Protecting Personal Data Protects Personal Safety
The collection and misuse of personal information can cause physical and emotional harm. Data is directly linked to personal safety, especially when that data is used to identify people or determine their location. In particular, data practices can facilitate hate campaigns and help locate their targets, physically endangering individuals’ lives.
Violent hate groups use online platforms and services to target specific populations. “Data manipulation techniques have been used to…stir hate and division within communities and against religious minorities,” Ochillo pointed out. Civil rights groups like Muslim Advocates have called on social media platforms to rein in the amplification of white supremacist hate groups.50 By using search engine optimization strategies to exploit “data voids,” or search terms that lack robust results, bad actors can “hijack” certain issues and reach potential new audiences.51 As this rhetoric spreads online, it can lead to real-life violence, as several incidents have demonstrated. For instance, Robert Bowers, who murdered worshippers at a Pennsylvania synagogue in October 2018, was active on Gab, a Twitter-like platform used by white supremacists.52 And a recent study led by New York University found that online hate speech on Twitter predicts real-life racial violence.53
In addition, there are significant risks to personal safety associated with the sharing of individuals’ location data. Motherboard published an extensive report earlier this year on how major telecom companies like AT&T, Sprint, and T-Mobile sold access to their customers’ real-time location data to location aggregators, who then resold the data to law enforcement agencies, car salesmen, property managers, bail bondsmen, and bounty hunters.54 This data likely included “assisted GPS” data, which is used by first responders to locate 911 callers in emergency situations and can pinpoint a person’s location inside a building. The precise location data was allegedly used by two bounty hunters who tracked a man on the run from a first-degree drug charge in Minnesota to a car dealership in Texas.55 All three men died in the ensuing shootout, which also endangered other customers at the dealership.56
The civil rights implications of these data and surveillance tactics are many. Such data could be used to track undocumented immigrants and their families. It could be used to identify groups of people or communities and expose their meeting locations, information that could then be used to facilitate hate crimes and identity-based violence. For instance, Grindr, a popular LGBTQ+ dating app, disclosed the HIV status, GPS location, email addresses, and other profile information of its users to third parties without user consent.57 It is paramount, therefore, to protect members of marginalized communities in a way that accounts for the greater risk that privacy violations pose to their civil rights.
Citations
- See, e.g., Alvaro M. Bedoya, “The Color of Surveillance,” Slate, January 18, 2016, source.
- Claudia Garcia-Rojas, “The Surveillance of Blackness: From the Trans-Atlantic Slave Trade to Contemporary Surveillance Technologies,” Truthout, March 3, 2016, source.
- Simone Browne, “Digital Epidermalization: Race, Identity and Biometrics,” Critical Sociology 36 (2009), source.
- Lindsey Mancini, Andrea Soehnchen, Phillip Soehnchen, Patrik Anderson, Johan Wallén, Video Surveillance in Public Transport, (International Association of Public Transport, November 2015), source.
- Emma Coleman and Myacah Sampson, “The Need to Regulate AI Implementation in Public Assistance Programs,” The Ethical Machine: Big Ideas for Designing Fairer AI and Algorithms, (Cambridge, MA: Shorenstein Center, April 12, 2019), source.
- Michele Gilman and Rebecca Green, “The Surveillance Gap: The Harms of Extreme Privacy and Data Marginalization,” NYU Review of Law and Social Change 42 (2018): 253-307, source.
- See, e.g., Miranda Bogen and Aaron Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias, (Washington, DC: Upturn, December 2018), source.
- Bogen and Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias.
- See, e.g., Aaron Smith, “Public Attitudes Toward Computer Algorithms,” Pew Research Center, November 16, 2018, source.
- Erica Volini et al., Leading the social enterprise: Reinvent with a human focus, (Deloitte Insights, 2019), source.
- See, e.g., “Don’t Post and Pray—Control Your Job Posting Results,” Recruiting.com, source.
- Muhammad Ali et al., “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes,” Computers and Society, (April 19, 2019), source.
- Bogen and Rieke, Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias.
- Sheryl Sandberg, “Doing More to Protect Against Discrimination in Housing, Employment and Credit Advertising,” Facebook, March 19, 2019, source.
- Ariana Tobin and Jeremy B. Merrill, “Facebook is Letting Job Advertisers Target Only Men,” ProPublica, September 18, 2018, source.
- Tobin and Merrill, “Facebook is Letting Job Advertisers Target Only Men.”
- Muhammad Ali et al., “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes.”
- Muhammad Ali et al., “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes.”
- Francella Ochillo, Gaurav Laroia, Erin Shields, Miranda Bogen, Alisa Valentin, Priscilla González, Brandi Collins-Dexter, “Centering Civil Rights in the Privacy Debate,” (Panel, Washington, DC, May 9, 2019), source.
- Muhammad Ali et al., “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes.”
- Rachel Goodman, “Why Amazon’s Automated Hiring Tool Discriminated Against Women,” American Civil Liberties Union, October 12, 2018, source. In 2014, Amazon started a project to automate internal hiring with the goal of building an algorithm that could review resumes and select candidates. The project was terminated when staff realized that the tool systematically discriminated against women applying for technical jobs like software engineer positions.
- For more on the diversity crisis in the AI sector, see Sarah Myers West, Meredith Whittaker, and Kate Crawford, Discriminating Systems: Gender, Race, and Power in AI, (AI Now Institute, April 2019), source.
- Jessica Lautz, Meredith Dunn, Brandi Snowden, Amanda Riggs, and Brian Horowitz, Real Estate in a Digital Age 2017 Report, (National Association of Realtors, 2017), source.
- Ochillo, Laroia, Shields, Bogen, Valentin, González, Collins-Dexter, “Centering Civil Rights in the Privacy Debate.”
- Julia Angwin and Terry Parris Jr., “Facebook Lets Advertisers Exclude Users by Race,” ProPublica, October 28, 2016, source.
- Muhammad Ali et al., “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes.”
- Muhammad Ali et al., “Discrimination through optimization: How Facebook’s ad delivery can lead to skewed outcomes.”
- Ochillo, Laroia, Shields, Bogen, Valentin, González, Collins-Dexter, “Centering Civil Rights in the Privacy Debate.”
- Natasha Lomas, “Yes Facebook is using your 2FA phone number to target you with ads,” TechCrunch, September 2018, source. The Federal Trade Commission recently prohibited Facebook from engaging in that practice. United States of America v. Facebook, Inc., July 24, 2019, D.D.C. 7, source. Facebook has also used artificial intelligence to learn more about users’ hobbies, preferences, and interests by mining users’ photos to train its facial recognition software toward the goal of creating new platforms to place more focused targeted ads, which the FTC also recently prohibited. Jared Bennett, “Facebook: Your Face Belongs to Us,” Daily Beast, July 31, 2017, source.
- Steven Petrow, “With my cell-phone number, a private eye found 150 pages on me,” USA Today, July 24, 2017, source.
- Malkia Cyril, “We Are the Color of Freedom: To Win Migrant Rights, Demand Digital Sanctuary,” presentation at Color of Surveillance Conference, Georgetown Law Center, June 22, 2017, source.
- Barton Gellman and Sam Adler-Bell, The Disparate Impact of Surveillance, (The Century Foundation, December 21, 2017), source.
- See, e.g., Eli Rosenberg, “Motel 6 will pay $12 million to guests whose personal data was shared with ICE,” Washington Post, April 8, 2019, source.
- Koustubh “K.J.” Bagchi, “Privacy in the Digital World: Breaches Underscore Need for Federal Action,” Medium, July 3, 2018, source.
- While Amazon does not directly contract with ICE, the company has been heavily criticized for providing critical support to companies like Palantir through its Amazon Web Services cloud storage platform. Rachel Sandler, “Internal Email: Amazon Faces Pressure From More Than 500 Employees to Cut Ties with Palantir for Working with ICE,” Forbes, July 11, 2019, source.
- Manish Singh, “Palantir’s software was used for deportations, documents show,” TechCrunch, May 2019, source.
- Ochillo, Laroia, Shields, Bogen, Valentin, González, Collins-Dexter, “Centering Civil Rights in the Privacy Debate.”
- Bagchi, “Privacy in the Digital World: Breaches Underscore Need for Federal Action.”
- Nadeem Muaddi, “The Bush-era Muslim registry failed. Yet the U.S. could be trying it again,” CNN, December 22, 2016, source.
- Sandra E. Garcia, “U.S. Requiring Social Media Information from Visa Applicants,” New York Times, June 2, 2019, source.
- Bagchi, “Privacy in the Digital World: Breaches Underscore Need for Federal Action.”
- Margaret Hu, “Big Data Blacklisting,” Florida Law Review 67 (March 2016): 1735-1809, source.
- “Charlotte,” Our Data Bodies, source. In a state like North Carolina, this data practice has significant implications for racial equity, since Black residents are five times more likely than white counterparts to be incarcerated.
- Stop LAPD Spying Coalition, Before the Bullet Hits the Body, (Los Angeles, CA: May 8, 2018), source.
- Mikella Hurley and Julius Adebayo, “Credit Scoring in the Era of Big Data,” Yale Journal of Law and Technology 18, no. 1 (2017), Article 5, source.
- Ochillo, Laroia, Shields, Bogen, Valentin, González, Collins-Dexter, “Centering Civil Rights in the Privacy Debate.”
- Ochillo, Laroia, Shields, Bogen, Valentin, González, Collins-Dexter, “Centering Civil Rights in the Privacy Debate.”
- Mary Madden, “The Devastating Consequences of Being Poor in the Digital Age,” New York Times, April 25, 2019, source.
- Mary Madden, Privacy, Security, and Digital Inequality, (New York, NY: Data & Society, September 27, 2017), source. Communities of color also suffer from lack of education and training with digital tools, leaving them vulnerable to breaches in privacy and attacks on their data, as well as online scams and fraud.
- “Civil Rights Groups Call on Social Media Platforms to Better Address Hate Violence and Groups,” Muslim Advocates, February 22, 2018, source.
- Michael Golebiewski and danah boyd, Data Voids: Where Missing Data Can Easily Be Exploited, (New York, NY: Data & Society, May 2018), source; and Rebecca Lewis, Alternative Influence: Broadcasting the Reactionary Right on YouTube, (New York, NY: Data & Society, September 2018), source.
- Rachel Hatzipanagos, “How Online Hate Turns into Real-Life Violence,” Washington Post, November 30, 2018, source.
- Kunal Relia, Zhengyi Li, Stephanie H. Cook, and Rumi Chunara, “Race, Ethnicity and National Origin-based Discrimination in Social Media and Hate Crimes Across 100 U.S. Cities,” Computers and Society, (submitted for peer review January 31, 2019), source.
- Joseph Cox, “I Gave a Bounty Hunter $300. Then He Located Our Phone,” Motherboard Tech by Vice, January 8, 2019, source.
- Joseph Cox, “Black Market T-Mobile Location Data Tied to Spot of a Triple Murder,” Motherboard Tech by Vice, June 26, 2019, source.
- Joseph Cox, “Black Market T-Mobile Location Data Tied to Spot of a Triple Murder.”
- Alison Bateman-House, “Why Grindr’s Privacy Breach Matters to Everyone,” Forbes, April 10, 2018, source.