Centering Civil Rights in the Privacy Debate

Weekly Article
May 16, 2019

Can Congress prevent the disproportionate harm that sometimes-irresponsible commercial data practices inflict on marginalized communities?

The internet provides access to innumerable goods and services that many people rely on to survive, including jobs, housing, and general information. But nothing is free: In exchange for this access, companies often collect and process vast amounts of data on individuals. People have also learned the hard way, primarily through data breaches, that processing so much data presents real privacy pitfalls: identity theft and lost income; more intangible harms like damaged reputation and emotional distress; and everyday inconveniences like having to change passwords and update privacy settings.

People in marginalized communities are especially vulnerable to exploitative commercial data practices. The data collected through these practices often fuels discrimination, including restricted access to housing and employment, and opens avenues through which these communities can be targeted with voter suppression and hate campaigns.

Put another way, as our lives increasingly shift online, so, too, have methods of discrimination, now built on individual data profiles, and our laws have been slow to keep up. It's thus vital for Congress, in particular, to conceptualize privacy as a civil right and to reorient the privacy debate around the perspectives of the marginalized communities that these new dangers disproportionately harm.

Indeed, privacy and civil rights ought to go hand in hand, but they often don't. Civil rights discussions have largely been missing from the privacy debate, despite the reality that privacy infringements affect civil rights in myriad ways.

Erin Shields, the national field organizer for internet rights at the Center for Media Justice, explained some of these ways at a recent event on “Centering Civil Rights in the Privacy Debate,” hosted by New America’s Open Technology Institute and Color of Change. The “old regime of oppression and discrimination,” she said, has been “updated and compounded by algorithms and the ability of corporations, third-party data brokers, social media, and the government to collect an extreme amount of information on us and also to guess about us, and deliver us services or not deliver us services based on those guesses.”

Further, manipulated data can harm communities of color and low-income communities through over-policing, voter suppression, hate campaigns, online fraud, predatory schemes, digital redlining, and more. As Francella Ochillo, vice president of policy and general counsel at the National Hispanic Media Coalition, explained, it’s critical that “privacy laws acknowledge the enduring economic, political, and cultural oppression that still exists in this country to this day.”

For immigrant populations, there's also a crucial link between privacy and surveillance. Mijente, an organization that has historically focused on immigrant rights and that was also represented at the event, has shown that the tech industry has played a key role in accelerating the targeting, surveillance, detention, and deportation of immigrants.

By using software provided by the data-mining company Palantir, for instance, Immigration and Customs Enforcement (ICE) built profiles of immigrant children and their family members to track and detain undocumented immigrants, which then facilitated the family separation crisis. Tech companies' apparent willingness to participate, however indirectly, in this entrapment of undocumented immigrants, along with the lack of transparency around the practice, only exacerbates fears that civil and human rights are being violated as companies profit off the commodification of data.

In addition, some corporations have collected personal information without clearly disclosing what's being collected and why, sometimes repurposing data for other uses without consent and building tools that make segregation worse. Consider how Facebook originally collected users' phone numbers as a security measure against unauthorized logins, then used that same information to deliver targeted advertising. Numerous studies have already established that Facebook's ad delivery algorithm discriminates based on race and gender. The latest such study, published last month by researchers at Northeastern University, the University of Southern California, and Upturn, finds that this discrimination persists in the delivery of job and housing ads, even when advertisers don't target specific demographics and are trying to reach a broad audience.

Experts have long known that data practices can lead to discriminatory outcomes, and a growing body of evidence now proves it. So how can government and society begin to hold corporations accountable for these harmful consequences?

One possibility is legislation. Free Press and the Lawyers' Committee for Civil Rights Under Law recently published model legislation outlining how Congress can ensure that personal data isn't used to discriminate against protected classes in areas like employment, housing, and education. They also propose classifying online businesses as public accommodations, which would make it unlawful for them to discriminate against marginalized communities or to restrict their access. The model legislation would also give the Federal Trade Commission (FTC), the primary government agency responsible for regulating privacy, the authority to enforce the law. Senator Ed Markey (D-Mass.) has introduced a similar bill.

Another possibility is empowering the FTC to protect privacy and civil rights through broad rulemaking and other changes in the agency’s authority. Public Knowledge has advocated for these sorts of changes, which could help protect marginalized communities from these now-extensively documented civil rights harms. In general, Congress needs to ensure that the agency has the right resources and experts to understand and address these issues.

Ultimately, though, the debate on privacy ought to center perspectives from marginalized communities that are disproportionately affected, and the tech policy community more broadly needs to reflect the diversity of the United States.

Alisa Valentin, a communications justice fellow at Public Knowledge and another panelist at the event, has written about #TechPolicySoWhite to underscore the importance of diversifying the tech policy space. Affected people, especially black people, ought to be included in discussions about these issues, she said, particularly because stakeholders define "privacy" differently depending on their community and cultural background. (Notably, black and brown communities have already documented the impact of data on their lives through research like "Our Data Bodies" and "Before the Bullet Hits the Body—Dismantling Predictive Policing in Los Angeles," the latter by the Stop LAPD Spying Coalition.)

As Ochillo urged at the event, the privacy debate can't exclude these critical voices. "You have to be persistent and clear about elevating your communities' message in language that they actually understand," she said. "Because there are lots of different stakeholders in this debate who might all agree on where we're going, but disagree on how we're going to get there. What are you doing to get your constituents' message into those rooms?"