How Far Should Congress Go to Protect Privacy?

Weekly Article
Sept. 26, 2019

If you’re in California, you may have recently come across a spate of ominous Facebook and Twitter ads.

“The FREE websites and apps you use every day could start costing you,” warns one such harbinger of doom, while another insists that “[u]sing the internet shouldn’t hurt your wallet.”

Funded by a tech trade association, the ads refer to the state’s landmark consumer privacy legislation. Set to take effect next year, the California Consumer Privacy Act of 2018 (CCPA) grants individuals some degree of control over their privacy by allowing them to opt out of the sale of their data, among other things. While the ads seem to forecast the end of “free” services online, CCPA requires no such thing: Companies might choose to change their business models in response to the legislation, but “free” services will very likely continue to be available—and with the added consumer benefit of increased privacy protections.

Still, misleading characterizations aside, the ads point to nationwide concerns about how privacy protections could—for better or worse—transform the business model underpinning many of our online services.

Since the 1990s, online companies have offered services that, while ostensibly free, require users to relinquish their personal data and expose themselves to privacy risks—in other words, to pay with their privacy. Take, for instance, Google's search engine product. While users don't pay monetary fees for Google Search, the company collects detailed data on what they type into the search field, what they click before and after searching for different terms, and how much time they spend on the pages they visit. Google monetizes this data by using it to predict user behavior and identify which ads individuals are most likely to click on.

Pioneered by Google, this behavioral advertising strategy soon caught on across the internet and is now employed by news publications, social networks, and other companies. And while the model might lead to increased efficiency and profit for companies, it also subjects users to data breaches and other widespread privacy intrusions. New America’s Open Technology Institute explores these tensions—and the potential impacts of privacy legislation on online business models—in a new report.

Behavioral advertising can harm users in a number of ways. First, it violates consumer expectations for privacy through tactics that users find "creepy" or "too aggressive"—particularly when they feel like they're being followed around by ads for certain products. Second, any data a company collects becomes a potential target for hackers and data breaches. Third, companies can further upset user expectations by wielding data—without users' permission—for purposes beyond what it was originally collected for.

New legislation could rein in these harms by limiting or outright banning behavioral advertising. But such a provision could have broad downstream effects on individual privacy, innovation, compliance costs, socioeconomic equality, and more—all things policymakers should take into account when considering comprehensive federal privacy legislation.

Legislation that restricts behavioral advertising could spur business models that better protect individual privacy—including those rooted in traditional contextual advertising. Just as TV, print, and radio ads are “targeted” to their anticipated audience, online contextual ads hinge on a website’s content rather than detailed behavioral profiles of individual users. By allowing sites to continue advertising without relying on data collection, contextual advertising presents fewer privacy risks than behavioral advertising. Many online businesses, such as search engine DuckDuckGo, already successfully utilize contextual ads while maintaining profitability.

At the same time, legislation could pose challenges for businesses that lack adequate resources and the relevant expertise to comply with rigorous requirements. If legislation imposes burdensome demands, new companies might have a harder time entering the market. This isn’t to say small businesses should be exempt from privacy protections—they can intrude on privacy just as much as large companies if their data and privacy practices prove inadequate. Legislators should consider both the risks and needs of small companies when drafting rules to hold businesses accountable for user privacy.

Furthermore, legislation could bar companies from charging users a higher price to protect their privacy, which would prevent companies from recouping lost ad revenue through paid tiers. It's not clear whether such a ban would yield a net gain or loss: Allowing companies to charge for privacy could unfairly exacerbate the digital divide, as low-income consumers might not be able to afford privacy protections. At the same time, prohibiting companies from charging a subscription fee when a user opts out of behavioral advertising would compel them to find new ways of generating revenue.

Congress now has a critical, complex task ahead: protecting consumer privacy while encouraging competition and innovation online. There are no clear answers, but these issues should be top of mind for both users and businesses as momentum builds around federal privacy legislation.