Don't Get Carried Away With Cybersecurity

Weekly Article
Dec. 6, 2018

The news that the personal data of 500 million Marriott customers was stolen broke less than a week ago, making this a good moment to capitalize on public attention and build support for more aggressive security policies. In a week or two, the Marriott breach will be old news and that momentum will almost certainly have evaporated, but in the meantime, Sen. Ron Wyden has released a discussion draft of a new bill, the Consumer Data Protection Act, aimed at ramping up the penalties imposed on companies, like Marriott, that suffer these types of massive breaches.

The CDPA draft gets a lot of things right about cybersecurity breaches—it is absolutely true that there are insufficient penalties for failing to protect customer data, that the Federal Trade Commission is not able to impose significant fines on breached companies and does not have adequate resources to investigate every major breach, and that the lack of clear-cut minimum-security standards for organizations storing personal information makes it all the harder to sort out these liability issues. But while Wyden’s office does a good job articulating the problems surrounding breaches like Marriott’s, the proposed solutions are less promising.

The most eye-catching piece of the proposed draft is a provision that would allow for executives who knowingly sign off on incorrect or inaccurate annual certifications of their companies’ data-security practices to face prison sentences of up to 20 years. In a largely sensible bill, this is a wild overreaction—and one that in no way helps companies struggling to figure out how to do a better job protecting sensitive data.

Perhaps in some very particular cases, where executives get away with deliberately lying about data security in ways that actually lead to devastating financial or physical harm, it would make sense to consider imprisonment. But that is simply not the case for most data breaches.

The presumption of the jail-time penalty seems to be that one of the big problems in security today is that executives are constantly lying about how good their data security is and that they are not sufficiently fearful of the consequences of breaches to invest resources in better security. No doubt that is true at some companies, but more often we see companies cluelessly make terrible decisions about security—as Marriott may have done if it did indeed store the private keys needed to decrypt sensitive customer data alongside the encrypted data itself, as reporting suggests. Some companies may be lying, but many more simply don't know what they should be doing—a problem the bill also takes steps to rectify by requiring the FTC to clarify what security and privacy measures it expects from companies. (This is, in many ways, easier said than done—different security measures may be better suited to different companies, and allowing for enough flexibility to meet everyone's needs may mean that the standards end up being too vague to allow for much enforcement anyway.)

Moreover, the notion that executives at most companies are not already concerned about data breaches seems ridiculous given how many top-level executives have lost their jobs in the aftermath of serious breaches at their firms. According to a study this year from Kaspersky Lab, 32 percent of data breaches in North America led to executives or managers losing their jobs at the targeted companies. After the Equifax breach last year, both the chief information officer and the chief security officer immediately retired. Yahoo's general counsel resigned last year as well, after the company experienced a series of major breaches. Uber's chief security officer was also fired last year, along with an in-house lawyer, following a breach at the company.

Sending executives to prison is not the only new penalty proposed in the current CDPA draft. The bill would also allow the FTC to fine companies that suffer data breaches up to 4 percent of their annual revenue—an excessive maximum fine borrowed from the European Union’s General Data Protection Regulation. Like the prison penalties, this is a major overcorrection of the current state of affairs and way out of line with the amount of time and money we should expect—or hope—to see companies spending on security.

It is entirely reasonable to say that the FTC should be able to impose larger fines on breached companies that take into account the noneconomic harms those breaches impose on customers. This would almost certainly spur companies to invest more resources in data security, especially since many companies face no fines at all from the FTC for their breaches. In October, for instance, Uber agreed to a settlement with the FTC that included several changes to its company policies but no financial penalty for its breaches in 2014 and 2016 that affected 57 million Uber drivers and riders. But a maximum fine of 4 percent of yearly revenue is a wild overreaction—one that offers companies no realistic guidepost for how much they should be spending on security. They're certainly not all going to spend that much, nor would we want them to, so a ceiling that high offers little help to companies trying to weigh the costs of additional security against the costs of a potential breach.

Data security is important, and I would like to see organizations do a better job at it. I would also like there to be stronger incentives to invest in security and clearer guidance about how to do that well. The CDPA draft published in the aftermath of the Marriott breach ostensibly aims to do both those things, but it loses sight of the fact that security is not, and should not be, a company’s only priority. The maximum penalties laid out in the draft seem to indicate that a company cannot spend too much on security or be too afraid of data breaches—but that is nonsense.

If you’ve ever had to sit through an unhelpful cybersecurity training course or been forced to change a password every 90 days or been locked out of an email account for logging in from a new device, you know that it is entirely possible to spend too much money and waste too much time on cybersecurity without actually making anything more secure. It is entirely possible for additional security measures to prevent you from doing useful and worthwhile things because of the extra effort required. It is similarly possible for increased fear about breaches and their consequences to dissuade a firm from undertaking new projects that might involve a little more risk or data.

Wyden is right to be concerned that we haven’t yet struck the right balance between security and all the other competing priorities firms face, but he’s wrong to tip the scales so drastically in favor of security at the cost of all else. Figuring out that balance will require more measured, incremental steps of the sort outlined in the more levelheaded sections of the CDPA. These include providing clearer security guidelines for companies and more resources for policing and investigating security incidents. It’s great that Congress is worked up about cybersecurity in the aftermath of yet another massive data breach, and it’s important to hold onto that energy even after the Marriott breach fades from the news cycle, but it’s also important not to get too carried away.

This article originally appeared in Future Tense, a collaboration among Arizona State University, New America, and Slate.