April 26, 2018
These are dizzying times for data breaches. This week, the Dubai-based ride-hailing app Careem revealed a breach through which attackers stole some 14 million customers’ names, email addresses, phone numbers, and trip information. And, last week, the Ikea-owned task outsourcing app TaskRabbit announced that it’s currently investigating a breach that may have compromised “certain personally-identifiable [user] information,” though it hasn’t yet disclosed what kind of information that was, or how many users were likely impacted.
The entire past year, in fact, has been marked by massive data breaches. There was the one at the U.S. consumer credit-reporting firm Equifax, which occurred at some point between May and July 2017 but wasn’t publicly disclosed by the company until September, and which may have impacted 148 million Americans. In January 2018, the Tribune, an Indian newspaper, reported that journalists were able to gain access to the Indian government’s controversial biometric database, Aadhaar, after paying about US$8 to an anonymous individual selling access online. New information also surfaced about previous data breaches, including a 2016 breach affecting 57 million Uber users that the company attempted to cover up, and a record-breaking 2013 Yahoo breach that affected around 3 billion users.
The conversations that have happened as a result of these breaches are crucial. Above all, they’ve gotten people to think about the data companies collect and share. However, these conversations are also incomplete. In particular, we’re not discussing how these companies secure our data, and what they’d do if a breach occurred. Tech companies collect a vast amount of information from and on us, including private conversations with friends and family, web browsing histories, and location data logging our daily whereabouts. As a result, they have a responsibility to safeguard our data.
There are many ways companies can improve their security and do right by their users. For one thing, limiting the number of employees who have access to personal user information, and the scope of user information they can access, lowers the risk of an employee exposing that information, whether through malicious intent or human error. So do regular security audits, both internal and conducted by external third parties, which enable companies to identify and patch vulnerabilities in their systems and ensure they’re following industry best practices. Moreover, providing and publicizing a mechanism for reporting security vulnerabilities allows security researchers to quickly and efficiently flag flaws a company may not yet be aware of.
These are all basic steps that companies should take—but whether or not that’s the case in practice remains, regrettably, unclear. And that’s because companies tell users very little about what they’re doing to keep our information secure. Indeed, the 2018 Ranking Digital Rights Corporate Accountability Index, published yesterday, found that the world’s leading internet, mobile, and telecommunications companies fail to disclose sufficient information about their security, such as basic information about internal security oversight and what plans are in place to address security vulnerabilities.
Why is this such a problem? It’s 2018—wouldn’t it be reasonable to assume that major tech companies are implementing basic security practices widely accepted as industry standards?
Unfortunately, that’s not a fair assumption to make. To see why, look at an example from earlier this month, when a Twitter user reached out to T-Mobile Austria to ask if it stores part of users’ passwords in plaintext instead of following a common industry practice: fully converting passwords into “hashes,” or seemingly random strings of characters, before they’re stored. The way these hashes are created means that a given password will always produce the same hash—so the next time you enter your password on a website, it will be hashed and compared to the hashed version already stored in the system. This allows the website to verify that you’ve entered the correct password without ever having to store the password itself. By contrast, storing even part of a password in plaintext makes it easier for a hacker to crack the rest of it.
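For the curious, the hashing scheme described above can be sketched in a few lines of Python. This is an illustrative sketch only, not a production recipe: the function names, the example password, and the PBKDF2 iteration count are all assumptions chosen for clarity, and real systems typically layer on per-user salts (shown here) and purpose-built schemes such as bcrypt, scrypt, or Argon2.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256: a deliberately slow, salted hash.
    # Given the same password and salt, it always yields the same output,
    # which is what lets a site verify logins without storing the password.
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

# At sign-up: generate a random salt, store only the salt and the hash.
salt = os.urandom(16)
stored_hash = hash_password("correct horse battery staple", salt)

def verify(attempt: str) -> bool:
    # At login: re-hash the attempt with the same salt and compare
    # in constant time, so timing differences leak nothing.
    return hmac.compare_digest(stored_hash, hash_password(attempt, salt))
```

If a database of hashes like these leaks, attackers must still guess passwords one by one; if plaintext (or partial plaintext) leaks, no guessing is needed at all, which is why the practice drew such alarm.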
When another Twitter user chimed in, asking what would happen if T-Mobile Austria’s system were breached and all passwords published, a customer-service representative responded, “What if this doesn’t happen because our security is amazingly good?” Things only got worse from there. When the same user asked what would happen if an employee were to access the password database, the representative retorted, “Excuse me? Do you have any idea how telecommunication companies work? Do you know anything about our systems? But I’m glad you have the time to share your view with us.”
There’s of course an obvious lesson here on how not to handle questions about security. However, this exchange also illustrates just how difficult it is for us, as users, to know how—or even whether—companies are protecting our data. Twitter users not only piled onto T-Mobile Austria, but also began tweeting at other Deutsche Telekom subsidiaries, demanding to know if they hashed their users’ passwords. As for T-Mobile Austria, following the outcry, it said that it would begin hashing passwords “as quickly as possible.” Although unquestionably a positive development, this is a basic security measure the company should already have been implementing.
While taking security precautions can lower a company’s risk of a data breach, no system is completely impervious to a breach, no matter how “amazingly good” a company’s security is. However, despite massive data breaches continuing to make headlines, most of these companies don’t actually disclose anything about how they’ll respond in the event of a breach. Of the 22 companies evaluated in the 2018 Index, for instance, only four—Apple, AT&T, Telefónica, and Vodafone—disclosed any information about how they would respond to a data breach. Given the reported rise in both the number of data breaches and the number of individuals affected, it’s both surprising and concerning that so many companies aren’t telling users how they’d respond to, and mitigate the impact of, a data breach, should one occur.
On a basic level, companies ought to be more proactive about communicating to users how their data is handled and secured. Being more straightforward about security practices shouldn’t come in the form of damage control after a company has a scandal to ameliorate or a breach to patch up. If companies want to demonstrate to policymakers, investors, and users that they value privacy, they need to show us that they take security seriously.