Two weeks ago, right before the BlackHat and DefCon conferences, OTI released a paper on the ecosystem surrounding software vulnerabilities. The timeliness of this discussion was reinforced at BlackHat when, in line with one of our key policy recommendations, Apple announced that it is launching the most lucrative corporate Bug Bounty program yet. By offering up to $200,000 for vulnerabilities in their secure boot firmware components – the security of which concerns the FBI and may affect its ability to access content on iPhones – Apple is joining its fellow major tech companies in promising big payouts to security researchers who help make their products safer.
While we’re happy to see more companies launch bug bounty programs, other recommendations from our recent report would also have a strong impact on securing the vulnerability disclosure process. Policymakers have a number of opportunities to influence the flow of vulnerabilities and thereby make the digital ecosystem much safer for all of us, aligning incentives to ensure that more researchers share the vulnerabilities they find with the people who can fix them, rather than selling them to those who want to exploit them. Here are our top five:
1. The U.S. government should minimize its participation in the zero-day market.
The ever-expanding market for previously undiscovered vulnerabilities is perhaps the single largest disincentive for researchers to disclose to technology companies. The difference between basically working for free (or a couple of nice tee-shirts) and tens, or even hundreds, of thousands of dollars is enough to make someone think twice about handing over bug information to the parties who can fix it. The U.S. government is one of the largest buyers—indeed, probably the single largest buyer—in that market, and is in a unique position to disrupt it by refusing to participate. By relying on and growing its own technical expertise at the NSA and other agencies to discover vulnerabilities and develop exploits itself, rather than fostering a dangerous gray market in vulnerabilities that ultimately makes us all less safe, the government can help undermine the power of other questionable buyers and direct more bug disclosures to the people who can fix them.
We recommend that U.S. policymakers—and the U.S. Congress in particular—establish clear policies for when (if at all) the government buys vulnerabilities from third parties, with a goal of reducing or even eliminating our reliance on and support for the zero-day market.
2. The U.S. government should establish strong, clear procedures for government disclosure of the vulnerabilities it buys or discovers.
Whether it buys them or discovers them itself, it is good policy for the U.S. government to ensure that vulnerabilities that put users and companies at risk are disclosed and patched as soon as possible. In the spring of 2014, the White House announced it was “re-invigorat[ing]” an interagency process first established in 2010 to decide when the government should disclose vulnerabilities. The so-called “vulnerability equities process” (VEP) is intended to weigh the costs and benefits of holding on to a vulnerability for offensive or investigative use versus disclosing it so that it can be patched. The White House claims that the vast majority of vulnerabilities that go through the process end up being disclosed, but many questions remain: whether all vulnerabilities are actually reviewed, how many have been disclosed and how many have been withheld for how long, which agencies meaningfully participate in the process, how exactly those decisions are made, and who makes them.
This process must be made more transparent so that Congress and the public can trust that the skewed incentives of the intelligence and law enforcement communities do not undermine cybersecurity by keeping too many vulnerabilities secret.
3. Congress should establish clear rules of the road for government hacking in order to protect cybersecurity in addition to civil liberties.
Government use of vulnerabilities to surreptitiously and remotely hack into computers as part of criminal investigations is a growing practice, so much so that the Justice Department has sought updates to the federal rule concerning search warrants—Federal Rule of Criminal Procedure 41—to place the practice on firmer legal ground. Yet for an investigative technique that has been common for at least fifteen years, practically nothing is known about how often law enforcement engages in such “network investigative techniques” or “remote access searches,” as they are euphemistically called, or how it does so; indeed, law enforcement agencies have recently been fighting in court to avoid having to disclose details about how they have been breaking into suspects’ computers. And it’s not just the public who’s left in the dark: courts themselves, including the courts that routinely sign off on secret warrants to authorize such hacking, don’t seem to understand what they are authorizing. The government’s vague, unclear, or misleading language in warrant applications surely doesn’t help.
The status quo needs to change. Considering that government hacking may result in a less secure digital environment—whether by perpetuating old vulnerabilities that the government chooses to exploit rather than disclose, or by unintentionally damaging systems or creating new vulnerabilities—it’s time for Congress to step in.
4. Government and industry should support bug bounty programs as an alternative to the zero-day market and investigate other innovative ways to foster the disclosure and prompt patching of vulnerabilities.
Every company that produces software should have a clear process for outside researchers to disclose vulnerabilities – and if they’re smart they will also offer Vulnerability Reward Programs (VRPs) or “bug bounty” programs to reward the people who discover those vulnerabilities. Whether the reward comes in the form of “thanks, t-shirts, or [simply] cold hard cash,” providing a clear path for vulns to be disclosed and for disclosures to be rewarded is a must if companies want to provide a meaningful alternative to selling vulns on the open market. Though unlikely to ever be able to compete dollar-for-dollar with governments and organized criminals, these programs provide an outlet for researchers who have ethical or legal qualms with simply selling to the highest bidder, want to build a legitimate reputation as a security expert, or just want to help improve digital security.
We encourage companies to get even more creative about the financial and non-financial incentives they can offer to bug discoverers, and for the government to promote the further adoption of these programs.
5. Congress should reform computer crime and copyright laws, and law enforcement agencies should modify their application of such laws, to reduce the legal chill on legitimate security research.
Improving cybersecurity entails supporting and encouraging security research. Policymakers looking to move the cybersecurity needle could start by reforming a number of laws that subject independent security researchers to legal threat, as a broad coalition of academic researchers and civil society experts have urged. The Computer Fraud and Abuse Act (CFAA), the Electronic Communications Privacy Act (ECPA), the Digital Millennium Copyright Act (DMCA), and the international export control agreement known as the Wassenaar Arrangement put researchers at risk of legal repercussions from companies, or even serious criminal charges, for actions that would help make products more secure. And although changes to the CFAA and the DMCA have been proposed to help reduce the chill on security researchers, many advocates remain concerned that the new exemptions are drawn too narrowly and that important security research may still be stifled under the new rules.
We encourage Congress to codify appropriate researcher exemptions in these laws, and use its oversight authority to ensure that renegotiation and implementation of the Wassenaar Arrangement leads to a final rule that adequately protects legitimate cybersecurity-related activities.