What Biosecurity and Cybersecurity Research Have in Common

March 23, 2017

Biosecurity and cybersecurity research share an unusual predicament: Efforts to predict and defend against emerging threats often expose and create vulnerabilities. For example, scientists must first learn how to isolate and grow a pathogen before they can develop a new vaccine. Similarly, researchers must first learn how to break into a computer system in order to defend it.

In the wrong hands, both types of knowledge can be used to develop a weapon instead of a vaccine or a patch. The genetic tools and exploit software that enable these activities are becoming easier to use and to acquire, prompting security experts to ask one question with growing urgency: How can we protect against misuse without limiting discovery and innovation?

Both fields have grappled with this dual-use dilemma independently for decades. In 2005, when scientists reconstructed the 1918 flu virus that killed 50 million people worldwide, did they advance the science of prevention, or did they introduce new risks? When scientists test computer systems for vulnerabilities, do they promote legitimate software development, debugging, and security auditing? Or do they enable malicious computation as well?

Government efforts to control this type of “dual-use knowledge” date back to the Cold War (and earlier) with mixed results. By working together, cybersecurity and biosecurity experts have an opportunity to identify new approaches and to avoid repeating past mistakes.

Government regulators do not want to squelch innovation, but they work with blunt instruments. To date, they have focused on the tangible products of sensitive research such as pathogens, publications, and malicious code. Regulations that rely on static lists struggle to keep pace with fields as fast-moving as bio- and cybersecurity. Worse, they can damage research productivity without offering meaningful security.

For example, “select agent” regulations—so called because they restrict access to particular pathogens, such as anthrax and plague—were put in place in 1997, after a white supremacist fraudulently obtained vials of Y. pestis (the bacterium that causes plague) from the American Type Culture Collection. This may seem like a reasonable policy. But many scientists soon stopped working with these pathogens after concluding that the professional risks and regulatory requirements were too burdensome. As a result, legitimate research suffered.

Meanwhile, a determined bioterrorist can still steal pathogens from labs, isolate them from nature, or synthesize them. Select-agent regulations provide a basis for prosecution if pathogens are obtained illegally, but, as the anthrax letter attacks demonstrated, it is all too easy to evade detection. These kinds of dragnet regulations are unlikely to catch a skilled opponent but certain to hinder legitimate research.

Intellectual property and cybersecurity legislation—namely the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act—has similarly stifled legitimate scientific and commercial activities and delayed defensive applications. In one well-known example, fear of prosecution under the DMCA deterred a Princeton graduate student from reporting a problem he had discovered: Unbeknownst to users, Sony BMG music CDs were installing spyware on their computers. Several weeks elapsed before another researcher (who did not know about the potential legal repercussions under the DMCA) reported the problem. Meanwhile, hundreds of thousands of computers continued to run Sony’s spyware along with a rootkit that made these systems more vulnerable to other viruses.

International agreements stumble over this duality as well. The Wassenaar Arrangement restricts exports of “intrusion software,” which U.S. regulators have defined as software modifications that permit “externally provided instructions” to run. The idea is to prevent companies from exporting surveillance software to authoritarian regimes that could use these tools to abridge civil liberties and abuse human rights. But such a broad definition also prevents the export of legitimate software products that can enhance security, such as debuggers and performance-testing tools.

More recently, biosecurity experts have begun to scrutinize not just pathogens and publications but also the activities and techniques that create them, identifying seven categories of research that warrant closer review. These include a subset of experiments that increase pathogens’ stability, transmissibility, or host range (the range of animals that can harbor the disease). This type of research gained notoriety in 2011, when two labs engineered a highly pathogenic form of bird flu to transmit more easily between mammals. These efforts, while still a work in progress, point toward a way for regulators to focus less on pathogens and code and more on the risks and intent of research projects themselves.

For all of their similarities, key differences between biosecurity and cybersecurity risks and timelines will dictate different regulatory strategies. For example, zero-day vulnerabilities—that is, holes in a system unknown to the software’s creator—can be patched in a matter of months, whereas new drugs and vaccines can take decades to develop. Digital vulnerabilities have a shorter half-life than biological threats. Measures to promote disclosure and crowd-sourced problem-solving will therefore have a larger immediate impact on cybersecurity.

On the other hand, reporting “vulnerabilities” in the bio realm poses a greater security risk when countermeasures are not available and may never be. Unless drug and vaccine development times improve dramatically (i.e., from decades to weeks), the rationale for restricting sensitive research is somewhat stronger because the risk can outweigh the benefit.

Moreover, some restrictions are more feasible in the life sciences. Researchers require expensive labs with institutional overhead, federal grants, and a publication stream. As a result, governments, research organizations, and publishers have many opportunities to intervene. For example, after scientists announced the results of the 2011 bird flu experiments, the White House and the National Institutes of Health placed a stop order on existing research until the costs and benefits of these experiments could be more fully evaluated. It is hard to imagine how one might implement a similar measure in the hacker community.

Still, both fields face the same basic problem: There are no true “choke points.” The U.S. government is not the only source of research funds, and, thanks in large part to the internet itself, it is increasingly difficult to restrict sensitive information. As the funding, tools, and skills for security research become globally distributed, dual-use dilemmas will become more pronounced, and the regulatory challenges facing both fields will share more similarities than differences.

Looking ahead, biosecurity and cybersecurity regulators will need to adopt a more liberal governance regime that places less emphasis on static lists of controlled items. This approach acknowledges and even embraces the limits of “hard” rules, such as select-agent regulations and export-control lists. To be sure, regulators should erect high walls around a few narrowly defined, high-risk activities, such as research that enhances the pathogenicity of viruses. But these boundaries should be drawn with precision and restraint. Beyond them, regulators must prioritize measures that maintain vibrant research communities, including efforts to promote information-sharing and establish responsible norms. These measures must be developed with the scientists themselves and tailored to specific technologies and research methods.

Traditional policy tools—legislation, treaties, federal and international security standards—continue to provide opportunities for softer rule-making and norm development. Increasingly, however, new information-sharing platforms can facilitate concrete agreements that build the norms, standards, trust, and transparency necessary for security research to flourish. Examples include the Global Initiative on Sharing All Influenza Data, which hosts a database to promote genetic data-sharing for flu viruses around the world, and sector-specific Information Sharing and Analysis Organizations, which seek to provide timely information to mitigate cyber vulnerabilities.

Regulators must work with biosecurity and cybersecurity experts to preserve productive research environments so that they may defend us in return. Our interconnected world of humans and computers provides fertile ground for viruses of both sorts. As these connections grow in density, viruses become even harder to contain. Research communities that can rapidly detect and respond to these emerging threats will be our greatest defense in the future.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.