Personal Introduction by Robert Lord
The Big Idea
Earlier in my career, I attended medical school (though I never graduated, much to my parents’ chagrin), and while doing so was privileged to work at a clinic in Baltimore that served primarily HIV-positive patients. It was a formative experience for me. I found that patients would often risk their own health outcomes to avoid having their information shared and their diagnosis exposed to the community.1 Some would miss appointments or skip doses of life-saving medication to avoid colleagues learning of their illness. A few patients were so concerned about the privacy of their diagnosis that they entirely ceased care, potentially costing them their lives. HIV-positive patients are far from the only individuals with sensitive diagnoses. Most people, over the course of their lives, will accumulate information in their electronic medical records (EMRs)2 that they would rather not share with the world.
The core premise of this paper is that poor cybersecurity and privacy practices now represent a major threat to patient safety, and as such, deserve much greater attention from physicians, senior leaders in the healthcare sector, and policymakers. This insight is not new; many leaders in healthcare cybersecurity have put it forward before. But it has a special resonance for me.3
During medical school, I spent a significant amount of my time working in the field of patient safety research, particularly in Intensive Care Units. The work we did there has remarkable parallels to the work my colleagues and I now do in cybersecurity and privacy. Through my work at Johns Hopkins,4 in one of the finest patient safety research groups in the country, I learned that three key factors define the success of a patient safety intervention: technology, workforce, and culture. Those are exactly the components of a successful cybersecurity strategy.
Technology, broadly speaking, has been a powerful force in patient safety, but it’s not always the most advanced new artificial intelligence system that wins the day. Indeed, the humble checklist, deployed in the Stop BSI campaign to combat bloodstream infections, was the technology we needed back in the early 2000s.5 However, as this work has continued to advance, so has the work of AI and machine learning in predicting and preventing patient safety events, such as preventable septic shock or errors in emergency patient triage.6 The need for appropriate technological innovation is no different in healthcare cybersecurity, where we need both the basics of good frameworks and the augmentation and assistance that comes with transformative technology.
Culture change is also a powerful transformational tool, and perhaps the most critical of all interventions in patient safety. Whether creating safe harbors for reporting medical errors so clinicians can learn from their mistakes and others’, or developing more robust accountability for handwashing, it is the challenge of culture change that defines both the greatest opportunities and challenges in healthcare. Similarly in cybersecurity, we have a great need to change viewpoints, accountability, and entrenched practices, and we propose in this paper some pathways to get there.
Changes in our workforce are also a powerful driver for a sustainable future of improved patient safety. An awareness of patient safety is now embedded in medical curricula across the country, including the one I was privileged to attend years ago. Students have opportunities to engage early, and training in best practices is both freely available and valued by academia. With each new generation of clinicians, awareness grows of the importance of mitigating preventable errors, and of our role in tackling them. So we must work both to build awareness in our workforce and to create the pipeline and training that keep our healthcare cybersecurity workforce strong and at the cutting edge of the challenges it will face.
Thus, to me, there is no stretch of the imagination or clever rhetorical flourish necessary to think of good cybersecurity and privacy as a matter of patient safety—it is, in every way, an essential component of reducing the harms that can be predicted and prevented, if we have the will to do so.
Despite a near-consensus among cybersecurity professionals that the healthcare sector faces a cybersecurity crisis, too often we assume that innovations in patient care will be unambiguously beneficial for a patient. There is increasing evidence7 that these advances often come with cybersecurity risks that potentially expose patients to significant harm. Failing to mitigate medical cybersecurity vulnerabilities places patients and hospitals at risk of incurring real financial, reputational, and physical harm.
A core principle of medical ethics, dating as far back as the ancient Greek physician Hippocrates, is primum non nocere: “first, do no harm.”8 This principle centers on a doctor’s obligation to prevent harm from befalling a patient. It requires a careful balance between the potential benefits and risks of a treatment. The benefits of emerging healthcare technologies must likewise be balanced against their attendant cybersecurity risks to ensure that a new, dynamic version of the “Do No Harm” principle can be upheld. In my experience, many parts of the United States healthcare system are at serious risk of failing to adhere to that principle. Hence the “Do No Harm 2.0” title of our project.
Healthcare cybersecurity has been a passion of mine for many years, first as a medical student and then as an entrepreneur. But I also realized some time ago that change at scale would require policy change. This report is an attempt to effect that change, and I am grateful to have the opportunity to do so with my colleagues at New America, and particularly my fantastic coauthor Dillon Roseen. Neither of us believes that the ideas in this paper are revolutionary, but we do think that together they represent a practical and much needed path to a better future.
A Note on our Approach
Our goal is to set out a pathway to action, not to cut across good work that is already being done in this space. As such, we stand on the shoulders of many groups and individuals who have been thinking about these challenges for years—we hope to consolidate many great ideas in healthcare security and privacy in order to operationalize them.
In pursuit of that goal we decided to take a twin-track approach. First, we emphasize the big, bold idea that it is high time cybersecurity in healthcare were more widely treated as a patient safety issue. But we are also mindful that visionary aspirations alone will not get us where we need to go. And so, through many interviews and much research, we identified and refined a set of 17 practical recommendations that, if implemented, would go a long way toward realizing that vision.
Indeed, the goal of this project is to set out a series of specific policy measures that bring cybersecurity and privacy in the healthcare sector to where they need to be five years from now. While some of these ideas are new, many of them build upon the hard work of experts and organizations who have been tackling these problems for decades. This project will ultimately challenge the field to think critically about where healthcare is headed over the next five years and how today’s policy solutions can mitigate future challenges. Do No Harm 2.0 offers a number of detailed policy recommendations that together will serve to secure the health systems of tomorrow.
Our recommendations are based on:

- a review of existing policy guidance, including the June 2017 Health Care Industry Cybersecurity Task Force Report;
- the work of governmental and industry organizations, including the National Institute of Standards & Technology (NIST); the United States Department of Health and Human Services (HHS), including the Office for Civil Rights (OCR) and the Health Care Industry Cybersecurity (HCIC) Task Force; the Health Information Sharing and Analysis Center (H-ISAC); the Health Information Trust Alliance (HITRUST); the College of Healthcare Information Management Executives (CHIME); the American Hospital Association (AHA); the American Medical Association (AMA); the Healthcare Information and Management Systems Society (HIMSS); the United States Food & Drug Administration (FDA); and many others;
- background research, including desk research and input from frontline practitioners, independent privacy and security experts, and executive leaders in healthcare; and
- consultation with New America’s network of experts in healthcare, cybersecurity, artificial intelligence, and information technology systems.

Many past reports have taken a broad look at the complex, interconnected issues faced by the healthcare sector, so in the spirit of “Do No Harm,” this paper will focus specifically on the risks faced by providers and health systems.
This work comes on the heels of the FDA’s guidance on medical device cybersecurity,9 the Cybersecurity Task Force’s Health Industry Cybersecurity Practices,10 and the Healthcare Sector Coordinating Council’s Joint Security Plan,11 three powerful and practical documents that provide helpful insight into hospitals’ on-the-ground challenges and how to tackle them. We do not seek to replicate or replace any of this work—rather, we focus on 1) specific policy change recommendations and 2) the next five-year time horizon for evolving our field.
Naturally, this approach has affected the way we have written the report. First, we have consciously targeted our recommendations at the healthcare community and healthcare policymaking experts. So, while we very much believe the report can and should serve as a guide for newcomers to the area, we make no apologies for the fact that the analysis and argumentation assume at least a degree of understanding of the healthcare industry.
Second, this approach also means that we have deliberately made our recommendations specific to the sector. That is not intended to suggest that all of the challenges facing the sector are unique to healthcare. In fact, use of generally accepted cybersecurity best practices, such as adoption of the National Institute of Standards and Technology Cybersecurity Framework12 (designed for use across critical industry sectors), should be a given for all organizations in the sector. However, we have consciously focused on healthcare-specific recommendations, because that is what we believe is currently missing from the conversation.
Structure
This project uses a three-pillar framework to address the privacy and security needs of the healthcare sector. The three pillars, which correspond respectively to Chapters Three, Four, and Five of this report, are:
- Culture. Crystallizing cultural norms in healthcare to ensure trust between patient and provider
- Technology. Identifying technological opportunities and challenges related to cybersecurity and privacy facing the healthcare sector
- Workforce. Building a skilled healthcare cybersecurity labor pool for the future
A Note on Conflict of Interest
Finally, I must acknowledge that, as a cofounder of a healthcare compliance analytics company, I have a vested interest in promoting better cybersecurity in healthcare, as some of our products solve problems in this realm. I do not believe that this undermines my ability to argue for the importance of better cybersecurity, or for treating it as a patient safety issue. Indeed, that was part of my motivation in leaving medical school to establish my company. However, I am very conscious of the need to avoid any conflict of interest, and my co-author Dillon Roseen and other colleagues at New America have been scrupulous in ensuring that none exists.
Citations
- Robert Lord, John Cmar, “HIV patients forced to choose between medical care and their privacy,” Becker’s Hospital Review, July 24, 2018.
- Beyond an individual’s personal record, electronic health records (EHRs) are also critical to understanding medical devices, software, and other healthcare infrastructure, because these components are largely interconnected through underlying EHR systems.
- Health Care Industry Cybersecurity Task Force, Report on Improving Cybersecurity in the Healthcare Industry, June 2017.
- Dr. Dale Needham’s Outcomes After Critical Illness and Surgery (OACIS) Research Group
- Pronovost, P., Needham, D., Berenholtz, S., Sinopoli, D., Chu, H., Cosgrove, S., . . . Goeschel, C. (2006). An intervention to decrease catheter-related bloodstream infections in the ICU. The New England Journal of Medicine, 355(26), 2725–2732.
- Challen R, Denny J, Pitt M, et al. Artificial intelligence, bias and clinical safety. BMJ Quality & Safety 2019;28:231–237.
- For example see: The UK Comptroller & Auditor General, “Investigation: WannaCry cyber attack and the NHS,” UK National Audit Office, April 25, 2018.
- Robert H. Shmerling, “First, Do No Harm,” Harvard Health Publishing, October 13, 2015.
- US Food and Drug Administration, Content of Premarket Submissions for Management of Cybersecurity in Medical Devices: Guidance for Industry and Food and Drug Administration Staff (Washington DC, FDA, Oct 2018).
- US Department of Health and Human Services, Health Industry Cybersecurity Practices: Managing Threats and Protecting Patients (Washington DC, HHS, Dec 2018).
- Healthcare and Public Health Sector Coordinating Council, Medical Device and Health IT Joint Security Plan (Healthcare and Public Health Sector Coordinating Council, Jan 2019).
- National Institute of Standards and Technology, Cybersecurity Framework Version 1.1 (Gaithersburg MD, NIST, April 2018).