Feb. 8, 2018
Thanks to Hollywood, artificial intelligence (AI), for many of us, now either seems like a distant fantasy or conjures up visions of evil robots taking over the world. The latest high-profile individual to succumb to this either-or vision is Alibaba founder and Executive Chairman Jack Ma. On a recent Davos panel, he said that AI is a threat to human beings because AI and robots are going to kill lots of jobs. But there’s one industry where that prediction simply isn’t true: the health care cybersecurity workforce.
What’s going on in health care? Well, for one, the field is facing a looming challenge. Even as the need for technological sophistication grows in the wake of ever-increasing privacy and security threats, the pool of qualified tech workers available to the industry is expected to shrink. As Ma made clear, a common critique of AI is that it puts people out of work, replacing humans with machines.
But this argument doesn’t line up with the reality for the health care information security workforce, given that, frankly, there aren’t enough security professionals to meet demand. It doesn’t have to be this way, though. Other industries point to how AI-based solutions can turn the issues beleaguering health care cybersecurity into opportunities that enhance both patient trust and an organization’s bottom line.
A look at the numbers shows the punishing need for health care security professionals. A 2016 Institute for Healthcare Technology survey found that 72 percent of health care organizations in Georgia had more than 50 job openings. Healthcare IT News reported that “demand for skilled IT professionals is expected to continue to grow, with three areas most in demand: electronic medical record systems, cybersecurity, and system integration.” Reinforcing this concern, CIO magazine wrote that “health care is continuing to experience a shortage of qualified health IT staff that, in the view of some observers, is growing worse,” with one-third of managers reporting they had to postpone or scale back an IT project because of inadequate staffing. “Tens of thousands of jobs are going to be needed, and we don’t have the people for it,” said Frank Myeroff, president of Direct Consulting Associates, a health IT staffing firm.
Further complicating the workforce shortage? Information security analysts can be expensive; such workers command a median annual wage of $92,000. Health systems, many of them resource-challenged nonprofits, must compete for these skilled workers against the deep pockets of corporate giants that seek them out in increasing numbers. And even as squeezed hospitals attempt to cut costs by replacing highly paid consultants with cheaper in-house staff, they continue to invest heavily in technology. Clearly, the growing need for professional information security analysis represents a serious challenge for health systems, one that many health care executives likely understand all too well.
And yet, much of the actual work of information security analysis in a health care environment, such as meticulously checking access logs to prevent and respond to unauthorized entry, can be automated—if you have the proper tools. Manually auditing access data isn’t an efficient or effective use of time for highly trained IT staff, and programs developed for this purpose a decade ago aren’t much better.
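To make the idea concrete, here is a minimal sketch of what automated access-log auditing can look like. Every field name, rule, and threshold below is hypothetical, chosen for illustration; a production tool would layer statistical or machine-learning anomaly detection on top of simple rules like these.

```python
# Hypothetical sketch: flag EHR access-log entries that merit human review.
# Two illustrative rules: (1) the user has no documented treatment
# relationship with the patient whose record was opened, and (2) the
# access happened during after-hours (22:00-06:00 here, an assumption).

from datetime import datetime

def flag_suspicious(entries, treatment_map, after_hours=(22, 6)):
    """Return the subset of log entries that trip at least one rule,
    annotated with the reasons they were flagged."""
    start, end = after_hours
    flagged = []
    for e in entries:
        hour = datetime.fromisoformat(e["timestamp"]).hour
        no_relationship = e["patient_id"] not in treatment_map.get(e["user_id"], set())
        off_hours = hour >= start or hour < end
        if no_relationship or off_hours:
            reasons = [name for name, hit in
                       [("no_treatment_relationship", no_relationship),
                        ("after_hours", off_hours)] if hit]
            flagged.append({**e, "reasons": reasons})
    return flagged

# Illustrative data: nurse01 is assigned to patients p1 and p2 only.
entries = [
    {"user_id": "nurse01", "patient_id": "p1", "timestamp": "2018-02-08T14:30:00"},
    {"user_id": "nurse01", "patient_id": "p9", "timestamp": "2018-02-08T23:10:00"},
]
treatment_map = {"nurse01": {"p1", "p2"}}

print(flag_suspicious(entries, treatment_map))
```

Even this toy version shows the shape of the win: instead of a trained analyst reading thousands of routine log lines, software surfaces the handful of entries worth human attention.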
As someone who runs an AI and analytics company focused on protecting patient privacy via detection of health care data breaches, I’ve seen the opportunities to be found in AI firsthand: empowering current IT staff with technological tools that enable them to move away from mundane tasks to focus on projects that use their skills to greater advantage; eliminating false positives and boosting response times in auditing electronic health record access logs, minimizing the risk that privacy breaches expose an institution to expensive lawsuits and reputational damage; reducing the likelihood that patients are harmed by privacy intrusions; and bolstering cost efficiencies that allow a health system to allocate more funds to patient care and other needs.
To an extent, these AI solutions mirror solutions in other industries where AI is already improving people’s lives. In education, for instance, intelligent tutoring systems provide personalized tutoring and real-time feedback for post-secondary students, avoiding the need for incoming college freshmen to take remedial courses. In the insurance field, meanwhile, machine learning is deployed to automate, and thus speed up, much of the claims-handling process. By leveraging AI to handle massive amounts of data in a short time, insurers can streamline much of that process—for instance by fast-tracking certain claims—to reduce overall processing time and, in turn, handling costs, all while enhancing the customer experience.
In the face of such benefits, even AI skeptics like Jack Ma can’t discount AI’s role in supporting workers. Indeed, at the same Davos panel, Ma correctly noted that “AI should support human beings. Technology should always do something that enables people, not disable people.” That’s exactly what I’ve seen within health care cybersecurity.
With such a clearly defined problem and a technological solution that stands ready to make health care security professionals more efficient, why haven’t AI-based solutions already been deployed? Many barriers, some the result of shortsighted policies and others the result of conservative cultural propensities, slow the adoption of AI within health care.
Yet it’s possible to implement policies that ensure a more rapid and secure adoption of AI-based solutions in health care.
For one, the federal government could evaluate incentive options, such as tax incentives, to encourage health care providers to migrate to new technology platforms that promote better cyber hygiene from the outset, with an emphasis on comprehensive review and AI augmentation. Incentives were used in the 2011 Medicare and Medicaid Electronic Health Record (EHR) Incentive Programs to transition the health care industry from a largely paper-based system to one that, today, almost exclusively uses EHRs. Throughout this process, particular consideration ought to be given to small and medium health care providers that have the most constrained budgets.
Another possible solution: The U.S. Department of Health and Human Services (HHS) Office of Inspector General (OIG) should explore the negative impacts of the Anti-Kickback Statute, which is potentially hindering meaningful industry collaborations on cybersecurity efforts.
This isn’t to suggest that AI-based technologies are a panacea for all health care cybersecurity woes, or that transforming the health care cybersecurity landscape will be hurdle-free. There are the high initial costs associated with transforming an entire sector’s infrastructure and with educating its workforce on new AI-based technologies. Furthermore, AI-based tools will only be effective if robust data and information standards are already in place. AI isn’t a one-size-fits-all solution, and administrators are right to include AI as one of many tools in their cybersecurity arsenal.
Change can sometimes feel threatening. However, while AI can help a health system deal with the coming technology crunch, it need not be seen as a threat by current IT staff. The beauty of AI-based solutions is that they don’t just save money and improve outcomes—they also liberate tech teams from time-consuming tasks, allowing them to channel their attention into the projects most critical to ensuring people’s safety.