There’s not a lot of cause for optimism in the realm of securing medical devices from hacking to prevent potentially catastrophic dangers. Not only is the healthcare IT industry at least a decade behind other industries in cybersecurity, but many key players aren’t even aware there’s a problem. The vast majority of healthcare delivery organizations don’t have a single qualified security person on staff. However, high-profile ransomware attacks and other exploits have drawn concern from some healthcare providers who previously had their heads in the sand—and change is coming.
Thanks to tireless advocacy from security-minded physicians, manufacturers, regulators, and healthcare organizations, the situation is improving. Manufacturers, tech companies, hospitals, white hat hackers, and medical professionals are hardening systems, building awareness, and creating device resiliency.
The situation remains dire, but when lives are on the line, we’ll take all the silver linings we can get. Here are a few efforts underway from people who are determined to protect your health data—and mine.
Dr. Jeff Tully, a security researcher and resident anesthesiologist at the University of California Davis Medical Center, is sounding the alarm about medical device hacking dangers. "We have an implicit trust in these types of technologies," Tully says. "We don’t ever get any cybersecurity training in medical school. It’s not something that ever comes up in our literature." That’s among the reasons why he spends his spare time educating fellow clinicians. “It’s about bringing people into the conversation that haven’t been exposed to these types of issues,” he says. So, in addition to organizing his own events, Tully presents at conferences such as DEF CON, RSA, and the HIMSS Healthcare Security Forum.
In 2017, Tully and Dr. Christian Dameff teamed up with security experts Joshua Corman and Beau Woods to organize a two-day conference called the CyberMed Summit. The conference included eye-opening simulations and tabletop exercises to help key players get a better sense of healthcare security threats. These exercises aren’t sunshine and rainbows—nightmare-inducing is more like it. However, Corman hopes the simulations create change. Once hospitals get a sense of what the scenarios they’re unprepared for might look like, they might adjust their purchasing behavior or remove faulty devices.
Tully also points out that some medical associations, such as the American College of Cardiology's Electrophysiology Council, are working to update clinicians’ knowledge base. That should help providers have useful conversations with patients about keeping their devices up to date.
The U.S. Food and Drug Administration (FDA) has drawn praise for taking a proactive role in promoting guidelines for secure medical devices. The goal is to pave the way for medical equipment that is defensible, resilient, and trustworthy.
For example, the FDA has added to its pre-market and post-market guidelines for connected medical devices, emphasizing security by design and a proactive, risk-based security approach throughout a device’s entire lifecycle. The changes encourage coordinated vulnerability disclosure, which in some cases allows manufacturers to avoid regulatory actions such as recalls if they can mitigate an issue in 30 days or fix it in 60 days.
In April 2018, the FDA released a medical device safety action plan that encourages innovation, collaboration, and good security practices.
I Am the Cavalry, a grassroots organization founded by Corman that works on the intersection of computer security and public safety, penned a Hippocratic Oath for Connected Medical Devices in January 2016. Whether by chance or by design, the FDA changes are similar to that document. Among the reasons Corman is pleased is that, per the new FDA plan, the improved devices entering the market could help smaller organizations, which often have no cybersecurity staff. Whatever the state of a healthcare provider’s in-house security team, he says, “you’ll have a better fighting chance because the equipment is slowly but surely becoming better. And even when it does fail it has a chance of getting patched or of telling you when it’s been compromised.”
Just before Thanksgiving 2017, Greg Walden, the U.S. House of Representatives Energy and Commerce Committee chairman, wrote a letter expressing support for a software bill of materials, essentially an ingredients list. Walden’s letter requested that the Department of Health and Human Services develop a plan to create, deploy, and leverage it for healthcare technologies.
A software bill of materials has two advantages. First, it empowers hospitals to purchase medical equipment from companies with good security hygiene by comparing devices’ bills of materials against known vulnerabilities. That, in turn, would influence manufacturers’ behavior and development practices, because manufacturers would know they must demonstrate that their components meet current security hygiene standards. “It’s a way to encourage these things without necessarily having heavy-handed regulations that ultimately stifle innovation and progress,” says Tully.
Additionally, a bill of materials allows hospitals to respond quickly to threat intelligence. When a new vulnerability is disclosed, they can determine at a glance whether they are exposed and which devices need to be patched or taken offline.
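The SBOM workflow described above amounts to a lookup: match each component and version listed in a device's bill of materials against a feed of known-vulnerable software. The sketch below illustrates the idea in Python; the component names, versions, advisory identifiers, and the `audit_sbom` helper are all hypothetical, not drawn from any real device or vulnerability feed.

```python
# Illustrative sketch: auditing a device's software bill of materials
# (SBOM) against a feed of known-vulnerable components.
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    name: str
    version: str

# Hypothetical vulnerability feed: (component, affected version) -> advisory.
VULN_FEED = {
    ("openssl", "1.0.1"): "CVE-2014-0160 (Heartbleed)",
    ("busybox", "1.21.0"): "example-advisory-001",
}

def audit_sbom(sbom):
    """Return (component, advisory) pairs for every SBOM entry in the feed."""
    findings = []
    for comp in sbom:
        advisory = VULN_FEED.get((comp.name, comp.version))
        if advisory:
            findings.append((comp, advisory))
    return findings

# Hypothetical SBOM for an infusion pump's firmware.
infusion_pump_sbom = [
    Component("openssl", "1.0.1"),   # outdated library shipped in firmware
    Component("zlib", "1.2.11"),
]

for comp, advisory in audit_sbom(infusion_pump_sbom):
    print(f"{comp.name} {comp.version}: {advisory}")
```

In practice, real SBOMs use standard formats (such as SPDX or CycloneDX) and are matched against curated vulnerability databases rather than a hard-coded dictionary, but the core mechanism is this same join between an inventory and a threat feed.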
Even lightning-fast response times from hospitals can be too slow. Why not create devices that can do some of the heavy lifting?
Roman Lysecky, professor of electrical and computer engineering at the University of Arizona, is working on research for life-critical and safety-critical medical devices. Lysecky’s research focuses on promoting a framework that goes beyond merely following good cybersecurity practices to keep patient safety intact. The purpose is to build devices that account for vulnerabilities the manufacturers don’t yet know about.
The first research phase is funded by a three-year National Science Foundation grant. The research team, co-led by Lysecky and another UA professor, along with a colleague in Austria, a cardiologist at Banner Health Medical Center, and three students, is focused on malware detection within medical devices. The approach monitors the timing of operations in devices that must perform tasks at well-defined intervals, such as pacemakers.
That’s just the first step. Lysecky is in the early phases of researching an automated mitigation approach. If a medical device detects an anomaly, it would maintain critical functionality but scale back features that aren’t life-essential.
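The two ideas above can be sketched together: a device whose operations fire at well-defined intervals can treat timing deviations as a sign of compromise, and respond by shedding non-essential features while preserving life-critical function. The sketch below is a simplified illustration under assumed numbers; the thresholds, feature names, and trace data are invented for the example and are not taken from Lysecky's research.

```python
# Illustrative sketch: timing-based anomaly detection with graceful
# degradation. All constants and feature names here are hypothetical.

EXPECTED_PERIOD_MS = 1000.0   # e.g., a pacing task expected once per second
TOLERANCE_MS = 50.0           # allowed jitter before a gap looks anomalous

def is_timing_anomalous(timestamps_ms, max_violations=2):
    """Flag a trace if too many inter-event gaps fall outside the window."""
    violations = 0
    for prev, curr in zip(timestamps_ms, timestamps_ms[1:]):
        gap = curr - prev
        if abs(gap - EXPECTED_PERIOD_MS) > TOLERANCE_MS:
            violations += 1
    return violations > max_violations

class DeviceMode:
    """On anomaly, keep critical functionality but shed non-essential features."""
    def __init__(self):
        self.pacing_enabled = True        # life-critical: never disabled
        self.wireless_telemetry = True    # non-essential: disabled under attack

    def react(self, anomalous):
        if anomalous:
            self.wireless_telemetry = False
        return self

normal_trace = [0, 1000, 2010, 2990, 4000]      # gaps near 1000 ms
tampered_trace = [0, 400, 900, 1200, 1500]      # operations firing far too often

device = DeviceMode().react(is_timing_anomalous(tampered_trace))
```

A real implementation would need to run this check with minimal computation and energy overhead, which is exactly the constraint Lysecky describes below.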
Building resiliency and failsafe modes with degraded functionality will not prevent every problem. Infusion pumps rely on shared drug libraries that can be tampered with, and changing dosages (even between upper and lower bounds) can still harm patients. But if researchers can work out the kinks, the approach can prevent a significant number of life-threatening emergencies due to malicious hackers.
“The challenge with some of these medical devices is that they’re very energy-constrained,” Lysecky says. The tricky part of the research, then, is designing detection techniques that add computation while remaining extremely energy efficient. An implantable pacemaker, for example, shouldn’t burn through what’s supposed to be a 10-year battery in just five years. Lysecky describes this as a balancing act: making devices robust enough to detect vulnerabilities and attempted hacks while adding no more than, say, 2 percent overhead. The researchers’ current approach imposes only 1 to 3 percent overhead, depending on configuration, meaning it’s already fairly efficient.
The research is in the early stages, but it could eventually lead to a collaboration with a medical device manufacturer or even a startup spun out of the University of Arizona.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.