
Medical Devices: A Hardware Security Perspective

17 May 2023

By Jameson Hyde

Medical device security is gaining more attention for several reasons. The conversation is often tied to device safety, that is, the degree to which the risk of patient harm is limited by preventing or controlling device malfunction. Device security expands the scope of safety by supposing that a malicious attacker is causing, or exploiting, a device malfunction. If it’s not secure, it’s not safe.

The threat model for medical devices has changed over time. Previously, a lack of connectivity, the inherent physical controls in a medical facility or within one’s home, and the generally high cost and low availability of these niche devices for research all limited the exploitability of any lingering vulnerabilities.

However, the advancement of connected health highlights the importance and subtle distinctions associated with securing the ever-growing set of medical devices. At NCC Group, we have previously written about the importance of securing any connected device. In this blog post, we specifically focus on medical devices from a hardware security researcher’s perspective. We go on to discuss how the regulatory landscape attempts to align with this view.

A Unique and Varied Threat Model

While the threats and corresponding mitigations that apply to a medical device may resemble those of any typical hardware threat model, at a minimum, the severity associated with a typical vulnerability class is likely to differ, potentially significantly. A remote code execution exploit is just as likely to be critical for a medical device as it would be for any other IoT product, but the impact of lesser vulnerabilities may be elevated as well. Furthermore, because of the specialized problems that healthcare devices are designed to address, correspondingly unique threats may become apparent. A traditional goal of an adversary may be complete device compromise, running arbitrary and potentially malicious software on the device, but the stakes are often higher in the case of medical devices. Gaining control of a sensor or actuator may have serious impacts on any system, but that impact is more perilous when the device interfaces with a human body.

In some contexts, a user’s personal data can be valuable to an adversary, but patient healthcare data is inherently more so. User data leakage is historically treated according to the value of the data and breadth of the incident – credit card numbers and passwords are serious, and usernames are typically less so depending on the circumstances. Leaking so much as a patient’s name carries more weight when it effectively maps that individual to a medical condition or some immutable attribute of theirs. Patient data is tightly coupled with the core functionality of many medical devices. The attack surface may be significantly greater as a result if operation of the system requires this data to be more exposed, whether it must be stored persistently, communicated between multiple processes over some IPC channel, or transmitted to a remote back end. Certain aspects of user privacy are too often an afterthought in the consumer technology space, but medical information is one asset where users/patients expect a much greater degree of scrutiny and protection. To that effect, user privacy is often treated as a separate entity or extension of device security. Data can include health conditions, family history, routines and activities, genetic information, and perhaps soon, even one’s own thoughts. Such a privacy breach is therefore justifiably covered by longstanding laws in the US and EU, presenting more serious consequences to the manufacturer as a result.

For another example, consider a denial-of-service vulnerability. The severity of these weaknesses is often deemed less significant, and the risk of exploitation is acceptable for many products. This evaluation changes dramatically for a battery-powered medical implant, where a denial of service could result in serious harm to the patient due to missed treatment. Furthermore, even if patient harm is not an immediate concern, additional surgery to replace an unrecoverable implant may be necessary, which is far more significant than replacing some unusable office equipment.

The above examples highlight some of the unique threats inherent with many medical devices. Several other fairly common factors skew the threat landscape further.

Attack Surface

A design pattern that was typical not long ago, and is still reasonably common today, involves a device recording patient data or administering some therapy, then connecting physically to an offline host machine in a physically secure clinic so that medical staff can interact with it afterwards. A predictable evolution of such a device is to add wireless connectivity, thus supporting interaction at an arbitrary location – common implementations include the use of an Internet connection and a mobile or web application. With that functionality comes additional attack surface not considered in the prior generation. Note a unique aspect of this circumstance: such a device is likely to need to connect to a hospital or clinic network, potentially exposing it to everything else on that network and vice versa. That same device may split time on patients’ home networks as well, further exposing it to a variety of network-based threats.

As with other heavily regulated industries, the mere concept of updateable software and firmware is somewhat novel in the medical device space. It is relatively straightforward to develop firmware for a device that is effectively immutable, test it extensively, and then expect to update it only in the event of a recall, perhaps involving a specialized technician performing that update on-site or swapping out the entire device. This model falls short given that new vulnerabilities, including those found in third-party software, are discovered with increasing frequency, a fact exacerbated by the growing complexity of these devices. As such, the capability for over-the-air updates is now widely expected for any device with a network connection. Naturally, this feature is a prime attack vector if it is poorly designed or implemented.
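
To make the update risk concrete, below is a minimal sketch (in Python, using the third-party cryptography package) of the kind of signature check an over-the-air update path might perform before applying a firmware image. The key handling and function names are illustrative assumptions, not any particular vendor’s implementation; a real update path would also enforce anti-rollback and stage the image before marking it bootable.

```python
# Minimal sketch of verifying a signed firmware image before applying it.
# Assumes the OEM signs releases offline with an Ed25519 key and the device
# stores only the corresponding public key (third-party 'cryptography' package).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def firmware_is_authentic(image: bytes, signature: bytes, pubkey_bytes: bytes) -> bool:
    """Return True only if the image was signed by the holder of the OEM private key."""
    public_key = Ed25519PublicKey.from_public_bytes(pubkey_bytes)
    try:
        public_key.verify(signature, image)  # raises InvalidSignature on any mismatch
        return True
    except InvalidSignature:
        return False


# A real update path would additionally enforce version anti-rollback and only
# mark the staged image bootable after verification succeeds.
```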

Users and Permissions

Multiple types of users may interact with these devices: patients, caregivers, medical staff, and hospital IT administrators, not to mention OEM administrators and service technicians. This is quite distinct from, say, a smartphone that is almost exclusively accessed by its user/owner for its entire lifetime. The authorization of these roles to specific application and device capabilities can quickly become complex and an easy target for privilege escalation. Over their lifetime, some devices may be used by multiple patients, presenting further possibility of patient data exposure even after treatment has concluded. This is akin to second-hand device and rental car scenarios, and it highlights the importance of a factory reset function that securely erases all patient data and reliably returns the device to a known good state. For a medical device, flash memory that is not securely erased may be re-deployed to a new patient while still containing the previous patient’s home Wi-Fi password, credit card number, or private medical information. Obsolete equipment sold as surplus may still contain the means to access mission-critical hospital infrastructure.
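
As a rough illustration of the factory-reset point above, the following sketch destroys per-patient data and provisioning secrets before a device is redeployed. The paths and structure are hypothetical; note that on flash storage with wear-leveling, simply deleting or overwriting files does not guarantee erasure, so encrypting patient data at rest and destroying the key (crypto-erase), or invoking the controller’s secure erase, is the more robust approach.

```python
# Rough sketch of a factory-reset routine that removes per-patient data and
# provisioning secrets before redeployment. Paths are hypothetical. On flash
# with wear-leveling, deleting files does not guarantee erasure; encrypting
# data at rest and destroying the key (crypto-erase) is the more robust approach.
import shutil
from pathlib import Path

PATIENT_DATA_DIR = Path("/data/patient")       # treatment records, logs (hypothetical)
PROVISIONING_DIR = Path("/data/provisioning")  # Wi-Fi credentials, pairing keys


def factory_reset() -> None:
    for directory in (PATIENT_DATA_DIR, PROVISIONING_DIR):
        if directory.exists():
            shutil.rmtree(directory)           # drop records and credentials
        directory.mkdir(parents=True)          # recreate an empty, known-good layout
    # On a real device: also trigger the flash controller's secure erase or
    # rotate the storage encryption key so residual blocks are unrecoverable.
```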

Longevity and Maintenance

Speaking of product lifetime, medical devices are often expected to be supported longer than the three-to-five-year window common for consumer IoT products. Managing software updates for that duration is challenging, requiring careful management of vendor relationships, third-party patches, and firmware deployment, all in a timely manner. Some third-party software components may fall out of support, and major changes to software dependencies may qualify as design changes significant enough to trigger regulatory re-approval. This all presents an environment where stale software is more likely to exist on the product as it ages. While known critical vulnerabilities may be addressed appropriately, those that are low severity or in deeply embedded dependencies may be overlooked. Regulators require a defined software bill of materials (SBOM) for medical devices, expecting OEMs to demonstrate that they are at least aware of everything included in their device, but maintaining (and patching!) all of that software is just as important and potentially more challenging for the pieces outside their direct control.
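
As a small illustration of the SBOM point, the sketch below flags components whose name and version appear in a known-vulnerable list. The CycloneDX-style structure and the advisory data are purely illustrative; in practice this comparison would run against a real SBOM export and a maintained vulnerability feed.

```python
# Sketch of flagging known-vulnerable components listed in an SBOM.
# The CycloneDX-style structure and the advisory set below are illustrative only.
sbom = {
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "busybox", "version": "1.31.1"},
        {"name": "custom-telemetry", "version": "0.4.2"},
    ]
}

known_vulnerable = {  # hypothetical advisory feed: (component, affected version)
    ("openssl", "1.1.1k"),
    ("busybox", "1.31.1"),
}


def flag_components(sbom: dict, advisories: set) -> list:
    """Return components whose (name, version) pair matches a known advisory."""
    return [
        c for c in sbom["components"]
        if (c["name"], c["version"]) in advisories
    ]


for component in flag_components(sbom, known_vulnerable):
    print(f"needs review: {component['name']} {component['version']}")
```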

Complexity and Novelty

Another aspect of complexity applies to the device design itself. While not an absolute, consumer IoT devices and commoditized products are less likely to deviate much from a vendor reference design, perhaps using a known SoC and baseband combination with established security configuration features and relatively little custom hardware, along with a vetted Board Support Package (BSP) including an OS, drivers, bootloader, and libraries that implement the core security features of the product. Many medical devices are attempting to solve niche problems with significant constraints using creative solutions, and this may require custom board or FPGA design, a novel algorithm that accepts external inputs, or some unique communication interface to get data from the patient’s body to an appropriate destination, all of which have been subject to less scrutiny than required.

Finally, there exists an economic/business factor that may be of interest to a security researcher. The medical device environment is, in many cases, one of pure research and engineering. In cases where a device originates in an academic or startup environment, costs will be constrained, and security is more likely to be deprioritized in favor of developing the novel medical aspects of the technology. Rather than incremental improvements on a common device, these are often first-generation devices with yet-to-be-seen behaviors and potential misbehaviors.

Classification

Several proposed and conventional criteria exist to classify medical devices, for regulatory purposes or otherwise, depending on the context. Patient safety, the existence of a similar design (i.e., the novelty of the function), the range of the connectivity interfaces used, the physical environment (hospital, home, public), and ownership of the device all serve to classify device types. A networked MRI machine is markedly different from a portable EKG in many ways.

Interestingly, these classifications also serve as guidance for a security researcher, providing an additional rough measure of the threats associated with the device. Higher potential for patient harm (e.g., a class III or IV device) significantly escalates the impact of even the most limited vulnerabilities, such as the ability to degrade battery life.

More user roles may also suggest a greater likelihood of privilege escalation or authorization bypass. Here, an example problem may be a vulnerability that unlocks device calibration capabilities that would otherwise be accessible only to a service technician.
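
The calibration example above amounts to a deny-by-default authorization check. The sketch below illustrates one such check; the roles and capability names are hypothetical and not tied to any particular product.

```python
# Illustrative deny-by-default role check for the calibration example; the
# roles and capability names are hypothetical, not tied to any real product.
from enum import Enum, auto


class Role(Enum):
    PATIENT = auto()
    CAREGIVER = auto()
    CLINICIAN = auto()
    SERVICE_TECH = auto()


# Capabilities each role is explicitly allowed to invoke.
PERMISSIONS = {
    Role.PATIENT: {"view_readings"},
    Role.CAREGIVER: {"view_readings"},
    Role.CLINICIAN: {"view_readings", "adjust_therapy"},
    Role.SERVICE_TECH: {"view_readings", "adjust_therapy", "calibrate"},
}


def authorize(role: Role, capability: str) -> None:
    """Raise unless the role explicitly grants the requested capability."""
    if capability not in PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role.name} may not perform '{capability}'")


authorize(Role.SERVICE_TECH, "calibrate")  # permitted
authorize(Role.CLINICIAN, "calibrate")     # raises PermissionError
```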

A device’s connectivity, say whether it connects via BLE to a mobile application or relies on a hardwired Ethernet connection for all interactions, has a clear mapping to attack surface. Whether a device stays in a clinic or is attached to a patient in their daily life helps establish the likelihood of physical threats to the device. Finally, novel products are potentially more likely to contain custom, under-scrutinized software or a class of vulnerability that may not have been considered.

Regulation Landscape

The FDA, EU, and other global medical device regulators are beginning to acknowledge this complexity. Of course, there was a time when these regulatory bodies did not have to concern themselves with medical devices as we know them today, much less internet-connected ones that may be reprogrammed to operate in an unintended manner. While we have discussed the set of common traits that tend to distinguish medical devices from other hardware in general, there remains a challenge in effectively evaluating whether these devices are safe for use, and therefore secure from attack, given the variety of threats that they face.

Regulators have traditionally relied on post-market response to address most cybersecurity concerns – patching or recalling products as necessary. Recognizing that this can be expensive, skews incentives, and may be insufficient to address all security concerns, premarket guidance and requirements have more recently become a focus of regulation in the industry. Compliance of this nature presents a difficult trade-off: applying a general process to a diverse set of products while keeping it effective enough. Usually this leads to a low bar to meet – a checklist step in QA just prior to product release. Of course, this is neither appropriate nor desirable in a market where the impact to users/patients can be so significant.

A naive approach to objectively measuring device security is to focus on the final product – beginning with a set of requirements that can be applied to a broad set of devices based on a general threat model or profile developed by a standards body. This attempts to provide some fundamental measure of the security of the product and its users based on whether those requirements are met. The benefit of such a method is that it avoids encumbering the OEM, keeping “security” and “certification” to a relatively known cost and effort that fits neatly into the development schedule alongside QA.

Medical device regulation, such as that established in the FDA’s guidance for medical device security, predominantly takes a more comprehensive “show your work” approach, like that of other burgeoning programs related to hardware security. This guidance defines a Secure Product Development Framework (SPDF), which includes many intersection points with the product development lifecycle – an initial risk assessment, ample architecture documentation, product security requirements, threat modeling, and informed testing. These can take months of effort instead of weeks to do properly, but they more effectively account for the unique threats applicable to a connected health product and the appropriate controls to address them. Recently, the FDA has stated that 510(k), premarket approval, and De Novo submissions for medical device approval may be refused on the grounds of insufficient evidence that the product is secure, so it is important for manufacturers to take this seriously.

In 2019, the EU provided similar guidance to serve as a precursor to more explicit regulation. The Medical Device Regulation (MDR) states its single mandatory requirement related to device security:

For devices that incorporate software or for software that are devices in themselves, the software shall be developed and manufactured in accordance with the state of the art taking into account the principles of development life cycle, risk management, including information security, verification and validation.

The requirement itself is vague, but the accompanying guidance clearly establishes a similarly high expectation regarding the assessment and control for the security threats posed to any prospective device.

Conclusion

Now, more than ever, medical devices present an interesting landscape of potential targets for security research. They are becoming more connected (and so more exposed to attack) and more capable of accomplishing amazing and life-changing healthcare outcomes for their users (and so more impactful if they are made to do otherwise). While these medical products often employ the same underlying technologies and hardware security principles as other IoT products, the set of threats associated with the devices, their users, and their OEMs is often unique or significantly different. Regulators recognize this fast-changing environment and the need for intelligent, holistic, and qualitative secure development methodologies applied throughout the product lifecycle.