MEDdesign

A Security Epidemic: The Internet of (insecure) Things and the Risk to Medical Devices

By Jothy Rosenberg, Dover Microsystems

As medical technological capabilities increase along with the proliferation of embedded systems and the IoT, cybersecurity risk is becoming very real for the medical industry. While securing the whole medical device industry may seem like an insurmountable task, the combination of education, regulation, and technology is a good first step toward protecting our most critical infrastructure from attack.

Barnaby Jack, laptop in hand, stood 50 feet away and hit return on his keyboard. With a crisp, audible pop, he “killed” the mannequin across the stage by activating the 830-volt defibrillator of the pacemaker implanted in the human stand-in. Just like the vice president portrayed on the TV show Homeland, the mannequin could have been a real person, and that person would have been dead.

Medical devices are part of the Internet of Things (IoT), and they are being connected to the IoT ecosystem at a breakneck pace. A pacemaker is one such device, and it has two functions: trickle-charge the heart muscle to make it beat steadily and, in dire emergencies, fire an 830-volt defibrillation shock. Under no circumstances should the latter happen if the person’s heart is still beating; it would mean certain death. Barnaby Jack showed that weak programming makes pacemakers too dangerous for the IoT, and a huge pacemaker recall shows that the FDA agrees.

Far from TV fiction, the Internet of Things is driven by billions of defenseless processors in charge of many critical, and often dangerous, things. The bad guys love it: the more things connected to the net, the more there is to attack.

Strangely enough, the processor industry itself, the very companies making the technology at the heart of all our IoT devices, has inadvertently aided and abetted the attackers. For 45 years these companies have followed the mantra “smaller, cheaper, faster,” and they have been wildly successful. Adding “secure” to that motto was never even contemplated, because for the first few decades of the architecture’s existence there was no internet; people didn’t need to worry about cybersecurity because an attack over a network was not just unimaginable but impossible.

The dirty little secret about cyberattacks is that the bugs in our complex software are the open windows letting in attackers. We can’t make perfect software, and the bigger the software, the more bugs there are. And the more bugs there are, the more vulnerable we are to attack. This fact, combined with processors that can’t defend themselves, makes connected medical devices a ticking time bomb.

Fortunately, there is hope. A combination of regulations, education and technology can help secure our most critical infrastructure from attack.

The Internet of Things is heading for a dangerous “cyber epidemic”

Epidemiologists understand epidemics well. To explain how populations withstand them, they talk about herd immunity. From Wikipedia:

“…herd immunity is a form of indirect protection from infectious disease that occurs when a large percentage of a population has become immune to an infection, thereby providing a measure of protection for individuals who are not immune. In a population in which a large number of individuals are immune, chains of infection are likely to be disrupted, which stops or slows the spread of disease. The greater the proportion of individuals in a community who are immune, the smaller the probability that those who are not immune will come into contact with an infectious individual.”

Just substitute “cyberattack” for “infectious disease” and the concept applies to the IoT. We are facing a truly dangerous situation, not just for medical devices but for everyone on the internet. Many involved with the IoT see a deadly cyber epidemic coming, yet no one seems to have either the will or the way to build herd immunity into the IoT in order to slow or stop the impact of cyberattacks.

The problem is scale. According to the 2017 Official Annual Cybercrime Report by Herjavec Group, a full 98% of all the world’s processors are in embedded systems (not in laptops, servers, and mainframes), including IoT devices, cars, medical devices, hospital infrastructure, homes, and more. Connectivity means that each “thing” is able to interoperate within the existing internet infrastructure, which in turn means that each “thing” can communicate with every other internet-connected device planet-wide. That is a lot of connected things: current estimates predict that the IoT will have 30 billion objects hanging off of it by 2020.

In tandem, cybercrime damage costs are slated to hit $6 trillion annually by 2021, up from $3 trillion in 2015, according to Cybersecurity Ventures. As the head of the NSA noted in an address at the American Enterprise Institute in Washington, D.C. several years ago, cybercrime represents “the greatest transfer of wealth in history.” There is a cyberattack every 39 seconds. Meanwhile, almost four million records are stolen in breaches every day: 159,000 per hour, 2,600 per minute, and 44 every second of every day.

In this connected world, the data that powers our operations is equivalent to oxygen: even minor deprivation can have disastrous results. In early January 2017, the FDA issued a safety communication related to St. Jude Medical’s Merlin@home transmitters, which were found to be vulnerable to attacks by unauthorized users. The company developed a patch to address this vulnerability. The problem came in the wake of a 2016 report by Muddy Waters Capital, “MW is Short St. Jude Medical (STJ:US),” claiming that a wide array of pacemakers and other devices were vulnerable to attack and posed a serious threat to patient safety.

Attacks like these, which take over a device or system across the internet, are possible for two main reasons: the processors we use are all defenseless, and the software these processors execute has many bugs that attackers can exploit.

Processor Industry Inadvertently Aids and Abets Attackers

The architecture of computer processors is conceptually the same as when it was designed by the mathematician and physicist John von Neumann in 1945. Von Neumann processors use such a simple architecture that the industry was able to go on a tear of making them smaller, cheaper, and faster. This led to Moore’s Law, which, though not really a law, said that the number of transistors on a microchip would double roughly every 18 months. Moore’s Law made the security problem worse. Sure, the machines got much smaller, cheaper, and faster. And, yes, this meant they could do more complex tasks more quickly, but security was never part of the equation.

According to Writing Solid Code author Steve Maguire, today’s software consistently contains 15 bugs per thousand lines of code. Ten percent of those bugs, according to the FBI, are exploitable by a determined attacker. To put that into context, one pacemaker has nearly 100,000 lines of code.
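Run the numbers and the scale of the problem becomes concrete. The sketch below, in C, does the arithmetic using the estimates above; these are industry averages, not measurements of any particular device:

    #include <stdio.h>

    int main(void) {
        double lines       = 100000.0;           /* lines of code in one pacemaker */
        double bugs        = lines / 1000 * 15;  /* ~15 bugs per 1,000 lines       */
        double exploitable = bugs * 0.10;        /* ~10% exploitable, per the FBI  */
        printf("~%.0f bugs, ~%.0f exploitable\n", bugs, exploitable);
        return 0;
    }

That works out to roughly 1,500 bugs, about 150 of them exploitable, inside a single implanted device.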

So why can’t we just design or code our way out of this problem? That only works for things that are built many times and rarely or never change: things like bridges, where the bricklayer or steelworker can break problems into repeatable steps that can be done in isolation, with accurate and dependable results. But computer programs are not bridges; they are iterative and constantly evolving. Besides, humans have been building bridges for more than 4,000 years, since the Greek Bronze Age, but software engineering is still in its infancy (generously, only about 70 years old) and is not yet a mature discipline on which we can rely for bug-free software.

The fact is, we have been stuck with flawed software for a long time, and the bad guys seem to have figured out how to prey on this vulnerability with very small pieces of malicious code. Our flawed software is like a 100-window house with one unlocked window. Care was taken to lock 99% of the windows, but just that one opening—that single vulnerability—is all an attacker needs.
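To make the unlocked window concrete, here is the classic form it takes in C. This is an illustrative sketch, not code from any real device, and handle_telemetry is a hypothetical name:

    #include <string.h>

    /* A fixed-size buffer and an unchecked copy: if msg is longer
     * than 16 bytes, the copy runs past buf and can overwrite the
     * saved return address, handing control to attacker-chosen code. */
    void handle_telemetry(const char *msg) {
        char buf[16];
        strcpy(buf, msg);  /* bug: no bounds check on the length of msg */
        /* ... parse buf ... */
    }

Every other routine in the program can be written with perfect care; this one oversight is the open window.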

While securing a whole industry of medical devices in the face of these industry-wide challenges may seem like an insurmountable task, there are a few ways we can start the process.

Start with Education

First and foremost, we need to raise awareness that there is a problem at all. Many industry professionals and regulators may not see security as a priority because they do not fully understand what exploits are possible or the magnitude of the problem. It’s hard to argue with the numbers above about the depth and breadth of this issue. Outreach and education aimed at those in CISO, CTO, and engineering positions is a good first step in bringing the issue to the forefront.

Regulations Needed

As it turns out, for such a tightly regulated industry, medical devices are subject to surprisingly lax regulations when it comes to cybersecurity. This may be a product of the nascent nature of embedded systems in medical devices, or of a lack of clear ownership between device manufacturers and regulatory bodies. As it stands, the FDA works with the Department of Homeland Security (DHS) to ensure that all devices produced are able to protect themselves against a cyberattack.

In an FDA fact sheet relating to cybersecurity, the agency states, “Medical device manufacturers can always update a medical device for cybersecurity. In fact, the FDA does not typically need to review changes made to medical devices solely to strengthen cybersecurity” (FDA Fact Sheet). The reality is that patching only goes so far to protect systems. To effectively patch an exploit, the device maker must first identify the cause, develop a patch, and push it out to all devices in a short period of time. That is a difficult task.

To truly make an impact, we need to pursue an increase in regulatory requirements for device manufacturers and clearly define responsibility for enforcement. This ensures both the regulatory bodies and the manufacturers are on the same page when it comes to cybersecurity.

Technology Innovations Can Help

Lastly, we can turn to new forms of security to lock down critical pieces of medical equipment. This is not a software fix, however, but rather a hardware-based approach at the processor level. How can we make that happen?

We need to give the processor more information about the application. Every application is made up of a collection of functions (also called routines); each function typically does one relatively small task and then returns to the higher-level function that called it. The call graph is the exact hierarchy of which functions call which other functions, and in what order. This matters because one favorite type of attack is to hijack the return from a function so that execution goes to the attacker’s code instead. When compilation is complete, the call graph is not preserved, but it needs to be. There is also some equally critical information generated while the application is running, but it too is thrown away and never made available to the processor.

Both of these types of “extra” information about the application provide valuable insight into the programmer’s intent. We call this information “metadata.” Our research led us to create metadata about every instruction and every location in memory, which enables special protection circuitry to help an application processor “do the right thing,” even in the face of bug-ridden software and cyberattacks.
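As a rough illustration of the idea (hypothetical types and names, not Dover’s actual design), imagine every word of memory carrying a tag alongside its value:

    #include <stdint.h>

    /* Each word the application sees is shadowed by metadata that only
     * the protection circuitry can read: is this word a saved return
     * address, an instruction, part of a heap buffer, and so on. */
    typedef enum { TAG_NONE, TAG_RETURN_ADDR, TAG_CODE, TAG_HEAP_BUFFER } tag_t;

    typedef struct {
        uint32_t value;  /* what the application reads and writes */
        tag_t    tag;    /* what the protection circuitry checks  */
    } tagged_word_t;

The application never sees the tags; they exist solely for the protection logic to consult.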

We also need a way to describe the things we want checked and enforced, whether they are security, privacy, or safety related. We call these descriptions micropolicies. As the micro prefix implies, they are small: tens or dozens of lines of code rather than millions. Small means it is much more realistic to verify their correctness. Micropolicies are really just a set of rules that describe things you want to verify about the state of the system as each machine instruction is executed. By preserving the compiler information and this run-time information (that is, the metadata), we can enforce critical security rules like “do not ever allow a buffer to be overwritten” or “make sure every call and every return from a function only goes where the program intended it to go.”
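A single rule can be sketched in a few lines. Continuing the hypothetical tagged-word sketch above (illustrative names, not an actual micropolicy implementation), here is the heart of a return-integrity rule:

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { TAG_NONE, TAG_RETURN_ADDR } tag_t;
    typedef enum { OP_LOAD, OP_STORE, OP_CALL, OP_RETURN } opcode_t;
    typedef struct { uint32_t value; tag_t tag; } tagged_word_t;

    /* One micropolicy rule: a return instruction may only jump through
     * a word still tagged as the return address its matching call
     * stored. Anything else looks like a hijacked return. */
    bool return_policy_allows(opcode_t op, tagged_word_t target) {
        if (op != OP_RETURN)
            return true;                      /* rule does not apply   */
        return target.tag == TAG_RETURN_ADDR; /* else: deny the return */
    }

Because the whole policy is this small, verifying its correctness is a tractable exercise rather than an impossible one.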

The third thing we need is a way to instantly stop processing when a problem occurs—before any damage is done. We call the mechanism to do this a hardware interlock. It has to be hardware because, unlike software, hardware is unassailable over a network. The simple idea here is to watch as each and every instruction is executed, and to use all appropriate metadata to apply relevant policies and identify any violations. If everything is A-Okay, let execution proceed normally; if there is a violation, do not allow the instruction to complete, and handle the exception safely and appropriately.
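In pseudo-C, the interlock behaves like the loop below. This is a software sketch of hardware behavior, with hypothetical names; in a real system the check is wired into the pipeline, not written in code:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct cpu cpu_t;                 /* processor state: registers, memory, tags */
    typedef struct { uint32_t bits; } insn_t; /* one machine instruction                  */

    insn_t fetch(cpu_t *cpu);
    bool   policies_allow(cpu_t *cpu, insn_t insn);
    void   commit(cpu_t *cpu, insn_t insn);
    void   raise_policy_violation(cpu_t *cpu, insn_t insn);

    /* Check every instruction against the active micropolicies before
     * its effects are committed; a violating instruction never completes. */
    void step(cpu_t *cpu) {
        insn_t insn = fetch(cpu);               /* watch each instruction    */
        if (policies_allow(cpu, insn))
            commit(cpu, insn);                  /* all rules passed: proceed */
        else
            raise_policy_violation(cpu, insn);  /* squash it and trap safely */
    }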

Finally, we need to accomplish these goals using today’s processors. We can’t develop and implement a more secure processor architecture overnight. It will happen, but it will take decades, and we can’t afford to wait. In the interim, we need to bolt something onto our existing processor technology to provide the security it is so sorely lacking.

As medical technological capabilities increase along with the proliferation of embedded systems and the IoT, cybersecurity risk is becoming very real for the medical industry. While the solution may be a combination of regulations, education, and technology, the fact remains that we must act quickly to secure our most critical infrastructure from attack. Beginning the security conversation is the first step of the process. By bringing awareness to the issue and educating the right audiences, we as an industry can begin to put the right pieces, and the right processors, in place to ensure the health and safety of patients as the opportunities for connected medical devices expand.

About The Author

Jothy Rosenberg, Dover Microsystems