Medical Device Use Error: Focus on the Severity of Harm

By Allison Strochlic, Michael Wiklund

Severity of harm is what ultimately matters most in terms of protecting patients from the consequences of use errors.

In medical device design, the overarching challenge is ensuring the safety and effectiveness of the device as a whole. It’s one thing to make sure that an electromechanical component will function reliably and safely over many years (e.g., for one million cycles). It’s something else to prevent use errors, another type of failure, and make sure that people will not err when operating a medical device. Certain use errors have the potential to cause serious harm. This reality has led FDA and other regulators to call upon medical device manufacturers to mitigate the risk of use errors that could cause significant harm, regardless of their likelihood.

A task analysis of any medical device of even moderate complexity is likely to reveal hundreds of potential use errors (i.e., mistakes). Consider, for example, a glucose meter that people with diabetes use to test their blood sugar levels. A granular analysis of the initial and seemingly simple task of inserting a blood test strip into the meter reveals dozens of potential use errors, including the following:

  • Insert the wrong type of strip (ordered the wrong ones online)
  • Insert a strip that has passed its expiration date into a meter that is programmed to reject an expired strip, a used strip, or the wrong strip
  • Insert the test strip backwards (blood deposition end first instead of electrode end first)
  • Insert the test strip upside down (blood deposition area label facing downward)
  • Handle the test strip with hands still wet with antibacterial gel and contaminate the blood deposition area
  • Insert the test strip into the cable connector port instead of the strip insertion slot, thereby damaging the strip’s electrode

It’s easy to see how a list of all foreseeable use errors becomes lengthy when all intended device uses and even some unintended uses are considered as part of a comprehensive use-related risk analysis. But, some use errors will pose a much higher risk than others and should be of greatest concern to medical device manufacturers, just as they are to regulators and device users. The focus needs to be on the critical failures.

Medical device manufacturers have traditionally calculated the risk of potential use errors the same way they calculate risk in other areas: as the numerical product of two ratings, one representing likelihood (i.e., frequency) and the other representing severity of harm. The result has been “risk priority numbers” (RPNs) indicating whether the risk posed by a particular potential use error is acceptable or whether further risk control (i.e., mitigation) is needed. Today, this approach to estimating the risk of use errors is considered outdated.
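
To make the conventional calculation concrete, here is a minimal sketch in Python. The 1-to-5 rating scales, the acceptability threshold, and the example values are illustrative assumptions, not drawn from this article or from any particular standard.

    # Conventional use-related risk scoring: RPN = likelihood x severity.
    # The 1-5 scales and the acceptability threshold are hypothetical.

    ACCEPTABLE_RPN_MAX = 8  # hypothetical threshold, not from any standard

    def risk_priority_number(likelihood: int, severity: int) -> int:
        """Return the RPN for one potential use error (1-5 scales assumed)."""
        return likelihood * severity

    def needs_mitigation(likelihood: int, severity: int) -> bool:
        """Conventional rule: mitigate only when the RPN exceeds the threshold."""
        return risk_priority_number(likelihood, severity) > ACCEPTABLE_RPN_MAX

    # Example: a rare (1/5) but severe (5/5) use error scores only 5, which
    # this scheme would deem acceptable; that is the weakness discussed below.
    print(needs_mitigation(likelihood=1, severity=5))  # prints: False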

The problem with the conventional approach is that it is quite difficult to estimate the likelihood of a use error occurring. While you can cycle a bank of switches continuously for days or weeks to determine a failure rate, you cannot do the equivalent to determine use error rates. Certainly, a usability test involving a relatively small sample of intended users will reveal some use errors. But, you cannot ask 10,000 people to program an intravenous infusion pump to deliver a continuous infusion to a patient just so you can count the few times that a rare use error occurs. Moreover, there are no tools that can predict use error rates for specific individuals using a specific medical device’s user interface in a specific use environment. Because of these inherent limitations, error likelihood should not be the driving factor in determining the need for risk mitigation.

Instead of calculating the risk posed by a particular use error by the conventional means of multiplying likelihood and severity estimates, focus on severity, and treat likelihood as merely a “value-added” piece of information that might be useful when assessing the level of residual risk after all necessary risk control measures have been implemented.

The logical objection to determining risk without actually considering likelihood is that, technically speaking, the result is no longer risk. Remember, risk = likelihood rating × severity rating. Without a likelihood rating, an essential multiplier is missing. But, for practical purposes, that’s acceptable.

The updated (i.e., modern) way to approach use-related risk analysis is to base decisions about the need for risk control measures on the severity of the harm that can result from a use error, regardless of whether the likelihood is 1 in 100, 1 in 1,000, or 1 in 10,000. Taking this approach neutralizes the potentially detrimental effects of wildly incorrect likelihood estimates: “shots in the dark” that could vary by several orders of magnitude among the individuals doing the “shooting.”
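
By contrast with the conventional RPN scheme sketched earlier, a severity-driven decision rule might look like the following, again assuming a hypothetical 1-to-5 severity scale and an illustrative cutoff for “serious harm”:

    # Severity-driven rule: the need for risk control depends on severity
    # alone; likelihood is retained only as supplementary information.
    # The 1-5 scale and the cutoff are illustrative assumptions.

    SERIOUS_HARM_CUTOFF = 3  # hypothetical: ratings of 3+ denote serious harm

    def needs_mitigation(severity: int, likelihood=None) -> bool:
        """Require risk control for any use error that could cause serious
        harm, whether its likelihood is 1 in 100 or 1 in 10,000."""
        return severity >= SERIOUS_HARM_CUTOFF

    # The rare-but-severe use error that the RPN scheme accepted above is
    # now flagged for mitigation, regardless of its likelihood rating.
    print(needs_mitigation(severity=5, likelihood=1))  # prints: True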

Arguably, the severity of harm resulting from a use error is easier to rate than the likelihood of a use error. Severity ratings can be based on medical judgments regarding the consequences of events such as a needlestick injury, unintentionally suspending the infusion of a blood pressure drug, or contaminating a blood sample with antiseptic gel. Consequently, if there is a potential for serious harm, there is a need for risk mitigation. Simple enough?

Well, unfortunately, it’s not that simple. It can be difficult to pinpoint the severity of harm because different users might suffer varying degrees of harm as a result of the same use error. Consider the needlestick injury. Depending on the individual, the consequence could be limited to pain and a trivial amount of blood loss, or it could lead to a minor infection that resolves on its own, a major infection that requires aggressive antibiotic treatment, or ultimately a systemic infection (i.e., sepsis) and death. Also, in a particularly troubling scenario, a needlestick injury could transmit a bloodborne disease (e.g., hepatitis, HIV) from a patient to a caregiver. So, it’s actually not so simple.
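
One common way to handle this variability, though not one prescribed here, is to rate a use error according to the most severe harm it could foreseeably cause. A minimal sketch of that worst-case convention, with the needlestick outcomes and 1-to-5 ratings below being illustrative assumptions:

    # When one use error can lead to several outcomes, a worst-case
    # convention rates it by the most severe foreseeable harm.
    # The outcome list and severity ratings are illustrative.

    NEEDLESTICK_OUTCOMES = {
        "pain and trivial blood loss": 1,
        "minor infection that resolves on its own": 2,
        "major infection requiring aggressive antibiotics": 4,
        "sepsis, or a transmitted bloodborne disease": 5,
    }

    def rated_severity(outcomes: dict) -> int:
        """Return the worst-case severity across all foreseeable harms."""
        return max(outcomes.values())

    print(rated_severity(NEEDLESTICK_OUTCOMES))  # prints: 5, mitigation required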

But, the focus on severity makes a lot of sense with regard to the pursuit of user interface design excellence, and safe and effective medical devices. This is because severity of harm is what ultimately matters most in terms of protecting people from the consequences of use errors.

The main point is not to dismiss a use error from further consideration of possible risk control measures just because it is highly unlikely. In the business of following FDA’s and other regulators’ human factors engineering guidance, what matters is the potential consequence of a use error (i.e., the associated harm), even if that use error seems highly unlikely to occur.

About The Authors

Allison Strochlic, UL
Michael Wiklund, UL