Building the Clinical Risk Perspective into Medical Device Manufacturing

By Thomas Maeder

The evaluation and management of risk throughout the product life cycle is the single most important concept in the regulation of medical devices. Yet it is difficult to define precisely what “risk” means or how to assess it in an industry where some 115,000 devices are produced by thousands of manufacturers, employing a dizzying array of technologies destined for use in varying health settings for every imaginable indication. So who determines the risk, and where do problems typically arise?

The medical device field is noted for rapid innovation and continuous product improvement, but efforts to make beneficial changes—devices that are better, faster, smaller, easier to use, less costly to manufacture, with new features or broader applications—may inadvertently introduce clinical risk factors invisible to engineers.

“The sophistication of how we assess risk has evolved, both at the Agency and in what we expect from firms,” says Kimber Richter, M.D., Deputy Director for Medical Affairs in the CDRH Office of Compliance. “There is flexibility allowed because of the nature of devices,” says Dr. Richter, “but we do get involved when there are recalls, and if it’s not properly done, we advise firms on where their views have strayed from ours.”

The requirements and procedures for determining risk are clearest at two points in the product life cycle: during premarket regulatory classification and the compilation of data to support submissions, and in the event of field actions when something goes wrong. Health Hazard Evaluation (HHE) and Health Risk Assessment (HRA) are the procedures FDA uses to evaluate patient or public health risk and to guide an appropriate response. But recalls are only the most extreme point at which patient safety considerations come into play.

Lessons from Guidant’s Cardiac Rhythm Management division

The problems of Guidant’s Cardiac Rhythm Management division, though several years old, remain instructive because of the exhaustive analysis and conclusions by an independent panel. Design modifications to shrink the size of an implantable defibrillator had the unforeseen consequence of permitting electrical arcing across the smaller space between critical components. According to Guidant procedures, known or alleged malfunctions were reviewed by a Product Performance Engineer (PPE), who determined the need for further investigation or tests, conducted trend analyses, and referred the problem, if necessary, to a Product Performance Committee. Opening of a trend analysis also triggered a risk assessment, and if non-negligible health risks were suspected, a formal Health Risk Assessment was performed.

Problems arose for several reasons. PPEs were underappreciated, lacked clinical orientation, and were not required to include medical personnel in their assessments. Consistent procedures did not exist for escalating product failure and health risk information to senior management; instead, decisions were made at various levels by ad hoc teams with considerable latitude in the methodologies and criteria they employed. PPEs might base decisions on statistical analyses of trends without understanding the real-world environment in which products were used or the possible consequences of failures. One PPE chose not to reopen a trend after a short-circuiting problem was solved through redesign, ignoring the fact that uncorrected devices were already implanted in patients.
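
That failure pattern lends itself to a concrete illustration. Below is a minimal Python sketch, with hypothetical names and thresholds rather than Guidant’s actual procedure, contrasting a trend-only closure rule with one that escalates whenever a clinician judges the worst case severe and uncorrected devices remain implanted:

```python
from dataclasses import dataclass

@dataclass
class FailureTrend:
    failures: int              # confirmed occurrences of this failure mode
    units_shipped: int         # units shipped with the affected design
    units_implanted: int       # uncorrected units still in patients
    fix_released: bool         # a redesign has stopped new occurrences
    worst_case_severity: str   # clinician-assigned: "negligible" ... "death"

def trend_only_decision(t: FailureTrend, threshold: float = 1e-4) -> bool:
    # The flawed rule: close the trend once the observed rate looks low
    # or a redesign has "solved" the problem in new production.
    return t.failures / t.units_shipped >= threshold and not t.fix_released

def clinically_informed_decision(t: FailureTrend, threshold: float = 1e-4) -> bool:
    # A fix in new production does nothing for the implanted base; a severe
    # worst case escalates even when the observed rate is tiny.
    if t.units_implanted > 0 and t.worst_case_severity in ("serious injury", "death"):
        return True
    return t.failures / t.units_shipped >= threshold

t = FailureTrend(failures=3, units_shipped=100_000, units_implanted=40_000,
                 fix_released=True, worst_case_severity="death")
print(trend_only_decision(t))            # False: trend closed after the redesign
print(clinically_informed_decision(t))   # True: implanted patients remain at risk
```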

Interestingly, and disturbingly, company personnel were largely oblivious to the lack of clinical input. While design engineers, PPEs, and members of the health risk and product performance committees all felt that they had ready access to medical expertise from in-house clinicians, the independent panel discovered that these clinicians were in fact all assigned to clinical trial design, physician education and communication, or work in the animal lab. None had patient safety as a specific job responsibility. A key recommendation of the panel was that a medical officer be identified “whose primary role is to serve as an advocate for patient safety, risk assessment, and post-market surveillance.”

Inadequate clinical oversight rarely appears as a specific observation in FDA warning letters, but it is often implied by critiques of MDR reporting or of design controls, notably 820.30(g), which requires that design validation “shall ensure that devices conform to defined user needs and intended uses and shall include testing of production units under actual or simulated use conditions.” Warning letters citing this section mention, among other problems, safety issues arising from widespread use practices that diverged from the manufacturer’s expectations, collection of too little patient information on complaints to determine whether a safety issue existed, risk stratification not linked to the probability of harm, attribution of problems to user error or inadequate user training, and performance issues that seemed acceptably rare to engineers but had, in fact, posed a risk of serious injury or death to patients.

Handling of complaints a good indicator

FDA inspections typically begin with the complaint file. A company’s handling of complaints is a good indicator of its understanding of, and ability to implement, the Quality System Regulation; beyond that, the procedures and reasoning behind complaint handling reveal the types of product issues that arise, how they are evaluated, and what criteria and procedures are used to resolve complaints or to escalate them to the level of reportable events and, when necessary, to CAPA or even a recall. A critical element in determining the significance of complaints is the clinical perspective.

“There are two aspects to complaints,” says Alan Cariski, M.D., J.D., Vice President of Worldwide Medical Affairs and Medical Safety Officer at LifeScan. “One is the purely formal complaint handling system, where you have a way to take in the complaint, conduct an investigation, and decide about reporting. But even if you satisfy all of those formal requirements, you have to show that you’re doing something with the data that you’re getting back. Trending alone may not be adequate. Sometimes, for a unique event that’s important, one time is enough to tell you that there’s a problem, and you need someone to recognize that. To have a lay person pick up the phone, take the complaint, and decide if it was an adverse event that should be reported, is unreasonable,” explains Cariski.

The majority of medical device companies are relatively small, and few have full-time staff clinicians. Innovations may come not only from existing device manufacturers or clinicians but also from technologies transferred from other fields, developed by skilled engineers with backgrounds in the aerospace, telecommunications, automotive, or other industries: people knowledgeable about FMEAs and fault tree analysis, but unfamiliar with the nature of clinical practice, off-label use, the variability of patients, and a host of other complicating factors.
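
The gap shows up even inside the engineer’s own toolkit. A standard FMEA risk priority number (RPN) multiplies severity, occurrence, and detection ratings, each on a 1-to-10 scale; the hypothetical pair of failure modes below shows how a rare but clinically catastrophic failure can rank below a frequent nuisance failure when no one with clinical judgment has set the severity score:

```python
# Standard FMEA arithmetic: RPN = severity * occurrence * detection,
# each rated 1-10. Both failure modes below are hypothetical.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    return severity * occurrence * detection

nuisance_alarm    = rpn(severity=3,  occurrence=8, detection=4)  # 96
rare_lethal_short = rpn(severity=10, occurrence=2, detection=3)  # 60

# The frequent nuisance outranks the rare lethal failure (96 > 60);
# exactly the inversion a clinical reviewer exists to catch.
print(nuisance_alarm, rare_lethal_short)
```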

“I was an engineer in college, and I know how gratifying it can be to make decisions based on statistics,” says Gaurang Patel, M.D., Medical Director, Global Pharmacovigilance & Epidemiology at Cephalon (Frazer, PA), and formerly the Medical Director, Complaint Handling & Safety Surveillance at Cordis (Bridgewater, NJ). “But physiological problems don’t always translate into statistics, and you can’t always make decisions based on numbers. Sometimes FDA doesn’t focus on the smaller companies, and their mistakes slide under the radar. But companies grow, and people lose track of some of the fundamentals and neglect the regulations. But doctors understand that their decisions have far-reaching implications, so they don’t make cavalier decisions. If they don’t know something, they will scream for help.”

“FDA understands that products will fail,” says Stephen Terman, Principal Attorney at Olsson, Frank & Weeda, and former Associate Chief Counsel for Enforcement at FDA. “You can’t make perfect components or products that work 100 percent of the time. They will fail. The question is, when they do fail, what do you do? Do you immediately go into the field and fix all of the problems? With defibrillators, we had a failure rate of 0.003. That’s very small. If you have 100,000 products in the field and one failure, do you need to recall the rest? You need clinical input to assess the risk of the failure mode and the appropriate response. A company with that failure rate, even with a serious health consequence, may not think that they need to fix 99,999 products. But FDA’s mantra is: ‘One death is too many’.”
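
The arithmetic behind Terman’s point is worth making explicit. The short worked example below assumes, purely for illustration, a per-device failure probability of 0.003 percent; under a simple binomial model, even that tiny rate makes some failure in a large installed base nearly certain:

```python
# Worked example of the scale problem in the quote above. The per-device
# failure probability is an assumption chosen for illustration.

n = 100_000     # devices in the field
p = 0.00003     # assumed per-device failure probability (0.003 percent)

expected_failures = n * p              # 3.0 failures expected
p_at_least_one = 1 - (1 - p) ** n      # ~0.95: some failure is near-certain

print(f"expected failures: {expected_failures:.1f}")
print(f"P(at least one failure): {p_at_least_one:.2f}")
```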

Assessing the reasonable worst case scenario

“Kimber Richter has told us to ask ‘What’s the reasonable worst case scenario if this defective product were to be used clinically?’” says Edward C. Wilson, a partner at Hogan Lovells. “How is a person or a team that doesn’t have clinical expertise going to make that analysis or assessment? They know what it’s cleared or approved for, what its labeling says. An engineer can understand mean time to failure or have models to predict failure, or know when a device doesn’t meet specifications or the parts fit poorly and leak, which is all important, but I don’t know that they’re trained to know what’s the worst that could happen in a clinical environment.”
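
Mean time to failure, the metric Wilson mentions, does translate into a patient-level number, but only under added assumptions. A minimal sketch, assuming an exponential failure model and hypothetical figures, converts an MTTF into the probability of failure within an implant’s service life; what no such model supplies is the clinical meaning of that failure:

```python
import math

# Assuming an exponential failure model, P(failure by time t) = 1 - exp(-t/MTTF).
# Both numbers below are hypothetical.

mttf_years = 200.0     # assumed mean time to failure
service_years = 7.0    # assumed implant service life

p_fail = 1 - math.exp(-service_years / mttf_years)
print(f"P(failure within {service_years:.0f} years): {p_fail:.3f}")  # ~0.034
```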

Engineers working in a clinical vacuum may not even be aware of certain risks – the possible transmission of blood-borne pathogens, the chance of air embolism, the constellation of other devices and products being used by clinicians simultaneously, or work-arounds doctors may routinely employ. In addition, suppliers – and suppliers’ suppliers – often fail to appreciate the strict rules for design modifications, and without careful oversight by the ultimate manufacturer and a risk-based assessment of changes, well-intended tinkering that would be innocuous or praiseworthy in other industries may pose unacceptable safety issues in the medical products world.

“You can’t get a black box and ensure that your product has the quality you need,” says Wilson. “It’s going to be hard to educate all suppliers on clinical impact, because they can’t internalize it. Some suppliers don’t necessarily want to know where their products are going: they don’t want to make warranties, they don’t want to get sued. So it’s up to the manufacturer to maintain rigorous controls over those suppliers, especially those making critical components, and to instill the importance of change notification.”

A culture of patient safety

Regular, appropriate clinical input and review is important throughout the product life cycle. What is “appropriate” varies with the product and the company. The key, however, as Thomas Morrissey, M.D., Vice President of Product Safety at Edwards Lifesciences, has repeatedly stated in public presentations, is creating a culture of patient safety in which the clinical perspective on patient and public health risk serves as a beacon guiding post-market activities.

In a recent Regulatory Focus article, Morrissey suggests that the principles behind Health Hazard Evaluation – though technically a term FDA applies only to recall evaluations – might serve as a useful conceptual template for complaint analysis and escalation, CAPA, and other post-market activities, helping wean decision makers within the company from rote filling out of forms to satisfy regulations, and bridging the conceptual gap between a product malfunction or non-conformance and a clinical safety issue.
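
Read concretely, the suggestion might look like the sketch below: the questions an HHE asks, paraphrased here from the recall-evaluation factors in 21 CFR 7.41, carried into a structured complaint record so that every escalation decision forces a clinical answer. The field names are hypothetical, not Morrissey’s or FDA’s format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HazardAssessment:
    # Fields paraphrase the health hazard factors FDA weighs in recall
    # evaluations (21 CFR 7.41); the names themselves are hypothetical.
    injuries_already_occurred: bool     # has the defect already caused harm?
    at_risk_populations: List[str]      # e.g. pediatric, immunocompromised
    seriousness: str                    # clinician-graded worst-case severity
    likelihood_of_occurrence: str       # remote / occasional / probable
    immediate_consequences: str
    long_range_consequences: str
    clinical_reviewer: str              # the accountable medical reviewer

    def escalate(self) -> bool:
        return (self.injuries_already_occurred
                or self.seriousness in ("serious", "life-threatening"))
```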

“In small companies it may not be feasible to have someone on staff full-time when they’re needed only 10 to 20 percent of the time,” advises Dr. Patel. “But there should always be one point of contact in the company who is very compliance- and safety-oriented. That individual needs to be very visible, and must have the credibility to make issues known to upper management.”
