
AI/ML-based SaMD: Trends in Litigation

By Zuhal Reed

This pioneering area of technology comes with new risks and questions of liability.

Artificial intelligence and machine learning-based software as a medical device (AI/ML-based SaMD) comes in two varieties: those that use locked algorithms and those that use adaptive algorithms. A locked algorithm is fixed at the time of approval and produces the same result from the same input, while an adaptive algorithm continues to change as it learns from real-world data. To date, AI/ML-based SaMD using adaptive algorithms have not been approved by the FDA; however, there has been movement toward creating an approval process. FDA-approved AI/ML-based SaMD utilizing locked algorithms are already on the market and growing rapidly in number.
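To make the distinction concrete, the following is a minimal Python sketch of the two varieties. Everything in it is hypothetical: the class names, the linear risk score, and the update rule are invented for illustration and do not describe any actual FDA-cleared SaMD.

```python
# Hypothetical sketch only: class names, the linear risk score, and the
# update rule are invented for illustration, not taken from a real device.

class LockedModel:
    """Locked algorithm: parameters are frozen at clearance time."""

    def __init__(self, weights):
        self.weights = weights  # fixed; identical inputs always yield identical outputs

    def predict_risk(self, features):
        # Simple linear risk score as a stand-in for the cleared model.
        return sum(w * x for w, x in zip(self.weights, features))


class AdaptiveModel(LockedModel):
    """Adaptive algorithm: keeps learning from real-world data after deployment."""

    def update(self, features, observed_outcome, learning_rate=0.01):
        # One gradient-style step: after each update the "design" has shifted,
        # so the deployed model may no longer match the one originally approved.
        error = observed_outcome - self.predict_risk(features)
        self.weights = [w + learning_rate * error * x
                        for w, x in zip(self.weights, features)]
```

The legal significance sits in the last lines: each call to update produces, in effect, a slightly different device than the one the regulator reviewed.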

Promoting safety and mitigating potential products liability risks will require new risk management, quality management, and post-market surveillance strategies. How will a medical device that is constantly changing fare in complex healthcare litigation? Will this fall under the purview of products liability? Does AI/ML-based SaMD reach the threshold of a “product” given its ever-changing and abstract nature? This burgeoning area of technology will undoubtedly enter our courtrooms, but exactly how these “products” are to be analyzed under our current legal framework remains a looming question. Further, the lack of clear regulations has prompted suggestions that, in lieu of appropriate regulatory oversight, the litigation process may help create greater accountability for manufacturers of SaMD.

Is This Software a Product?

First, we have to tackle the issue of whether AI/ML-based SaMD can be categorized as a “product” in a products liability claim. Under the Restatement (Third) of Torts, to bring a successful products liability claim, the plaintiff must prove that he or she was harmed by a “product manufactured or sold by the defendant that contained a manufacturing or design defect or failed to warn of a potential safety hazard, and the product was being used in a reasonably foreseeable manner when the harm occurred.”

The Court of Appeals for the Third Circuit examined whether software can be deemed a product in Rodgers v. Christie, 795 F. App’x 878 (3d Cir. 2020). In that case, the software at issue was a multifactor risk estimation model known as the Public Safety Assessment (PSA), which the New Jersey state court system used to decide whether to release a defendant prior to trial. The Court of Appeals ruled that the software was not a product because it was (1) not distributed for commercial purposes and (2) not tangible; it was meant to aid the decision-making process. Although this case did not involve a medical device, it has implications for AI/ML-based SaMD: AI/ML-based SaMD with a locked algorithm similarly aids medical care providers in deciding the best course of treatment.

Accordingly, most legal issues that arise out of injuries related to the use of AI/ML-based SaMD sound in medical malpractice rather than products liability. However, as AI/ML-based SaMD with adaptive algorithms are introduced into the market, the software itself will be providing medical care, and that will inevitably shift the legal analysis.

Avenues of Liability

Products liability claims may sound in strict liability, negligence, and breach of warranty. Strict liability claims can be brought under the legal theories of manufacturing defect, design defect, or failure to warn. A plaintiff is unlikely to claim a manufacturing defect with AI/ML-based SaMD due to the abstract nature of the technology, which means these devices will deal primarily with the theories of design defect and failure to warn. One issue with a design defect claim is that it assumes a fixed design. AI/ML-based SaMD using an adaptive algorithm does not maintain a fixed design; rather, it is constantly changing with the use of real-world data. This issue may lead to a push toward expanding the defendant’s post-sale duty to warn, placing additional post-sale monitoring burdens on both the SaMD designer and manufacturer. The problem of an ever-changing design does not plague AI/ML-based SaMD using a locked algorithm.

Claims brought under a design defect theory for software using locked algorithms will rely on either the consumer expectation test or the risk-utility test. The consumer expectation test is applied to products that are used so frequently that an underlying consumer expectation is created: a frequently used medical device within a specific category may be deemed defective in design if a reasonable consumer would find it defective. Due to the complexity of the software and the scarcity of FDA-approved AI/ML-based SaMD, these devices have not yet had enough exposure to create a consumer expectation. As a result, the consumer expectation test will likely not be applicable.

An alternative route when bringing a claim under the theory of design defect is the risk-utility test. Under the risk-utility test, “a product is defective if a ‘reasonable person’ would determine that the probability and seriousness of harm that the product will cause actually outweighs the burden or costs of taking precautions.”1 In practice, to prevail in a products liability case brought under a theory of design defect, the plaintiff would need to use the risk-utility test to illustrate that the risks of AI/ML-based SaMD outweigh its benefits. The court would need to consider several factors, such as the potential for harm caused by the SaMD, the type of harm, the possibility of errors in the algorithm, and the possibility and cost of a safer alternative design.
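For illustration only, here is a schematic Python sketch of the arithmetic implicit in that balancing, in the spirit of the Hand formula (expected harm versus the cost of precaution). The probabilities and dollar figures are invented, and no court reduces the risk-utility test to a literal calculation.

```python
# Schematic illustration of the risk-utility balance; all figures are
# invented and carry no legal weight.

def risk_outweighs_utility(p_harm, harm_cost, precaution_cost):
    """Return True if expected harm exceeds the cost of the safer alternative."""
    expected_harm = p_harm * harm_cost
    return expected_harm > precaution_cost

# Hypothetical example: a 0.1% chance of a $2M injury versus a $5,000
# alternative-design cost. Expected harm is $2,000, which is less than
# $5,000, so on these numbers alone the balance tips against a defect finding.
print(risk_outweighs_utility(0.001, 2_000_000, 5_000))  # False
```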

Lastly, a plaintiff may bring a strict liability claim under a failure to warn theory. Under this theory, the manufacturer of the software will be held liable if it can be shown that it failed to instruct or warn of a risk that was known, or should have been known, when the software was being used in a reasonably foreseeable manner. In most states, the learned intermediary doctrine is a defense available to the manufacturer: the manufacturer can argue that its duty was discharged upon the delivery of adequate instructions to the physician who made the decision to use the software. The usability of this defense may become an issue once AI/ML-based SaMD utilizing adaptive algorithms are released onto the market. With an adaptive algorithm, the medical device itself would be making the decision and treating the patient, and the learned intermediary doctrine may not apply in such a case. Additionally, if the software is eventually sold directly to the consumer, this defense may become insignificant. The issue will become clearer once the technology reaches a greater number of consumers.

An alternative to strict liability is negligence. A plaintiff may bring a negligence claim against a manufacturer or distributor of AI/ML-based SaMD under a theory of negligent design or failure to warn. A manufacturer of a medical device has a duty to exercise due care in manufacturing the device. To succeed in a negligence claim, the plaintiff must show that the manufacturer breached this duty by failing to take proper care in manufacturing the SaMD and that the breach caused an injury to the patient. The last available avenue of litigation is breach of warranty. This avenue is not feasible at this time, since AI/ML-based SaMD is not sold directly to the consumer. However, as wearables become increasingly commonplace, breach of warranty may become a feasible avenue for litigation in the future.

With increasing avenues of liability, it will become important for manufacturers and distributors of AI/ML-based SaMD to consider trends in litigation when designing their software, when determining the level of involvement of the learned intermediary, and when deciding whether to market directly to the consumer. To get in front of these potential issues, developers of AI/ML-based SaMD should engage with regulatory and risk management experts early and often.

Reference

  1. “Risk Utility Test – Determining Design Defects.” The Pearce Law Firm.

About The Author

Zuhal Reed, Medmarc
