Potential Jurors Favor Use of Artificial Intelligence in Precision Medicine

Physicians who follow artificial intelligence (AI) advice may be considered less liable for medical malpractice than is commonly thought, according to a new study of potential jury candidates in the United States, published in the January issue of The Journal of Nuclear Medicine (JNM). The study provides the first data on physicians’ potential liability for using AI in personalized medicine, where recommendations often deviate from the standard of care.

“New AI tools can assist physicians in treatment recommendations and diagnostics, including the interpretation of medical images,” remarked Kevin Tobia, JD, PhD, assistant professor of law at the Georgetown University Law Center in Washington, D.C. “But if physicians rely on AI tools and things go wrong, how likely is a juror to find them legally liable? Many such cases would never reach a jury, but for one that did, the answer depends on the views and testimony of medical experts and the decision-making of lay juries. Our study is the first to focus on that last aspect, studying potential jurors’ attitudes about physicians who use AI.”

To determine potential jurors’ judgments of liability, the researchers conducted an online study of a representative sample of 2,000 U.S. adults. Each participant read one of four scenarios in which an AI system provided a drug dosage recommendation to a physician. The scenarios varied the AI recommendation (a standard or nonstandard dosage) and the physician’s decision (to accept or reject that recommendation). In all scenarios, the physician’s decision subsequently caused harm to the patient.

Study participants then evaluated the physician’s decision by rating their agreement that the treatment decision was one that could have been made by “most physicians” and by “a reasonable physician” in similar circumstances. Higher scores indicated greater agreement and, therefore, lower perceived liability.

Results from the study showed that participants used two factors to evaluate physicians’ use of medical AI systems: (1) whether the treatment provided was standard and (2) whether the physician followed the AI recommendation. Participants judged physicians who accepted a standard AI recommendation more favorably than those who rejected it. However, physicians who received a nonstandard AI recommendation were not judged any safer from liability for rejecting it.

While prior literature suggests that laypersons are very averse to AI, this study found that they are, in fact, not strongly opposed to a physician’s acceptance of AI medical recommendations. This finding suggests that the threat of a physician’s legal liability for accepting AI recommendations may be smaller than is commonly thought.

In an invited perspective on the JNM article, W. Nicholson Price II and colleagues noted, “Liability is likely to influence the behavior of physicians who decide whether to follow AI advice, the hospitals that implement AI tools for physician use and the developers who create those tools in the first place. Tobia et al.’s study should serve as a useful beachhead for further work to inform the potential for integrating AI into medical practice.”

In an associated JNM article, the study authors were interviewed by Irène Buvat, PhD, and Ken Herrmann, MD, MBA, both leaders in the nuclear medicine and molecular imaging field. In the interview, the authors discussed whether the results of their study might hold true in other countries, whether AI could be considered a type of “medical expert,” and the advantages of using AI from a legal perspective, among other topics.
