ABSTRACT
Autonomous AI systems in medicine promise improved outcomes but raise concerns about liability, regulation, and costs. With the advent of large language models, which can understand and generate medical text, addressing these concerns becomes more urgent, as such models create opportunities for more sophisticated autonomous AI systems. This perspective explores the liability implications for physicians, hospitals, and creators of AI technology, as well as the evolving regulatory landscape and payment models. Physicians may be favored in malpractice cases if they follow rigorously validated AI recommendations. However, AI developers may face liability for failing to adhere to industry-standard best practices during development and implementation. The evolving regulatory landscape, led by the FDA, seeks to ensure transparency, evaluation, and real-world monitoring of AI systems, while payment models such as the MPFS, NTAP, and commercial payers adapt to accommodate them. The widespread adoption of autonomous AI systems could streamline workflows and allow physicians to concentrate on the human aspects of healthcare.
ABSTRACT
Medical professionals are increasingly required to use digital technologies as part of care delivery, and this may create a risk of medical error and subsequent malpractice liability. For example, if a medical error occurs, should it be attributed to the clinician or to the artificial intelligence-based clinical decision-support system? In this article, we identify and discuss digital health technology-specific risks for malpractice liability and offer practical advice for mitigating malpractice risk.