Results 1 - 2 of 2
1.
Lancet Digit Health ; 6(9): e662-e672, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39179311

ABSTRACT

Amid the rapid integration of artificial intelligence into clinical settings, large language models (LLMs), such as Generative Pre-trained Transformer-4, have emerged as multifaceted tools with potential for health-care delivery, diagnosis, and patient care. However, the deployment of LLMs raises substantial regulatory and safety concerns. Because of their high output variability, poor inherent explainability, and the risk of so-called AI hallucinations, LLM-based health-care applications that serve a medical purpose face regulatory challenges for approval as medical devices under US and EU laws, including the recently passed EU Artificial Intelligence Act. Despite unaddressed risks for patients, including misdiagnosis and unverified medical advice, such applications are already available on the market. The regulatory ambiguity surrounding these tools creates an urgent need for frameworks that accommodate their unique capabilities and limitations. Alongside the development of these frameworks, existing regulations should be enforced. If regulators hesitate to enforce existing regulations in a market dominated by large technology companies, the consequences of harm to laypeople will force belated action, damaging the potential of LLM-based applications for layperson medical advice.


Subject(s)
Artificial Intelligence , Humans , United States , Delivery of Health Care , European Union
2.
NPJ Digit Med ; 6(1): 227, 2023 Dec 07.
Article in English | MEDLINE | ID: mdl-38062115