Results 1 - 20 of 275
1.
Crit Care; 28(1): 301, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267172

ABSTRACT

In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, understanding the rationale behind Artificial Intelligence (AI)-driven decisions is essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain open challenges, and even as XAI grows as a field, trade-offs between performance and explainability may be unavoidable.
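To make the performance-explainability trade-off concrete, a minimal sketch of post-hoc feature attribution, one common XAI technique, follows. It assumes the shap and scikit-learn packages; the risk model, feature names, and data are entirely synthetic inventions for illustration, not anything described in the article.

```python
# Illustrative sketch (not from the article): post-hoc explanation of a
# hypothetical clinical risk model with SHAP. All data are synthetic.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # e.g. lactate, MAP, age, SpO2
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic adverse outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature Shapley attributions for each
# prediction, giving a ranked view of what drove an individual risk score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
print(shap_values)
```

For a tree ensemble, attributions like these come at little extra cost; for more opaque models, approximate explainers trade fidelity for speed, mirroring the balance the abstract describes.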


Subjects
Artificial Intelligence, Humans, Artificial Intelligence/trends, Artificial Intelligence/standards, Critical Care/methods, Critical Care/standards, Clinical Decision-Making/methods, Physicians/standards
7.
JAMA; 332(10): 789-790, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39133500

ABSTRACT

This Viewpoint discusses a suggested framework of local registries to record and track all health artificial intelligence technologies used in clinical care, with the goal of providing transparency on these technologies and helping speed adoption while also protecting patient well-being.


Subjects
Artificial Intelligence, Digital Health, Registries, Humans, Artificial Intelligence/standards, Registries/standards, Digital Health/standards, Practice Guidelines as Topic, Federal Government, United States, Risk Evaluation and Mitigation/standards
8.
JAMA; 332(10): 787-788, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39133493

ABSTRACT

This Viewpoint highlights the potential for artificial intelligence (AI) health care tools to introduce unintended patient harm; calls for an efficient, rigorous approach to AI testing and certification that is the shared responsibility of developers and users; and makes recommendations to inform such an approach.


Subjects
Artificial Intelligence, Certification, Digital Health, Medical Informatics, Humans, Artificial Intelligence/legislation & jurisprudence, Artificial Intelligence/standards, Medical Informatics/legislation & jurisprudence, Medical Informatics/standards, United States, Patient Safety/standards, Digital Health/legislation & jurisprudence, Digital Health/standards
9.
Mil Med; 189(9-10): 244-248, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39028176

ABSTRACT

Artificial intelligence (AI) has garnered significant attention for its pivotal role in the national security and health care sectors, yet its use in military medicine remains relatively unexplored despite its immense potential. AI operates through evolving algorithms that process extensive datasets, continuously improving accuracy and emulating human learning. Generative AI, a type of machine learning, uses algorithms to generate new content, such as images, text, videos, audio, and computer code. These models employ deep learning to encode simplified representations of their training data and generate new work that resembles the original without being identical (see the sketch below). Although many AI applications in military medicine are theoretical, the U.S. Military has implemented several initiatives, often without widespread awareness among its personnel. This article sheds light on two resilience initiatives spearheaded by the Joint Artificial Intelligence Center, now the Chief Digital and Artificial Intelligence Office: enhancing commanders' dashboards for predicting troop behaviors and developing models to forecast troop suicidality. It also outlines 5 key AI applications within military medicine: (1) clinical efficiency and routine decision-making support, (2) triage and clinical care algorithms for large-scale combat operations, (3) patient and resource movements in the medical common operating picture, (4) health monitoring and biosurveillance, and (5) medical product development. Despite its promise, AI carries inherent risks and limitations that require careful consideration and discussion. The article also advocates a forward-thinking approach so that the U.S. Military can effectively leverage AI to advance military health and overall operational readiness.
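As a toy illustration of the abstract's description of generative models, encoding a simplified representation of training data and reconstructing new output that resembles it, here is a minimal autoencoder sketch. It assumes PyTorch and uses random synthetic data; it reflects nothing about the military systems discussed.

```python
# Illustrative sketch (not from the article): a tiny autoencoder learns a
# compressed latent representation and reconstructs outputs resembling,
# but not identical to, its inputs.
import torch
from torch import nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=32, latent=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, latent), nn.ReLU())
        self.decoder = nn.Linear(latent, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
model = AutoEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 32)  # stand-in for real training data

for step in range(200):
    recon = model(data)
    loss = nn.functional.mse_loss(recon, data)  # reconstruction objective
    opt.zero_grad()
    loss.backward()
    opt.step()

# Decoding a perturbed latent vector yields output that resembles the
# training data without duplicating any single example.
sample = model.decoder(model.encoder(data[:1]) + 0.1 * torch.randn(1, 4))
print(sample.shape)
```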


Subjects
Artificial Intelligence, Military Medicine, Artificial Intelligence/trends, Artificial Intelligence/standards, Humans, Military Medicine/methods, Military Medicine/trends, Algorithms, United States, Military Personnel/psychology, Military Personnel/statistics & numerical data
11.
Article in German | MEDLINE | ID: mdl-39017712

ABSTRACT

Clinical decision support systems (CDSS) based on artificial intelligence (AI) are complex socio-technical innovations, increasingly used in medicine and nursing to improve the overall quality and efficiency of care while also addressing limited financial and human resources. However, beyond such intended clinical and organisational effects, far-reaching ethical, social, and legal implications of AI-based CDSS for patient care and nursing are to be expected, and to date these normative-social implications have not been sufficiently investigated. The BMBF-funded project DESIREE (DEcision Support In Routine and Emergency HEalth Care: Ethical and Social Implications) has developed recommendations for the responsible design and use of clinical decision support systems. This article focuses primarily on ethical and social aspects of AI-based CDSS that could negatively affect patient health. Our recommendations are intended to complement existing guidance and are divided into the following fields of action, relevant across all stakeholder groups: development, clinical use, information and consent, education and training, and (accompanying) research.


Subjects
Artificial Intelligence, Decision Support Systems, Clinical, Humans, Artificial Intelligence/ethics, Artificial Intelligence/standards, Decision Support Systems, Clinical/ethics, Decision Support Systems, Clinical/standards, Germany, Nursing Care/ethics, Nursing Care/methods, Nursing Care/standards, Practice Guidelines as Topic, Software Design
13.
BMJ Open Qual; 13(2), 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830730

ABSTRACT

BACKGROUND: Manual chart review using validated assessment tools is a standardised methodology for detecting diagnostic errors, but it requires considerable human resources and time. ChatGPT, a recently developed artificial intelligence chatbot based on a large language model, can effectively classify text given suitable prompts, and may therefore be able to assist manual chart review in detecting diagnostic errors. OBJECTIVE: This study aimed to clarify whether ChatGPT could correctly detect diagnostic errors, and the factors possibly contributing to them, from case presentations. METHODS: We analysed 545 published case reports that included diagnostic errors. We entered the texts of the case presentations and the final diagnoses, together with several original prompts, into ChatGPT (GPT-4) to generate responses comprising a judgement on the presence of a diagnostic error and its contributing factors (a hypothetical sketch of such a call follows below). Contributing factors were coded according to three taxonomies: Diagnosis Error Evaluation and Research (DEER), Reliable Diagnosis Challenges (RDC), and Generic Diagnostic Pitfalls (GDP). ChatGPT's responses on contributing factors were compared with those from physicians. RESULTS: ChatGPT correctly detected diagnostic errors in 519/545 cases (95%) and coded significantly more contributing factors per case than physicians: DEER (median 5 vs 1, p<0.001), RDC (median 4 vs 2, p<0.001), and GDP (median 4 vs 1, p<0.001). The contributing factors most frequently coded by ChatGPT were 'failure/delay in considering the diagnosis' (315, 57.8%) in DEER, 'atypical presentation' (365, 67.0%) in RDC, and 'atypical presentation' (264, 48.4%) in GDP. CONCLUSION: ChatGPT accurately detects diagnostic errors from case presentations and may be more sensitive than manual review in detecting contributing factors, especially 'atypical presentation'.
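A minimal sketch of the kind of prompted classification call the study describes, assuming the openai Python client. The prompt wording, helper function, and settings below are hypothetical illustrations, not the authors' actual prompts or code.

```python
# Illustrative sketch (not the study's actual prompts or code): asking GPT-4
# whether a case presentation contains a diagnostic error and to code
# contributing factors against the DEER taxonomy.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_case(presentation: str, final_diagnosis: str) -> str:
    # Hypothetical prompt structure; the study used its own original prompts.
    prompt = (
        "You are reviewing a published case report.\n"
        f"Case presentation: {presentation}\n"
        f"Final diagnosis: {final_diagnosis}\n"
        "1. State whether a diagnostic error occurred (yes/no).\n"
        "2. List contributing factors, coded with the DEER taxonomy."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep coding deterministic across many cases
    )
    return response.choices[0].message.content

print(review_case("A 60-year-old with chest pain ...", "Aortic dissection"))
```

Running such a helper over all 545 reports and tallying the coded factors would reproduce the shape of the study's pipeline, with physician coding as the comparator.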


Subjects
Diagnostic Errors, Humans, Diagnostic Errors/statistics & numerical data, Artificial Intelligence/standards
19.
J Med Internet Res; 26: e54705, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38776538

ABSTRACT

BACKGROUND: Recent years have seen a surge of artificial intelligence (AI) studies in the health care literature, accompanied by a growing number of proposed standards for evaluating the quality of health care AI studies. OBJECTIVE: This rapid umbrella review examines the use of AI quality standards in a sample of health care AI systematic review articles published over a 36-month period. METHODS: We used a modified version of the Joanna Briggs Institute umbrella review method, with a rapid approach informed by Tricco and colleagues' practical guide to conducting rapid reviews. The search focused on the MEDLINE database, supplemented with Google Scholar. Inclusion criteria were English-language systematic reviews of any review type, mentioning AI and health in the abstract, published during the 36-month period. For the synthesis, we summarised the AI quality standards used and the issues noted in these reviews, drawing on a set of published health care AI standards; harmonised the terms used; and offered guidance to improve the quality of future health care AI studies. RESULTS: We selected 33 review articles published between 2020 and 2022. The reviews covered a wide range of objectives, topics, settings, designs, and results. Over 60 AI approaches across different domains were identified, with varying levels of detail spanning different AI life cycle stages, making comparisons difficult. Health care AI quality standards were applied in only 39% (13/33) of the reviews and in 14% (25/178) of the original studies they examined, mostly to appraise methodological or reporting quality. Only a handful mentioned transparency, explainability, trustworthiness, ethics, or privacy. A total of 23 issues related to AI quality standards were identified in the reviews, along with a recognised need to standardise the planning, conduct, and reporting of health care AI studies and to address their broader societal, ethical, and regulatory implications. CONCLUSIONS: Despite the growing number of standards for assessing the quality of health care AI studies, they are seldom applied in practice. As the desire to adopt AI across health topics, domains, and settings grows, practitioners and researchers must stay abreast of the evolving landscape of health care AI quality standards and apply them to improve the quality of their AI studies.


Subjects
Artificial Intelligence, Artificial Intelligence/standards, Humans, Delivery of Health Care/standards, Quality of Health Care/standards