Results 1 - 4 of 4
1.
PLOS Digit Health ; 3(6): e0000513, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38843115

ABSTRACT

Healthcare delivery organizations (HDOs) in the US must contend with the potential for AI to worsen health inequities. But there is no standard set of procedures for HDOs to adopt to navigate these challenges. There is an urgent need for HDOs to present a unified approach to proactively address the potential for AI to worsen health inequities. Against this backdrop, Health AI Partnership (HAIP) launched a community of practice to convene stakeholders from across HDOs to tackle challenges related to the use of AI. On February 15, 2023, HAIP hosted an inaugural workshop focused on the question, "Our health care delivery setting is considering adopting a new solution that uses AI. How do we assess the potential future impact on health inequities?" This topic emerged as a common challenge faced by all HDOs participating in HAIP. The workshop had 2 main goals. First, we wanted to ensure participants could talk openly, without reservations, about challenging topics such as health equity. Second, we wanted to develop an actionable, generalizable framework that could be put into practice immediately. The workshop engaged 77 participants, with 100% representation from all 10 HDOs and invited ecosystem partners. In an accompanying Research Article, we share the Health Equity Across the AI Lifecycle (HEAAL) framework. We invite and encourage HDOs to test the HEAAL framework internally and share feedback so that we can continue to refine and maintain the set of procedures. The HEAAL framework reveals the challenges associated with rigorously assessing the potential for AI to worsen health inequities. Significant investment in personnel, capabilities, and data infrastructure is required, and the level of investment needed could be beyond reach for most HDOs. We look forward to expanding our community of practice to assist HDOs around the world.

2.
Healthcare (Basel) ; 12(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38891158

ABSTRACT

Since their release, the medical community has been actively exploring large language models' (LLMs) capabilities, which show promise in providing accurate medical knowledge. One potential application is as a patient resource. This study analyzes and compares the ability of the currently available LLMs, ChatGPT-3.5, GPT-4, and Gemini, to provide postoperative care recommendations to plastic surgery patients. We presented each model with 32 questions addressing common patient concerns after surgical cosmetic procedures and evaluated the medical accuracy, readability, understandability, and actionability of the models' responses. The three LLMs provided equally accurate information, with GPT-3.5 averaging the highest on the Likert scale (LS) (4.18 ± 0.93) (p = 0.849), while Gemini provided significantly more readable (p = 0.001) and understandable responses (p = 0.014; p = 0.001). There was no difference in the actionability of the models' responses (p = 0.830). Although LLMs have shown their potential as adjunctive tools in postoperative patient care, further refinement and research are imperative to enable their evolution into comprehensive standalone resources.
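The readability comparison described above is typically operationalized with a standard formula such as Flesch Reading Ease; the abstract does not name the study's exact instrument, so the metric choice here is an illustrative assumption. A minimal, self-contained sketch of scoring and comparing two model responses:

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, discounting a trailing silent 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))
```

With scores per response in hand, the between-model differences the study reports (e.g., p = 0.001 for readability) would come from a standard significance test across the 32 question items.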

3.
Eur J Investig Health Psychol Educ ; 14(5): 1182-1196, 2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38785576

ABSTRACT

Amid abundant information and growing interconnectedness among people, identifying knowledgeable individuals in specific domains has become crucial for organizations. Artificial intelligence (AI) algorithms have been employed to evaluate knowledge and locate experts in specific areas, alleviating the manual burden of expert profiling and identification. However, there is a limited body of research exploring the application of AI algorithms for expert finding in the medical and biomedical fields. This study aims to conduct a scoping review of existing literature on utilizing AI algorithms for expert identification in medical domains. We systematically searched five platforms using a customized search string, and 21 studies were identified through other sources. The search spanned studies up to 2023, and study eligibility and selection adhered to the PRISMA 2020 statement. A total of 571 studies were assessed from the search. Of these, we included six studies conducted between 2014 and 2020 that met our review criteria. Four studies used a machine learning algorithm as their model, while two utilized natural language processing; one study combined both approaches. All six studies demonstrated significant success in expert retrieval compared to baseline algorithms, as measured by various scoring metrics. AI enhances the accuracy and effectiveness of expert finding. However, more work is needed in intelligent medical expert retrieval.

4.
Eur J Investig Health Psychol Educ ; 14(5): 1413-1424, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38785591

ABSTRACT

In postoperative care, patient education and follow-up are pivotal for enhancing the quality of care and satisfaction. Artificial intelligence virtual assistants (AIVA) and large language models (LLMs) like Google BARD and ChatGPT-4 offer avenues for addressing patient queries using natural language processing (NLP) techniques. However, the accuracy and appropriateness of the information vary across these platforms, necessitating a comparative study to evaluate their efficacy in this domain. We conducted a study comparing AIVA (using Google Dialogflow) with ChatGPT-4 and Google BARD, assessing the accuracy, knowledge gap, and response appropriateness. AIVA demonstrated superior performance, with significantly higher accuracy (mean: 0.9) and lower knowledge gap (mean: 0.1) compared to BARD and ChatGPT-4. Additionally, AIVA's responses received higher Likert scores for appropriateness. Our findings suggest that specialized AI tools like AIVA are more effective in delivering precise and contextually relevant information for postoperative care compared to general-purpose LLMs. While ChatGPT-4 shows promise, its performance varies, particularly in verbal interactions. This underscores the importance of tailored AI solutions in healthcare, where accuracy and clarity are paramount. Our study highlights the necessity for further research and the development of customized AI solutions to address specific medical contexts and improve patient outcomes.
