Journal of Medicine University of Santo Tomas; (2): 1325-1334, 2023.
Article in English | WPRIM | ID: wpr-998867

ABSTRACT

The emerging field of generative artificial intelligence (GAI) and some of its well-known technologies, such as ChatGPT, Google Bard and Claude, have gained substantial popularity due to their enormous potential in healthcare applications, as seen in medically fine-tuned models such as Med-PaLM and ChatDoctor. While these advancements are impressive, the dependence of AI development on data volume and quality raises questions about the generalizability of these models. Regions with lower medical research output risk bias and misrepresentation in AI-generated content, especially when that content is used to assist clinical practice. When tested with a prompt concerning isoniazid dosing for Filipinos versus other ethnic and racial groups, GPT-4, GPT-3, Bard and Claude produced 3 out of 4 outputs containing convincing but false content, with extended prompting illustrating how response hallucination arises in GAI models. To address this, model refinement techniques such as fine-tuning and prompt ensembles are suggested; however, refining AI models for local contextualization requires data availability, data quality and quality assurance frameworks. Clinicians and researchers in the Philippines and other underrepresented regions are called to initiate capacity-building efforts to prepare for AI in healthcare. Early efforts from all stakeholders are needed to prevent the exacerbation of health inequities, especially in the new clinical frontiers brought about by GAI.
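The prompt-ensemble technique mentioned above can be illustrated with a minimal sketch: ask the same clinical question in several paraphrased forms and keep the majority answer, treating low agreement as a hallucination warning. The `query_model` function below is a hypothetical stand-in (a fixed lookup table) for an LLM API call, since the abstract does not specify an implementation; the dosing strings are placeholders, not clinical guidance.

```python
from collections import Counter

def query_model(prompt: str) -> str:
    # Hypothetical model call, stubbed with canned responses so the
    # sketch is self-contained; a real implementation would call an
    # LLM API here. One deliberately divergent answer simulates a
    # hallucinated response.
    canned = {
        "What is the standard isoniazid dose for adults?": "5 mg/kg/day",
        "State the usual adult dosing of isoniazid.": "5 mg/kg/day",
        "How much isoniazid is given daily to an adult patient?": "10 mg/kg/day",
    }
    return canned[prompt]

def prompt_ensemble(paraphrases: list[str]) -> tuple[str, float]:
    """Query the model with each paraphrase and return the majority
    answer plus the agreement ratio; low agreement flags a possible
    hallucination that warrants human review."""
    answers = [query_model(p) for p in paraphrases]
    majority, count = Counter(answers).most_common(1)[0]
    return majority, count / len(answers)

answer, agreement = prompt_ensemble([
    "What is the standard isoniazid dose for adults?",
    "State the usual adult dosing of isoniazid.",
    "How much isoniazid is given daily to an adult patient?",
])
print(answer, round(agreement, 2))
```

In this toy run, two of three paraphrases agree, so the ensemble returns the majority answer with an agreement ratio of about 0.67; in practice a threshold on that ratio would decide when to defer to a clinician.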


Subject(s)
Artificial Intelligence , Bias , Delivery of Health Care , Philippines