Results 1 - 4 of 4
1.
Diagnostics (Basel); 14(17), 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39272697

ABSTRACT

The integration of artificial intelligence (AI) into medical diagnostics represents a significant advance in the management of upper gastrointestinal (GI) cancer, a major cause of global cancer mortality. In gastric cancer (GC) specifically, chronic inflammation drives mucosal changes such as atrophy, intestinal metaplasia (IM), and dysplasia, and ultimately cancer. Early detection through regular endoscopic surveillance is essential for better outcomes. Foundation models (FMs), machine or deep learning models trained on diverse data and applicable to broad use cases, offer a promising way to improve the accuracy of endoscopy and the subsequent analysis of pathology images. This review explores recent advances, applications, and challenges associated with FMs in endoscopy and pathology imaging. We begin by elucidating the core principles and architectures underlying these models, including their training methodologies and the pivotal role of large-scale data in developing their predictive capabilities. We then discuss emerging trends and future research directions, emphasizing the integration of multimodal data, the development of more robust and equitable models, and the potential for real-time diagnostic support. The review aims to provide a roadmap for researchers and practitioners navigating the complexities of incorporating FMs into clinical practice for the prevention and management of GC, thereby improving patient outcomes.
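As a concrete illustration of the adaptation workflow this review describes, the sketch below fine-tunes a pretrained vision backbone on an endoscopic image classification task. It is a minimal sketch under stated assumptions: the backbone (a torchvision ResNet-50 standing in for a true foundation model), the four lesion classes, and the training setup are all illustrative placeholders, not anything specified in the paper.

```python
# Minimal sketch: adapt a pretrained vision backbone to an endoscopy
# classification task by training only a new task-specific head.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # e.g. normal, atrophy, IM, dysplasia (hypothetical labels)

# Backbone pretrained on large-scale natural-image data.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pretrained features; only the new head will be trained.
for p in backbone.parameters():
    p.requires_grad = False
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised fine-tuning step on a batch of endoscopy frames."""
    logits = backbone(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```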

2.
Sci Rep; 14(1): 14798, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926427

ABSTRACT

Muscle ultrasound has been shown to be a valid and safe imaging modality for assessing muscle wasting in critically ill patients in the intensive care unit (ICU). This typically involves manual delineation to measure the rectus femoris cross-sectional area (RFCSA), a subjective, time-consuming, and laborious task that requires significant expertise. We aimed to develop and evaluate an AI tool that performs automated recognition and measurement of the RFCSA to support non-expert operators. Twenty patients were recruited between Feb 2023 and Jul 2023 and were randomized sequentially to operators using the AI tool (n = 10) or not (n = 10). Muscle loss during the ICU stay was similar for both methods: 26 ± 15% with AI and 23 ± 11% without (p = 0.13). In total, 59 ultrasound examinations were carried out (30 without AI and 29 with AI). When assisted by the AI tool, operators showed less variability between measurements, with higher intraclass correlation coefficients (ICC 0.999 [95% CI 0.998-0.999] vs. 0.982 [95% CI 0.962-0.993]) and narrower Bland-Altman limits of agreement (±1.9% vs. ±6.6%) than without it. The time spent on scans fell significantly from a median of 19.6 min (IQR 16.9-21.7) without the AI tool to 9.4 min (IQR 7.2-11.7) with it (p < 0.001). AI-assisted muscle ultrasound removes the need for manual tracing, increases reproducibility, and saves time. This system may aid in monitoring muscle size in ICU patients, supporting rehabilitation programmes.
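The two agreement statistics quoted above can be reproduced with a few lines of NumPy. The sketch below computes Bland-Altman 95% limits of agreement and a two-way random-effects ICC(2,1); the paired measurements are hypothetical placeholder values, and the exact ICC variant used in the study may differ.

```python
import numpy as np

def bland_altman_limits(a: np.ndarray, b: np.ndarray) -> tuple[float, float]:
    """Return (bias, half-width of the 95% limits of agreement)."""
    diff = a - b
    return diff.mean(), 1.96 * diff.std(ddof=1)

def icc_2_1(x: np.ndarray) -> float:
    """ICC(2,1) for an (n subjects x k raters) matrix, via two-way ANOVA."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((x - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical paired RFCSA measurements (cm^2) from two repeated scans.
scan1 = np.array([8.2, 7.9, 6.5, 9.1, 7.4])
scan2 = np.array([8.0, 8.1, 6.4, 9.0, 7.6])
bias, loa = bland_altman_limits(scan1, scan2)
print(f"bias={bias:.2f}, 95% LoA=±{loa:.2f}")
print(f"ICC(2,1)={icc_2_1(np.column_stack([scan1, scan2])):.3f}")
```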


Subject(s)
Critical Illness, Intensive Care Units, Muscular Atrophy, Ultrasonography, Humans, Male, Ultrasonography/methods, Female, Middle Aged, Aged, Muscular Atrophy/diagnostic imaging, Muscle, Skeletal/diagnostic imaging, Quadriceps Muscle/diagnostic imaging, Artificial Intelligence, Adult
3.
Crit Care; 27(1): 257, 2023 Jul 1.
Article in English | MEDLINE | ID: mdl-37393330

ABSTRACT

BACKGROUND: Interpreting point-of-care lung ultrasound (LUS) images from intensive care unit (ICU) patients can be challenging, especially in low- and middle-income countries (LMICs), where training opportunities are limited. Despite recent advances in the use of artificial intelligence (AI) to automate many ultrasound image analysis tasks, no AI-enabled LUS solution has been proven clinically useful in ICUs, particularly in LMICs. We therefore developed an AI solution that assists LUS practitioners and assessed its usefulness in a low-resource ICU. METHODS: This was a three-phase prospective study. In the first phase, we assessed the performance of four clinical user groups in interpreting LUS clips. In the second phase, we assessed the performance of 57 non-expert clinicians, with and without the aid of a bespoke AI tool for LUS interpretation, on retrospective offline clips. In the third phase, we conducted a prospective study in the ICU in which 14 clinicians carried out LUS examinations on 7 patients with and without the AI tool, and we interviewed the clinicians about the tool's usability. RESULTS: The average interpretation accuracy was 68.7% [95% CI 66.8-70.7%] for beginners, 72.2% [95% CI 70.0-75.6%] for intermediate users, and 73.4% [95% CI 62.2-87.8%] for advanced users. Experts averaged 95.0% [95% CI 88.2-100.0%], significantly better than the beginner, intermediate, and advanced groups (p < 0.001). When supported by the AI tool on retrospectively acquired clips, the non-expert clinicians improved from an average of 68.9% [95% CI 65.6-73.9%] to 82.9% [95% CI 79.1-86.7%] (p < 0.001). In prospective real-time testing, non-expert clinicians using the AI tool improved from a baseline of 68.1% [95% CI 57.9-78.2%] to 93.4% [95% CI 89.0-97.8%] (p < 0.001). The median time to interpret a clip fell from 12.1 s (IQR 8.5-20.6) to 5.0 s (IQR 3.5-8.8) (p < 0.001), and clinicians' median confidence level rose from 3 to 4 on a 4-point scale when using the AI tool. CONCLUSIONS: AI-assisted LUS can help non-expert clinicians in an LMIC ICU interpret LUS features more accurately, more quickly, and more confidently.
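For readers who want to reproduce intervals like those reported above, the sketch below computes a 95% Wilson score interval for interpretation accuracy treated as a binomial proportion. The clip counts are hypothetical, and the study may have used a different interval method (for example, bootstrapping across clinicians), so this is only an illustration.

```python
import math

def wilson_ci(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion correct/total."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

# Hypothetical counts: 93 correctly interpreted clips out of 135.
lo, hi = wilson_ci(correct=93, total=135)
print(f"accuracy 68.9% -> 95% CI [{lo:.1%}, {hi:.1%}]")
```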


Subject(s)
Artificial Intelligence, Intensive Care Units, Humans, Prospective Studies, Retrospective Studies, Ultrasonography
4.
IEEE Trans Haptics; 9(3): 376-86, 2016.
Article in English | MEDLINE | ID: mdl-27101615

ABSTRACT

Sensory augmentation operates by synthesizing new information and displaying it through an existing sensory channel. It can be used to help people with impaired sensing, or to assist in tasks where sensory information is limited or sparse, for example when navigating in a low-visibility environment. This paper presents the design of a second-generation head-mounted vibrotactile interface, a sensory augmentation prototype designed to present navigation commands that are intuitive and informative while minimizing information overload. We describe an experiment in a structured environment in which the user navigates along a virtual wall whilst the position and orientation of the user's head are tracked in real time by a motion capture system. Navigation commands in the form of vibrotactile feedback are presented according to the user's distance from the virtual wall and their head orientation. We tested the four possible combinations of two command presentation modes (continuous, discrete) and two command types (recurring, single), and evaluated the effectiveness of this 'tactile language' by the users' walking speed and the smoothness of their trajectory parallel to the virtual wall. Results showed that recurring continuous commands allowed users to navigate with the lowest route deviation and the highest walking speed. In addition, subjects preferred recurring continuous commands over the other command types.
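To make the 2 x 2 design concrete, here is a minimal sketch of how a tracked sample might be mapped onto a vibrotactile command, with presentation mode (continuous vs. discrete) and command type (recurring vs. single) as parameters. The thresholds, tactor layout, and steering logic are invented for illustration and are not the paper's specification.

```python
from __future__ import annotations
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    CONTINUOUS = "continuous"  # intensity scales with the tracking error
    DISCRETE = "discrete"      # fixed-intensity pulse

@dataclass
class Command:
    motor: str        # which head-mounted tactor to drive (hypothetical layout)
    intensity: float  # 0.0 .. 1.0
    recurring: bool   # re-issue until corrected, or fire once

def navigation_command(dist_error_m: float, yaw_error_deg: float,
                       mode: Mode, recurring: bool) -> Command | None:
    """Map one motion-capture sample to a vibrotactile cue (illustrative)."""
    if abs(dist_error_m) < 0.1 and abs(yaw_error_deg) < 5.0:
        return None  # on course: stay silent to avoid information overload
    if abs(yaw_error_deg) >= 5.0:
        # Heading error dominates: cue a turn via the side tactors.
        motor = "left" if yaw_error_deg > 0 else "right"
        magnitude = min(abs(yaw_error_deg) / 45.0, 1.0)
    else:
        # Distance-to-wall error: cue a lateral correction.
        motor = "front" if dist_error_m > 0 else "back"
        magnitude = min(abs(dist_error_m) / 0.5, 1.0)
    intensity = magnitude if mode is Mode.CONTINUOUS else 1.0
    return Command(motor, intensity, recurring)
```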


Subject(s)
Pattern Recognition, Physiological/physiology, Sensory Aids, Touch/physiology, Data Display, Feedback, Humans, Language, User-Computer Interface