1.
Sci Rep ; 14(1): 14798, 2024 06 26.
Article in English | MEDLINE | ID: mdl-38926427

ABSTRACT

Muscle ultrasound has been shown to be a valid and safe imaging modality for assessing muscle wasting in critically ill patients in the intensive care unit (ICU). This typically involves manual delineation to measure the rectus femoris cross-sectional area (RFCSA), a subjective, time-consuming, and laborious task that requires significant expertise. We aimed to develop and evaluate an AI tool that performs automated recognition and measurement of RFCSA on muscle ultrasound, to support non-expert operators. Twenty patients were recruited between February 2023 and July 2023 and randomized sequentially to operators using AI (n = 10) or not using AI (n = 10). Muscle loss during the ICU stay was similar for both methods: 26 ± 15% with AI and 23 ± 11% without AI (p = 0.13). In total, 59 ultrasound examinations were carried out (30 without AI and 29 with AI). When assisted by the AI tool, operators showed less variability between measurements, with higher intraclass correlation coefficients (ICC 0.999, 95% CI 0.998-0.999, vs. 0.982, 95% CI 0.962-0.993) and narrower Bland-Altman limits of agreement (± 1.9% vs. ± 6.6%) than without it. The median time spent per scan fell from 19.6 min (IQR 16.9-21.7) without AI to 9.4 min (IQR 7.2-11.7) with AI (p < 0.001). AI-assisted muscle ultrasound removes the need for manual tracing, increases reproducibility, and saves time. This system may aid monitoring of muscle size in ICU patients and support rehabilitation programmes.
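The reproducibility statistics above can be illustrated with a minimal sketch of the Bland-Altman limits of agreement for paired measurements. The data and function name below are illustrative assumptions, not the study's actual measurements or code.

```python
# Minimal sketch: Bland-Altman limits of agreement for paired RFCSA
# measurements (e.g. two repeated tracings of the same patient).
# Illustrative data only, not the study's.
import statistics

def bland_altman_limits(a, b):
    """Return (bias, lower_loa, upper_loa) for paired measurements a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)           # mean difference between methods
    sd = statistics.stdev(diffs)            # SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative repeated RFCSA tracings (cm^2)
m1 = [4.1, 3.8, 5.2, 4.7, 3.9, 4.4]
m2 = [4.0, 3.9, 5.1, 4.8, 3.8, 4.5]
bias, lo, hi = bland_altman_limits(m1, m2)
print(f"bias={bias:.3f} cm^2, limits of agreement ({lo:.3f}, {hi:.3f})")
```

Narrower limits of agreement, as reported for the AI-assisted operators, indicate that repeated measurements deviate less from each other.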


Subject(s)
Critical Illness , Intensive Care Units , Muscular Atrophy , Ultrasonography , Humans , Male , Ultrasonography/methods , Female , Middle Aged , Aged , Muscular Atrophy/diagnostic imaging , Muscle, Skeletal/diagnostic imaging , Quadriceps Muscle/diagnostic imaging , Artificial Intelligence , Adult
2.
Crit Care ; 27(1): 257, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37393330

ABSTRACT

BACKGROUND: Interpreting point-of-care lung ultrasound (LUS) images from intensive care unit (ICU) patients can be challenging, especially in low- and middle-income countries (LMICs) where training is limited. Despite recent advances in the use of Artificial Intelligence (AI) to automate many ultrasound image analysis tasks, no AI-enabled LUS solutions have been proven clinically useful in ICUs, and specifically in LMICs. We therefore developed an AI solution that assists LUS practitioners and assessed its usefulness in a low-resource ICU. METHODS: This was a three-phase prospective study. In the first phase, the performance of four different clinical user groups in interpreting LUS clips was assessed. In the second phase, the performance of 57 non-expert clinicians, with and without the aid of a bespoke AI tool for LUS interpretation, was assessed on retrospectively acquired offline clips. In the third phase, we conducted a prospective study in the ICU in which 14 clinicians carried out LUS examinations on 7 patients with and without our AI tool, and we interviewed the clinicians about the tool's usability. RESULTS: The average accuracy of LUS interpretation was 68.7% [95% CI 66.8-70.7%] for beginners, 72.2% [95% CI 70.0-75.6%] for intermediate users, and 73.4% [95% CI 62.2-87.8%] for advanced users. Experts had an average accuracy of 95.0% [95% CI 88.2-100.0%], significantly better than beginners, intermediate, and advanced users (p < 0.001). When supported by our AI tool for interpreting retrospectively acquired clips, the non-expert clinicians improved their performance from an average of 68.9% [95% CI 65.6-73.9%] to 82.9% [95% CI 79.1-86.7%] (p < 0.001). In prospective real-time testing with the AI tool, non-expert clinicians improved their baseline performance from 68.1% [95% CI 57.9-78.2%] to 93.4% [95% CI 89.0-97.8%] (p < 0.001). The median time to interpret a clip fell from 12.1 s (IQR 8.5-20.6) to 5.0 s (IQR 3.5-8.8) (p < 0.001), and clinicians' median confidence level improved from 3 out of 4 to 4 out of 4 when using our AI tool. CONCLUSIONS: AI-assisted LUS can help non-expert clinicians in an LMIC ICU interpret LUS features more accurately, more quickly, and more confidently.
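The accuracy figures above are reported as proportions with 95% confidence intervals. A minimal sketch of one standard way to compute such an interval (the Wilson score interval) is shown below; the counts are illustrative assumptions, and the abstract does not state which interval method the authors used.

```python
# Minimal sketch: Wilson 95% score interval for an accuracy proportion,
# the kind of interval reported for LUS interpretation accuracy.
# Counts are illustrative only.
import math

def wilson_ci(correct, total, z=1.96):
    """Return (lower, upper) bounds of the Wilson score interval."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return centre - half, centre + half

lo, hi = wilson_ci(83, 100)  # e.g. 83 of 100 clips interpreted correctly
print(f"accuracy 83.0%, 95% CI {lo:.1%}-{hi:.1%}")
```

Unlike the naive normal-approximation interval, the Wilson interval stays within [0, 1] even for proportions near 100%, such as the experts' 95.0% accuracy.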


Subject(s)
Artificial Intelligence , Intensive Care Units , Humans , Prospective Studies , Retrospective Studies , Ultrasonography
3.
IEEE Trans Haptics ; 9(3): 376-86, 2016.
Article in English | MEDLINE | ID: mdl-27101615

ABSTRACT

Sensory augmentation operates by synthesizing new information and displaying it through an existing sensory channel. It can be used to help people with impaired sensing or to assist in tasks where sensory information is limited or sparse, for example when navigating in a low-visibility environment. This paper presents the design of a second-generation head-mounted vibrotactile interface, a sensory augmentation prototype designed to present navigation commands that are intuitive and informative while minimizing information overload. We describe an experiment in a structured environment in which the user navigates along a virtual wall while the position and orientation of the user's head are tracked in real time by a motion capture system. Navigation commands in the form of vibrotactile feedback are presented according to the user's distance from the virtual wall and their head orientation. We test the four possible combinations of two command presentation modes (continuous, discrete) and two command types (recurring, single). We evaluated the effectiveness of this 'tactile language' according to the users' walking speed and the smoothness of their trajectory parallel to the virtual wall. Results showed that recurring continuous commands allowed users to navigate with the lowest route deviation and the highest walking speed. In addition, subjects preferred recurring continuous commands over the other command combinations.
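The distinction between continuous and discrete command presentation can be sketched as code. The mapping below is a hypothetical illustration: the threshold, scaling distance, and function names are assumptions for clarity, not the paper's actual parameters or implementation.

```python
# Illustrative sketch of the two command-presentation modes: a continuous
# command scales vibration intensity with the distance error from the
# virtual wall; a discrete command fires a fixed correction cue only once
# a threshold is crossed. All numeric parameters are assumed, not the
# paper's.

def continuous_command(distance_error_m):
    """Vibration intensity in [0, 1], proportional to wall-distance error."""
    return max(0.0, min(1.0, abs(distance_error_m) / 0.5))

def discrete_command(distance_error_m, threshold_m=0.2):
    """Fixed-strength directional cue once the error exceeds a threshold."""
    if distance_error_m > threshold_m:
        return "move_left"
    if distance_error_m < -threshold_m:
        return "move_right"
    return None  # within tolerance: no cue

print(continuous_command(0.25))
print(discrete_command(0.3))
```

Under this sketch, a recurring continuous command would re-evaluate `continuous_command` at every control tick, which matches the paper's finding that constant proportional feedback yielded the smoothest trajectories.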


Subject(s)
Pattern Recognition, Physiological/physiology , Sensory Aids , Touch/physiology , Data Display , Feedback , Humans , Language , User-Computer Interface