Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38863654

ABSTRACT

Tracheal intubation is a crucial airway-management procedure performed to sustain life during a variety of interventions. Difficult airways, however, make intubation challenging and are associated with increased mortality and morbidity; this is particularly important in children, for whom difficult intubations carry greater risk. Improved airway management reduces repeated attempts, hypoxic injuries, and hospital stays, resulting in better clinical outcomes and lower costs. Currently, 3D-printed models based on CT scans and ultrasound-guided intubation are being used or evaluated for device fitting and procedure guidance to increase the success rate of intubation, but both have limitations. Maintaining a 3D printing facility can be logistically inconvenient, time-consuming, and expensive, while ultrasound guidance is limited by operator dependence, two-dimensional visualization, and potential artifacts. In this study, we developed an augmented reality (AR) system that overlays intubation tools and the internal airway, providing real-time guidance during the procedure. A child manikin was used to develop and test the AR system. Three-dimensional CT images were acquired from the manikin, different tissues were segmented to generate 3D models, and the models were imported into Unity to build the holograms. Phantom experiments demonstrated the AR-guided system's potential for tracheal intubation guidance.
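
The pipeline described in this abstract (CT acquisition, tissue segmentation, 3D model generation, import into Unity) can be illustrated with a brief sketch. The snippet below is not the authors' code: the Hounsfield-unit threshold, voxel spacing, and output file name are assumptions, and it covers only the segmentation-to-mesh step that would precede importing a model into Unity.

```python
# Hedged sketch: threshold-segment airway voxels from a CT volume and export a
# surface mesh (Wavefront OBJ) that a tool such as Unity could import.
# The HU threshold, spacing, and file names below are illustrative only.
import numpy as np
from skimage import measure

def airway_mask(ct_hu: np.ndarray, air_threshold: float = -400.0) -> np.ndarray:
    """Label voxels darker than the threshold as air/airway (a very rough proxy)."""
    return ct_hu < air_threshold

def export_obj(mask: np.ndarray, spacing=(1.0, 1.0, 1.0), path="airway.obj") -> None:
    """Run marching cubes on a binary mask and write the mesh as an OBJ file."""
    verts, faces, _, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=spacing
    )
    with open(path, "w") as f:
        for v in verts:
            f.write(f"v {v[0]} {v[1]} {v[2]}\n")
        for tri in faces:
            # OBJ vertex indices are 1-based.
            f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")

if __name__ == "__main__":
    # Synthetic stand-in for a CT volume of the manikin (values in HU).
    ct = np.full((64, 64, 64), 40.0)      # soft tissue
    ct[28:36, 28:36, :] = -1000.0         # a column of "air" as a toy trachea
    export_obj(airway_mask(ct), spacing=(0.5, 0.5, 0.5))
```

The exported OBJ can then be brought into a Unity scene and registered to the patient or manikin for the AR overlay step described in the abstract.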

2.
Article in English | MEDLINE | ID: mdl-38827465

ABSTRACT

The recently released Segment Anything Model (SAM) has become a popular image-processing tool owing to its segmentation accuracy, variety of input prompts, training capabilities, and efficient model design. Its current weights, however, were trained on a diverse dataset not tailored to medical images, and in particular not to ultrasound images, which are noisy and make important structures difficult to segment. In this project, we developed ClickSAM, which fine-tunes the Segment Anything Model using click prompts for ultrasound images. ClickSAM is trained in two stages: the first stage uses single-click prompts centered in the ground-truth contours, and the second stage improves performance with additional positive and negative click prompts. By comparing the first stage's predictions to the ground-truth masks, true-positive, false-positive, and false-negative segments are identified. Positive clicks are generated from the true-positive and false-negative segments, and negative clicks from the false-positive segments. The Centroidal Voronoi Tessellation algorithm is then employed to place the positive and negative click prompts within each segment, and these prompts drive the second stage of training. With these click-based training methods, ClickSAM outperforms existing models for ultrasound image segmentation.
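
A minimal sketch of the click-generation step described above, not the authors' implementation: it compares a predicted mask with the ground truth, derives true-positive, false-negative, and false-positive regions, and places click points with a few Lloyd iterations, which approximate a centroidal Voronoi tessellation of each region. The function names and the number of clicks per region are assumptions made for illustration.

```python
# Hedged sketch of second-stage click generation: positive clicks come from
# true-positive and false-negative regions, negative clicks from false-positive
# regions, with click locations chosen as approximate CVT generators.
import numpy as np

def cvt_points(coords: np.ndarray, k: int, iters: int = 20, seed: int = 0) -> np.ndarray:
    """Approximate CVT generators for a set of (row, col) pixels via Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    k = min(k, len(coords))
    centers = coords[rng.choice(len(coords), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each pixel to its nearest generator, then move generators to centroids.
        dists = np.linalg.norm(coords[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = coords[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return np.rint(centers).astype(int)

def generate_clicks(pred: np.ndarray, gt: np.ndarray, clicks_per_region: int = 3):
    """Given boolean prediction and ground-truth masks, return (positive, negative) clicks."""
    tp = pred & gt        # correctly segmented foreground -> positive clicks
    fn = ~pred & gt       # missed foreground             -> positive clicks
    fp = pred & ~gt       # spurious foreground           -> negative clicks
    positives, negatives = [], []
    for region, bucket in ((tp, positives), (fn, positives), (fp, negatives)):
        ys, xs = np.nonzero(region)
        if len(ys):
            bucket.append(cvt_points(np.stack([ys, xs], axis=1), clicks_per_region))
    cat = lambda parts: np.concatenate(parts) if parts else np.empty((0, 2), dtype=int)
    return cat(positives), cat(negatives)
```

Spreading clicks as CVT generators rather than sampling them uniformly keeps the prompts well separated within each segment, which is the stated motivation for using the tessellation in the second training stage.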
