1.
Ultrasonics; 134: 107096, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37392616

ABSTRACT

B-mode images degrade in the boundary region because of the limited number of elements in the ultrasound probe. Herein, a deep learning-based extended aperture image reconstruction method is proposed to reconstruct a B-mode image with an enhanced boundary region. The proposed network reconstructs an image from pre-beamformed raw data received by the half aperture of the probe. To generate a high-quality training target without degradation in the boundary region, the target data were acquired using the full aperture. Training data were acquired from an experimental study using a tissue-mimicking phantom, a vascular phantom, and a simulation of random point scatterers. Compared with plane-wave images from delay-and-sum beamforming, the proposed extended aperture image reconstruction method improves the boundary region in terms of multi-scale structural similarity and peak signal-to-noise ratio by 8% and 4.10 dB in the resolution evaluation phantom, 7% and 3.15 dB in the contrast speckle phantom, and 5% and 3 dB in an in vivo study of carotid artery imaging. These findings demonstrate the feasibility of a deep learning-based extended aperture image reconstruction method for boundary region improvement.
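
A minimal sketch of the boundary-region PSNR evaluation reported above, assuming the half-aperture reconstruction and the full-aperture target are available as NumPy arrays normalized to [0, 1]; the function names and the fixed lateral margin are illustrative, not taken from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

def boundary_region(image: np.ndarray, margin: int = 32) -> np.ndarray:
    """Keep only the outermost lateral columns, where aperture-limited degradation appears."""
    return np.concatenate([image[:, :margin], image[:, -margin:]], axis=1)

# Hypothetical usage: compare a reconstructed B-mode image against the full-aperture target.
target = np.random.rand(512, 256)                              # stand-in for a full-aperture image
reconstruction = target + 0.02 * np.random.randn(512, 256)     # stand-in for the network output
print(f"Boundary-region PSNR: {psnr(boundary_region(target), boundary_region(reconstruction)):.2f} dB")
```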


Subjects
Algorithms; Image Processing, Computer-Assisted; Ultrasonography/methods; Image Processing, Computer-Assisted/methods; Phantoms, Imaging; Signal-to-Noise Ratio; Computer Simulation
2.
Phys Med Biol; 68(7), 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36881926

ABSTRACT

Objective. Vascular wall motion can be used to diagnose cardiovascular diseases. In this study, long short-term memory (LSTM) neural networks were used to track vascular wall motion in plane-wave-based ultrasound imaging. Approach. The proposed LSTM and convolutional LSTM (ConvLSTM) models were trained using ultrasound data from simulations and tested experimentally using a tissue-mimicking vascular phantom and an in vivo study of a carotid artery. The performance of the models in the simulation was evaluated using the mean square error of axial and lateral motions and compared with the cross-correlation (XCorr) method. Statistical analysis was performed using the Bland-Altman plot, Pearson correlation coefficient, and linear regression in comparison with the manually annotated ground truth. Main results. For the in vivo data, the median error and 95% limit of agreement from the Bland-Altman analysis were (0.01, 0.13), (0.02, 0.19), and (0.03, 0.18), the Pearson correlation coefficients were 0.97, 0.94, and 0.94, and the linear equations from linear regression were 0.89x + 0.02, 0.84x + 0.03, and 0.88x + 0.03 for the ConvLSTM model, LSTM model, and XCorr method, respectively. In the longitudinal and transverse views of the carotid artery, the LSTM-based models outperformed the XCorr method. Overall, the ConvLSTM model was superior to both the LSTM model and the XCorr method. Significance. This study demonstrated that vascular wall motion can be tracked accurately and precisely using plane-wave-based ultrasound imaging and the proposed LSTM-based models.
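
A minimal sketch of the agreement analysis described above, assuming the tracked wall displacements and the manually annotated ground truth are 1-D NumPy arrays in consistent units; variable names and the synthetic traces are illustrative, not from the study's code.

```python
import numpy as np

def agreement_stats(predicted: np.ndarray, ground_truth: np.ndarray):
    """Bland-Altman style summary plus Pearson correlation and a linear fit."""
    diff = predicted - ground_truth
    median_error = np.median(diff)
    # Half-width of the 95% limits of agreement: 1.96 x SD of the differences.
    loa = 1.96 * np.std(diff, ddof=1)
    pearson_r = np.corrcoef(predicted, ground_truth)[0, 1]
    slope, intercept = np.polyfit(ground_truth, predicted, deg=1)
    return median_error, loa, pearson_r, slope, intercept

# Hypothetical usage with synthetic displacement traces.
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, 4 * np.pi, 200))                  # synthetic wall motion
tracked = 0.9 * truth + 0.02 + 0.05 * rng.standard_normal(200)  # synthetic tracker output
m, loa, r, a, b = agreement_stats(tracked, truth)
print(f"median error {m:.3f}, 95% LoA +/-{loa:.3f}, r = {r:.2f}, fit: {a:.2f}x + {b:.2f}")
```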


Subjects
Memory, Short-Term; Tomography, X-Ray Computed; Ultrasonography; Motion (Physics); Neural Networks, Computer
3.
Cancers (Basel); 14(20), 2022 Oct 18.
Article in English | MEDLINE | ID: mdl-36291895

ABSTRACT

Endoscopic ultrasonography (EUS) plays an important role in diagnosing pancreatic cancer. Surgical therapy is critical to pancreatic cancer survival and can be planned properly once the characteristics of the target cancer are determined. Physical characteristics of the cancer, such as size, location, and shape, can be determined by semantic segmentation of EUS images. This study proposes a deep learning approach for the segmentation of pancreatic cancer in EUS images. EUS images were acquired from 150 patients diagnosed with pancreatic cancer. A network with deep attention features (DAF-Net) is proposed for pancreatic cancer segmentation using EUS images. The performance of the deep learning models (U-Net, Attention U-Net, and DAF-Net) was evaluated by 5-fold cross-validation. The evaluation metrics were the Dice similarity coefficient (DSC), intersection over union (IoU), receiver operating characteristic (ROC) curve, and area under the curve (AUC). Statistical analysis was performed for different stages and locations of the cancer. DAF-Net demonstrated superior segmentation performance, with DSC, IoU, AUC, sensitivity, specificity, and precision of 82.8%, 72.3%, 92.7%, 89.0%, 98.1%, and 85.1%, respectively. The proposed deep learning approach provides accurate segmentation of pancreatic cancer in EUS images and can effectively assist in the planning of surgical therapy.
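
A minimal sketch of the overlap metrics used above (Dice similarity coefficient and intersection over union), assuming binary segmentation masks as NumPy arrays; this illustrates only the evaluation step, not DAF-Net itself.

```python
import numpy as np

def dice_and_iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-8):
    """Return (DSC, IoU) for two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dsc = (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return dsc, iou

# Hypothetical usage with toy square masks.
pred = np.zeros((64, 64), dtype=bool); pred[16:48, 16:48] = True
true = np.zeros((64, 64), dtype=bool); true[20:52, 20:52] = True
print("DSC = %.3f, IoU = %.3f" % dice_and_iou(pred, true))
```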

4.
Comput Med Imaging Graph; 98: 102073, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35561639

ABSTRACT

An image reconstruction method that can simultaneously provide high image quality and a high frame rate is necessary for diagnosis in cardiovascular imaging but is challenging for plane-wave ultrasound imaging. To overcome this challenge, an end-to-end ultrasound image reconstruction method is proposed for reconstructing a high-resolution B-mode image from radio frequency (RF) data. A modified U-Net architecture that adopts EfficientNet-B5 as the encoder and U-Net as the decoder is proposed as a deep learning beamformer. The training data comprise pairs of pre-beamformed RF data generated from random scatterers with random amplitudes and corresponding high-resolution target data generated from coherent plane-wave compounding (CPWC). To evaluate the performance of the proposed beamforming model, simulation and experimental data are used for various beamformers, including delay-and-sum (DAS), CPWC, and other deep learning beamformers based on U-Net and EfficientNet-B0. Compared with single plane-wave imaging with DAS, the proposed beamforming model reduces the lateral full width at half maximum by 35% for simulation data and 29.6% for experimental data, and improves the contrast-to-noise ratio and peak signal-to-noise ratio by 6.3 and 9.97 dB for simulation data, 2.38 and 3.01 dB for experimental data, and 3.18 and 1.03 dB for in vivo data, respectively. Furthermore, the computational complexity of the proposed beamforming model is a quarter of that of the U-Net beamformer. These results demonstrate that the proposed ultrasound image reconstruction method, which employs a deep learning beamformer trained on RF data from scatterers, can reconstruct a high-resolution image at a high frame rate for single plane-wave ultrasound imaging.
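
A minimal sketch of an EfficientNet-B5 encoder / U-Net decoder pairing of the kind described above, using the third-party segmentation_models_pytorch package as an assumed implementation route; the input channel count, image size, and weight initialization are placeholders and this is not the authors' actual model.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net-style decoder on top of an EfficientNet-B5 encoder, mapping one input channel
# (pre-beamformed RF frame) to one output channel (high-resolution B-mode image).
model = smp.Unet(
    encoder_name="efficientnet-b5",   # encoder backbone named in the abstract
    encoder_weights=None,             # train from scratch on RF data (assumption)
    in_channels=1,
    classes=1,
)

rf_batch = torch.randn(2, 1, 512, 128)   # placeholder batch; spatial dims divisible by 32
with torch.no_grad():
    b_mode = model(rf_batch)             # predicted image, same spatial size as the input
print(b_mode.shape)                      # torch.Size([2, 1, 512, 128])
```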


Subjects
Algorithms; Image Processing, Computer-Assisted; Computer Simulation; Phantoms, Imaging; Signal-to-Noise Ratio; Ultrasonography/methods
5.
Diagnostics (Basel); 11(6), 2021 Jun 08.
Article in English | MEDLINE | ID: mdl-34201066

ABSTRACT

Mucinous cystic neoplasms (MCN) and serous cystic neoplasms (SCN) account for a large portion of solitary pancreatic cystic neoplasms (PCN). In this study, we implemented a convolutional neural network (CNN) model based on ResNet50 to differentiate between MCN and SCN. The training data were collected retrospectively from 59 MCN and 49 SCN patients at two different hospitals. Data augmentation was used to enhance the size and quality of the training datasets. A fine-tuning approach was used, adopting a pre-trained model through transfer learning and retraining selected layers. The network was tested with varying endoscopic ultrasonography (EUS) image sizes and positions to evaluate its performance in differentiation. The proposed network model achieved up to 82.75% accuracy and an area under the curve (AUC) of 0.88 (95% CI: 0.817-0.930). The performance of the implemented deep learning networks in decision-making using only EUS images is comparable to that of traditional manual decision-making using EUS images together with supporting clinical information. Gradient-weighted class activation mapping (Grad-CAM) confirmed that the network model learned the features from the cyst region accurately. This study demonstrates the feasibility of differentiating MCN and SCN using a deep learning network model. Further improvement using more datasets is needed.
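
A minimal sketch of the transfer-learning setup described above, assuming a torchvision ResNet50 pretrained on ImageNet with only the last residual stage and the classifier fine-tuned for the two classes (MCN vs. SCN); the choice of which layers to unfreeze and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet50 backbone.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze everything, then unfreeze the last residual stage for fine-tuning (assumption).
for param in model.parameters():
    param.requires_grad = False
for param in model.layer4.parameters():
    param.requires_grad = True

# Replace the ImageNet classifier with a two-class head (MCN vs. SCN).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the trainable parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

dummy_eus = torch.randn(4, 3, 224, 224)   # placeholder batch of EUS images
logits = model(dummy_eus)
print(logits.shape)                       # torch.Size([4, 2])
```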
