Results 1 - 20 of 46
1.
J Med Imaging (Bellingham) ; 11(4): 044502, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38988991

ABSTRACT

Purpose: Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low-dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT will greatly improve the existing clinical workflow. Most existing methods for lung nodule detection are designed for high-dose CTs (HDCTs), and those methods cannot be directly applied to LDCTs due to domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs. Approach: We developed an algorithm based on the object detection model you only look once (YOLO) to detect lung nodules. The YOLO model was first trained on HDCTs, and the pre-trained weights were used as initial weights when retraining the model on LDCTs using a medical-to-medical transfer learning approach. The dataset for this study was from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients over 3 consecutive years (T1, T2, and T3). HDCTs from about 60 lung cancer patients were obtained from a public dataset. The developed model was evaluated on a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged over 3 years. For comparative analysis, the proposed detection model was also trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and chi-squared test with an alpha value of 0.05 were used for statistical significance testing. Results: The results compare the proposed model developed using HDCT pre-trained weights with the model using COCO pre-trained weights.
The former approach achieved a precision of 0.982 versus 0.93 for detecting cancerous nodules, a specificity of 0.923 versus 0.849 for identifying slices with no cancerous nodules, a recall of 0.87 versus 0.886, and an F1-score of 0.924 versus 0.903. As the nodules progressed, the former approach achieved a precision of 1, a specificity of 0.92, and a sensitivity of 0.930. The statistical analysis in the comparative study yielded a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity. Conclusions: In this study, a semi-automated method was developed to detect lung nodules in LDCTs by using HDCT pre-trained weights as the initial weights and retraining the model. The results were then compared against the same approach with the HDCT pre-trained weights replaced by COCO pre-trained weights. The proposed method may identify early lung nodules during screening programs, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, enable earlier treatment for affected patients, and lower the mortality rate.
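The slice-level metrics reported above (precision, recall, specificity, F1-score) follow directly from confusion-matrix counts; a minimal sketch with illustrative counts, not the study's data:

```python
def detection_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity), specificity, and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, specificity, f1
```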

2.
Med Phys ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38857570

ABSTRACT

BACKGROUND: Three-dimensional (3D) ultrasound (US) imaging has shown promise in non-invasive monitoring of changes in the lateral brain ventricles of neonates suffering from intraventricular hemorrhaging. Due to the poorly defined anatomical boundaries and low signal-to-noise ratio, fully supervised methods for segmentation of the lateral ventricles in 3D US images require a large dataset of images annotated by trained physicians, which is tedious, time-consuming, and expensive. Training fully supervised segmentation methods on a small dataset may lead to overfitting and hence reduce their generalizability. Semi-supervised learning (SSL) methods for 3D US segmentation may be able to address these challenges, but most existing SSL methods have been developed for magnetic resonance or computed tomography (CT) images. PURPOSE: To develop a fast, lightweight, and accurate SSL method, specifically for 3D US images, that will use unlabeled data to improve segmentation performance. METHODS: We propose an SSL framework that leverages the shape-encoding ability of an autoencoder network to enforce complex shape and size constraints on a 3D U-Net segmentation model. The autoencoder created pseudo-labels, based on the 3D U-Net predicted segmentations, that enforce shape constraints. An adversarial discriminator network then determined whether images came from the labeled or unlabeled data distributions. We used 887 3D US images, of which 87 had manually annotated labels and 800 were unlabeled. Training/validation/testing sets of 25/12/50, 25/12/25, and 50/12/25 images were used for model experimentation. The Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and absolute volumetric difference (VD) were used as metrics for comparison with other benchmarks.
The baseline benchmark was the fully supervised vanilla 3D U-Net, while dual task consistency, shape-aware semi-supervised network, correlation-aware mutual learning, and 3D U-Net ensemble models were used as state-of-the-art benchmarks. The Wilcoxon signed-rank test was used to test statistical significance between algorithms for DSC and VD, with a significance threshold of p < 0.05, corrected to p < 0.01 using the Bonferroni correction. The random-access memory (RAM) trace and number of trainable parameters were used to compare computing efficiency between models. RESULTS: Relative to the baseline 3D U-Net model, our shape-encoding SSL method reported a mean DSC improvement of 6.5%, 7.7%, and 4.1% with a 95% confidence interval of 4.2%, 5.7%, and 2.1% using image data splits of 25/12/50, 25/12/25, and 50/12/25, respectively. Our method used only a 1 GB increase in RAM compared to the baseline 3D U-Net and required less than half the RAM and trainable parameters of the 3D U-Net ensemble method. CONCLUSIONS: Based on our extensive literature survey, this is one of the first reported works to propose an SSL method designed for segmenting organs in 3D US images, and specifically one that incorporates unlabeled data for segmenting neonatal cerebral lateral ventricles. When compared to the state-of-the-art SSL and fully supervised learning methods, our method yielded the highest DSC and lowest VD while being computationally efficient.
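The DSC and VD comparison metrics can be computed from binary masks as follows (a minimal numpy sketch; the voxel volume is a user-supplied parameter, and MAD additionally requires surface-distance computation not shown here):

```python
import numpy as np

def dsc(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def volume_difference(pred, truth, voxel_volume_cm3):
    """Absolute volumetric difference in cm^3."""
    return abs(int(pred.sum()) - int(truth.sum())) * voxel_volume_cm3
```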

3.
J Med Imaging (Bellingham) ; 11(3): 037501, 2024 May.
Article in English | MEDLINE | ID: mdl-38737492

ABSTRACT

Purpose: Semantic segmentation of high-resolution histopathology whole slide images (WSIs) is an important fundamental task in various pathology applications. Convolutional neural networks (CNNs) are the state-of-the-art approach for image segmentation. A patch-based CNN approach is often employed because of the large size of WSIs; however, segmentation performance is sensitive to the field-of-view and resolution of the input patches, and balancing the trade-offs is challenging when there are drastic size variations in the segmented structures. We propose a multiresolution semantic segmentation approach, which is capable of addressing the threefold trade-off between field-of-view, computational efficiency, and spatial resolution in histopathology WSIs. Approach: We propose a two-stage multiresolution approach for semantic segmentation of histopathology WSIs of mouse lung tissue and human placenta. In the first stage, we use four different CNNs to extract contextual information from input patches at four different resolutions. In the second stage, we use another CNN to aggregate the information extracted in the first stage and generate the final segmentation masks. Results: The proposed method achieved 95.6%, 92.5%, and 97.1% on our single-class placenta dataset and 97.1%, 87.3%, and 83.3% on our multiclass lung dataset for pixel-wise accuracy, mean Dice similarity coefficient, and mean positive predictive value, respectively. Conclusions: The proposed multiresolution approach demonstrated high accuracy and consistency in the semantic segmentation of biological structures of different sizes in our single-class placenta and multiclass lung histopathology WSI datasets. Our method can potentially be used in the automated analysis of biological structures, facilitating clinical research in histopathology applications.
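The four-resolution input scheme can be pictured as extracting co-centered crops of growing field-of-view and downsampling each to a common grid; a simplified sketch (the paper's exact patch sizes and sampling strategy are assumptions here, and the crops are assumed to stay inside the image):

```python
import numpy as np

def multires_views(img, center, fov=64, levels=4):
    """Crop progressively larger fields of view around `center`, then
    downsample each crop (nearest neighbour) to the same grid size."""
    y, x = center
    views = []
    for k in range(levels):
        half = fov * (2 ** k) // 2
        crop = img[y - half:y + half, x - half:x + half]
        step = 2 ** k                    # coarser sampling at larger FOV
        views.append(crop[::step, ::step])
    return views
```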

4.
J Med Imaging (Bellingham) ; 11(2): 026001, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38435711

ABSTRACT

Purpose: The diagnostic performance of prostate MRI depends on high-quality imaging. Prostate MRI quality is inversely proportional to the amount of rectal gas and distention. Early detection of poor-quality MRI may enable intervention to remove gas or exam rescheduling, saving time. We developed a machine learning-based method to predict the quality of yet-to-be-acquired MRI images solely from the MRI rapid localizer sequence, which can be acquired in a few seconds. Approach: The dataset consists of 213 (147 for training and 64 for testing) prostate sagittal T2-weighted (T2W) MRI localizer images and rectal content, manually labeled by an expert radiologist. Each MRI localizer contains seven two-dimensional (2D) slices of the patient, accompanied by manual segmentations of the rectum for each slice. Cascaded and end-to-end deep learning models were used to predict the quality of yet-to-be-acquired T2W, DWI, and apparent diffusion coefficient (ADC) MRI images. Predictions were compared to quality scores determined by the experts using the area under the receiver operating characteristic curve and intra-class correlation coefficient. Results: In the test set of 64 patients, optimal versus suboptimal exams occurred in 95.3% (61/64) versus 4.7% (3/64) for T2W, 90.6% (58/64) versus 9.4% (6/64) for DWI, and 89.1% (57/64) versus 10.9% (7/64) for ADC. The best performing segmentation model was a 2D U-Net with ResNet-34 encoder and ImageNet weights. The best performing classifier was the radiomics-based classifier. Conclusions: A radiomics-based classifier applied to localizer images achieves accurate diagnosis of subsequent image quality for T2W, DWI, and ADC prostate MRI sequences.

5.
Placenta ; 145: 19-26, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38011757

ABSTRACT

INTRODUCTION: Hypertensive disorders of pregnancy (HDP) and fetal growth restriction (FGR) are common obstetrical complications, often with pathological features of maternal vascular malperfusion (MVM) in the placenta. Currently, clinical placental pathology methods involve a manual visual examination of histology sections, a practice that can be resource-intensive and demonstrates moderate-to-poor inter-pathologist agreement on diagnostic outcomes, dependent on the degree of pathologist sub-specialty training. METHODS: This study aims to apply machine learning (ML) feature extraction methods to classify digital images of placental histopathology specimens, collected from cases of HDP [pregnancy-induced hypertension (PIH), preeclampsia (PE), PE + FGR], normotensive FGR, and healthy pregnancies, according to the presence or absence of MVM lesions. A total of 159 digital images were captured from histological placental specimens, manually scored for MVM lesions (MVM- or MVM+), and used to develop a support vector machine (SVM) classifier model using features extracted from a pre-trained ResNet18. The model was trained with data augmentation and shuffling, and performance was assessed for patch-level and image-level classification through measurements of accuracy, precision, and recall using confusion matrices. RESULTS: The SVM model demonstrated accuracies of 70% and 79% for patch-level and image-level MVM classification, respectively, with the poorest performance observed on images with borderline MVM presence, as determined through post hoc observation. DISCUSSION: The results are promising for the integration of ML methods into the placental histopathological examination process. This study serves as a proof of concept that will lead our group and others to carry ML models further in placental histopathology.
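One plausible way to aggregate patch-level predictions into the image-level classification reported above is a majority vote over patches (the study's exact aggregation rule is not stated in the abstract):

```python
from collections import Counter

def image_level_label(patch_labels):
    """Majority vote over per-patch predictions, e.g. 'MVM+' / 'MVM-'."""
    return Counter(patch_labels).most_common(1)[0][0]
```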


Subjects
Hypertension, Pregnancy-Induced; Pre-Eclampsia; Pregnancy; Female; Humans; Placenta/pathology; Pregnancy Outcome; Retrospective Studies; Pre-Eclampsia/pathology; Hypertension, Pregnancy-Induced/pathology; Fetal Growth Retardation/diagnostic imaging; Fetal Growth Retardation/pathology
6.
J Med Imaging (Bellingham) ; 10(3): 034505, 2023 May.
Article in English | MEDLINE | ID: mdl-37284231

ABSTRACT

Purpose: Non-alcoholic fatty liver disease (NAFLD) is an increasing global health concern, with a prevalence of 25% worldwide. The rising incidence of NAFLD, an asymptomatic condition, reinforces the need for systematic screening strategies in primary care. We present the use of non-expert-acquired point-of-care ultrasound (POCUS) B-mode images for the development of an automated steatosis classification algorithm. Approach: We obtained a Health Insurance Portability and Accountability Act-compliant dataset consisting of 478 patients [body mass index 23.60±3.55, age 40.97±10.61], imaged with POCUS by non-expert health care personnel. A U-Net deep learning (DL) model was used for liver segmentation in the POCUS B-mode images, followed by 224×224 patch extraction of liver parenchyma. Several DL models including VGG-16, ResNet-50, Inception V3, and DenseNet-121 were trained for binary classification of steatosis. All layers of each tested model were unfrozen, and the final layer was replaced with a custom classifier. Majority voting was applied for patient-level results. Results: On a hold-out test set of 81 patients, the final DenseNet-121 model yielded an area under the receiver operating characteristic curve of 90.1%, sensitivity of 95.0%, and specificity of 85.2% for the detection of liver steatosis. Average cross-validation performance of models using patches of liver parenchyma as input outperformed methods using complete B-mode frames. Conclusions: Despite minimal POCUS acquisition training and low-quality B-mode images, it is possible to detect steatosis using DL algorithms. Implementation of this algorithm in POCUS software may offer an accessible, low-cost steatosis screening technology for use by non-expert health care personnel.
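The 224×224 patch extraction of liver parenchyma can be sketched as a sliding window that keeps only patches lying mostly inside the predicted liver mask (the stride and the coverage threshold are assumptions for illustration):

```python
import numpy as np

def extract_patches(image, liver_mask, size=224, stride=224, min_frac=0.9):
    """Collect size x size patches whose area is mostly liver parenchyma."""
    patches = []
    h, w = image.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            # keep the patch only if enough of it lies inside the liver mask
            if liver_mask[y:y + size, x:x + size].mean() >= min_frac:
                patches.append(image[y:y + size, x:x + size])
    return patches
```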

7.
Am J Vet Res ; 84(7)2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37253451

ABSTRACT

OBJECTIVES: To determine the feasibility of machine learning algorithms for the classification of appropriate collimation of the cranial and caudal borders in ventrodorsal and dorsoventral thoracic radiographs. SAMPLES: 900 ventrodorsal and dorsoventral canine and feline thoracic radiographs were retrospectively acquired from the picture archiving and communication system (PACS) of the Ontario Veterinary College. PROCEDURES: Radiographs acquired from April 2020 to May 2021 were labeled by one radiologist in the summer of 2022 as either appropriately or inappropriately collimated for the cranial and caudal borders. A machine learning model was trained to identify the appropriate inclusion of the entire lung field at both the cranial and caudal borders. Both individual models and a combined overall inclusion model were assessed based on the combined results of both the cranial and caudal border assessments. RESULTS: The combined overall inclusion model showed a precision of 91.21% (95% CI [91, 91.4]), accuracy of 83.17% (95% CI [83, 83.4]), and F1 score of 87% (95% CI [86.8, 87.2]) for classification when compared with the radiologist's quality assessment. The model took on average 6 ± 1 seconds to run. CLINICAL RELEVANCE: Deep learning-based methods can classify small animal thoracic radiographs as appropriately or inappropriately collimated. These methods could be deployed in a clinical setting to improve the diagnostic quality of thoracic radiographs in small animal practice.


Subjects
Cat Diseases; Dog Diseases; Cats; Animals; Dogs; Cat Diseases/diagnostic imaging; Retrospective Studies; Dog Diseases/diagnostic imaging; Radiography; Radiography, Thoracic/veterinary; Machine Learning
8.
Med Phys ; 50(10): 6215-6227, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36964964

ABSTRACT

BACKGROUND: Transperineal ultrasound (TPUS) is a valuable imaging tool for evaluating patients with pelvic floor disorders, including pelvic organ prolapse (POP). Currently, measurements of anatomical structures in the mid-sagittal plane of 2D and 3D US volumes are obtained manually, which is time-consuming, has high intra-rater variability, and requires an expert in pelvic floor US interpretation. Manual segmentation and biometric measurement can take 15 min per 2D mid-sagittal image by an expert operator. An automated segmentation method would provide quantitative data relevant to pelvic floor disorders and improve the efficiency and reproducibility of segmentation-based biometric methods. PURPOSE: To develop a fast, reproducible, and automated method of acquiring biometric measurements and organ segmentations from the mid-sagittal plane of female 3D TPUS volumes. METHODS: Our method used a nnU-Net segmentation model to segment the pubic symphysis, urethra, bladder, rectum, rectal ampulla, and anorectal angle in the mid-sagittal plane of female 3D TPUS volumes. We developed an algorithm to extract relevant biometrics from the segmentations. Our dataset included 248 3D TPUS volumes (126 at rest, 122 during Valsalva) from 135 patients. System performance was assessed by comparing the automated results with manual ground truth data using the Dice similarity coefficient (DSC) and average absolute difference (AD). The intra-class correlation coefficient (ICC) and time difference were used to compare the reproducibility and efficiency of the manual and automated methods, respectively. A high ICC, low AD, and reduction in time would indicate an accurate and reliable automated system, making TPUS an efficient alternative for POP assessment. Paired t-tests and non-parametric Wilcoxon signed-rank tests were conducted, with p < 0.05 determining significance.
RESULTS: The nnU-Net segmentation model achieved average DSC values, with p-values (in parentheses) relative to the next-best tested model, of 87.4% (<0.0001), 68.5% (<0.0001), 61.0% (0.1), 54.6% (0.04), 49.2% (<0.0001), and 33.7% (0.02) for the bladder, rectum, urethra, pubic symphysis, anorectal angle, and rectal ampulla, respectively. The average ADs for the bladder neck position, bladder descent, rectal ampulla descent, and retrovesical angle were 3.2 mm, 4.5 mm, 5.3 mm, and 27.3°, respectively. The biometric algorithm had an ICC > 0.80 for the bladder neck position, bladder descent, and rectal ampulla descent when compared to manual measurements, indicating high reproducibility. The proposed algorithms required approximately 1.27 s to analyze one image. The manual ground truths were produced by a single expert operator, and due to the high operator dependency of TPUS image collection, further studies with images collected from multiple operators are needed. CONCLUSIONS: Based on our search in scientific databases (i.e., Web of Science, IEEE Xplore Digital Library, Elsevier ScienceDirect, and PubMed), this is the first reported work on an automated segmentation and biometric measurement system for the mid-sagittal plane of 3D TPUS volumes. The proposed algorithm pipeline improves efficiency (1.27 s compared to 15 min manually) and has high reproducibility (high ICC values) compared to manual TPUS analysis for pelvic floor disorder diagnosis. Further studies are needed to verify this system's viability using multiple TPUS operators and multiple experts for performing manual segmentation and extracting biometrics from the images.
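A biometric such as organ descent can be derived from the segmentations as a vertical centroid offset relative to a bony landmark; a hypothetical sketch, since the paper's exact measurement conventions are not given in the abstract (the landmark choice, e.g. the pubic symphysis, and the centroid-based convention are illustrative assumptions):

```python
import numpy as np

def centroid(mask):
    """(row, column) centroid of a binary mask."""
    ys, xs = np.nonzero(mask)
    return float(ys.mean()), float(xs.mean())

def descent_mm(organ_mask, landmark_mask, pixel_spacing_mm):
    """Vertical offset (mm) of the organ centroid below the landmark centroid."""
    return (centroid(organ_mask)[0] - centroid(landmark_mask)[0]) * pixel_spacing_mm
```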


Subjects
Pelvic Floor Disorders; Pelvic Floor; Humans; Female; Pelvic Floor/diagnostic imaging; Imaging, Three-Dimensional/methods; Reproducibility of Results; Algorithms; Ultrasonography/methods
9.
PLoS One ; 17(7): e0269592, 2022.
Article in English | MEDLINE | ID: mdl-35802680

ABSTRACT

BACKGROUND: Atrial fibrillation (AF) is associated with extracellular matrix (ECM) remodelling and often coexists with myocardial fibrosis (MF); however, the causality of these conditions is not well established. OBJECTIVE: We aim to corroborate AF-to-MF causality by quantifying left atrial (LA) fibrosis in cardiac magnetic resonance (CMR) images after persistent rapid ventricular pacing and subsequent AF, using a canine model and histopathological validation. METHODS: Twelve canines (9 experimental, 3 control) underwent baseline 3D LGE-CMR imaging at 3T, followed by insertion of a pacing device and 5 weeks of rapid ventricular pacing to induce AF (experimental) or no pacing (control). Following the 5 weeks, pacing devices were removed to permit CMR imaging, followed by excision of the hearts and histopathological imaging. LA myocardial segmentation was performed manually at baseline and post-pacing to permit volumetric %MF quantification using the image intensity ratio (IIR) technique, wherein fibrosis was defined as pixels > mean LA myocardium intensity + 2SD. RESULTS: Volumetric %MF increased by an average of 2.11 ± 0.88% post-pacing in 7 of 9 experimental dogs. There was a significant difference between paired %MF measurements from baseline to post-pacing in experimental dogs, but no significant change in control dogs (P = 0.019 and P = 0.5, respectively; Wilcoxon signed-rank tests). The median %MF for paced animals was significantly greater than that of non-paced dogs at the 5-week post-insertion time point (P = 0.009, Mann-Whitney U test). Histopathological imaging yielded an average %MF of 19.42 ± 4.80% (mean ± SD) for paced dogs compared to 1.85% in one control dog. CONCLUSION: Persistent rapid ventricular pacing and subsequent AF lead to an increase in LA fibrosis volumes measured by the IIR technique; however, quantification is limited by inherent image acquisition parameters and observer variability.
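The IIR fibrosis rule described above (pixels > mean LA myocardium intensity + 2SD) reduces to a simple threshold over the segmented myocardium; a numpy sketch:

```python
import numpy as np

def percent_fibrosis(la_myocardium_intensities, k=2.0):
    """%MF: fraction of LA myocardium pixels brighter than mean + k*SD."""
    vals = np.asarray(la_myocardium_intensities, dtype=float)
    threshold = vals.mean() + k * vals.std()
    return 100.0 * np.mean(vals > threshold)
```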


Subjects
Atrial Fibrillation; Cardiomyopathies; Animals; Atrial Fibrillation/diagnostic imaging; Atrial Fibrillation/pathology; Atrial Fibrillation/therapy; Cardiomyopathies/pathology; Contrast Media; Dogs; Fibrosis; Gadolinium; Heart Atria; Magnetic Resonance Imaging/methods
10.
J Med Imaging (Bellingham) ; 9(3): 036001, 2022 May.
Article in English | MEDLINE | ID: mdl-35721309

ABSTRACT

Purpose: Multiparametric magnetic resonance imaging (mp-MRI) is being investigated for kidney cancer because of its better soft tissue contrast. The necessity of manual labels makes the development of supervised kidney segmentation algorithms challenging for each mp-MRI protocol. Here, we developed a transfer learning-based approach to improve kidney segmentation on small datasets of five other mp-MRI sequences. Approach: We proposed a fully automated two-dimensional (2D) attention U-Net model for kidney segmentation on a T1-weighted nephrographic-phase contrast-enhanced (CE) MRI (T1W-NG) dataset (N = 108). The pretrained weights of the T1W-NG kidney segmentation model were transferred to models for five other distinct mp-MRI sequences (T2W, T1W in-phase (T1W-IP), T1W out-of-phase (T1W-OP), T1W precontrast (T1W-PRE), and T1W corticomedullary CE (T1W-CM); N = 50) and fine-tuned by unfreezing the layers. The individual model performances were evaluated with and without transfer learning using fivefold cross-validation, based on the average Dice similarity coefficient (DSC), absolute volume difference, Hausdorff distance (HD), and center-of-mass distance (CD) between algorithm-generated and manually segmented kidneys. Results: The developed 2D attention U-Net model for T1W-NG produced a kidney segmentation DSC of 89.34 ± 5.31%. Compared with randomly initialized weight models, the transfer learning-based models of the five mp-MRI sequences showed an average increase of 2.96% in kidney segmentation DSC (p = 0.001 to 0.006). Specifically, the transfer-learning approach increased average DSC on T2W from 87.19% to 89.90%, T1W-IP from 83.64% to 85.42%, T1W-OP from 79.35% to 83.66%, T1W-PRE from 82.05% to 85.94%, and T1W-CM from 85.65% to 87.64%. Conclusions: We demonstrate that a model pretrained for automated kidney segmentation on one mp-MRI sequence improved automated kidney segmentation on five additional sequences.
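Of the four evaluation metrics, the center-of-mass distance (CD) is the simplest to state; a numpy sketch for binary masks:

```python
import numpy as np

def center_of_mass_distance(pred, truth):
    """Euclidean distance between the centroids of two binary segmentations."""
    c_pred = np.array([ax.mean() for ax in np.nonzero(pred)])
    c_truth = np.array([ax.mean() for ax in np.nonzero(truth)])
    return float(np.linalg.norm(c_pred - c_truth))
```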

11.
IEEE J Biomed Health Inform ; 26(6): 2582-2593, 2022 06.
Article in English | MEDLINE | ID: mdl-35077377

ABSTRACT

While three-dimensional (3D) late gadolinium-enhanced (LGE) magnetic resonance (MR) imaging provides good conspicuity of small myocardial lesions with short acquisition time, it poses a challenge for image analysis, as a large number of axial images must be segmented. We developed a fully automatic convolutional neural network (CNN) called cascaded triplanar autoencoder M-Net (CTAEM-Net) to segment myocardial scar from 3D LGE MRI. Two sub-networks were cascaded to segment the left ventricle (LV) myocardium and then the scar within the pre-segmented LV myocardium. Each sub-network contains three autoencoder M-Nets (AEM-Nets) segmenting the axial, sagittal, and coronal slices of the 3D LGE MR image, with the final segmentation determined by voting. The AEM-Net integrates three features: (1) multi-scale inputs, (2) deep supervision, and (3) multi-tasking. The multi-scale inputs allow consideration of global and local features in segmentation. Deep supervision provides direct supervision to deeper layers and facilitates CNN convergence. Multi-task learning reduces segmentation overfitting by acquiring additional information from autoencoder reconstruction, a task closely related to segmentation. The framework provides an accuracy of 86.43% and 90.18% for LV myocardium and scar segmentation, respectively, which are, to our knowledge, the highest among existing methods. The time required for CTAEM-Net to segment the LV myocardium and the scar was 49.72 ± 9.69 s and 120.25 ± 23.18 s per MR volume, respectively. The accuracy and efficiency afforded by CTAEM-Net will make future large population studies possible. The generalizability of the framework was also demonstrated by its competitive performance on two publicly available datasets of different imaging modalities.
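The per-plane voting step can be sketched as a voxel-wise majority vote over the three AEM-Net outputs (majority voting is an assumption here; the abstract says only "voting"):

```python
import numpy as np

def triplanar_vote(axial, sagittal, coronal):
    """Voxel is foreground if at least two of the three per-plane models agree."""
    votes = axial.astype(int) + sagittal.astype(int) + coronal.astype(int)
    return votes >= 2
```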


Subjects
Gadolinium; Heart Ventricles; Cicatrix/diagnostic imaging; Cicatrix/pathology; Heart Ventricles/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging/methods; Myocardium/pathology
12.
Med Phys ; 49(2): 1034-1046, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34958147

ABSTRACT

BACKGROUND: Intraventricular hemorrhage (IVH) within the cerebral lateral ventricles affects 20-30% of very low birth weight infants (<1500 g). As the ventricles increase in size, the intracranial pressure increases, leading to post-hemorrhagic ventricle dilatation (PHVD), an abnormal enlargement of the head. The most widely used imaging tool for measuring IVH and PHVD is cranial two-dimensional (2D) ultrasound (US). Estimating volumetric changes over time with 2D US is unreliable due to high user variability when locating the same anatomical location at different scanning sessions. Compared to 2D US, three-dimensional (3D) US is more sensitive to volumetric changes in the ventricles and does not suffer from variability in slice acquisition. However, 3D US images require segmentation of the ventricular surface, which is tedious and time-consuming when done manually. PURPOSE: A fast, automated ventricle segmentation method for 3D US would provide quantitative information in a timely manner when monitoring IVH and PHVD in pre-term neonates. To this end, we developed a fast and fully automated deep learning method to segment neonatal cerebral lateral ventricles from 3D US images. METHODS: Our method consists of a 3D U-Net ensemble model composed of three U-Net variants, each highlighting various aspects of the segmentation task such as the shape and boundary of the ventricles. The ensemble is made of a U-Net++, an attention U-Net, and a U-Net with a deep learning-based shape prior, combined using a mean voting strategy. We used a dataset consisting of 190 3D US images, separated into two subsets: one set of 87 images containing both ventricles and one set of 103 images containing only one ventricle (caused by a limited field-of-view during acquisition).
We conducted fivefold cross-validation to evaluate the performance of the models on a larger amount of test data: 165 test images, of which 75 have two ventricles (two-ventricle images) and 90 have one ventricle (one-ventricle images). We compared these results to each stand-alone model and to previous works, including 2D multiplane U-Net and 2D SegNet models. RESULTS: Using fivefold cross-validation, the ensemble method reported a Dice similarity coefficient (DSC) of 0.720 ± 0.074, absolute volumetric difference (VD) of 3.7 ± 4.1 cm3, and mean absolute surface distance (MAD) of 1.14 ± 0.41 mm on the 75 two-ventricle test images. On the 90 one-ventricle test images, the model after cross-validation reported DSC, VD, and MAD values of 0.806 ± 0.111, 3.5 ± 2.9 cm3, and 1.37 ± 1.70 mm, respectively. Compared to alternatives, the proposed ensemble yielded higher segmentation accuracy on both test data sets. Our method required approximately 5 s to segment one image and was substantially faster than the state-of-the-art conventional methods. CONCLUSIONS: Compared to state-of-the-art non-deep learning methods, our deep learning-based method was more efficient in segmenting neonatal cerebral lateral ventricles from 3D US images, with comparable or better DSC, VD, and MAD performance. Our dataset was the largest to date (190 images) for this segmentation problem and the first to include images that show only one lateral cerebral ventricle.
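The mean voting strategy over the three ensemble members can be sketched as averaging the models' foreground probability maps and thresholding the result (the 0.5 cut-off is an assumption):

```python
import numpy as np

def mean_vote(prob_maps, threshold=0.5):
    """Average per-voxel foreground probabilities across members, then threshold."""
    return np.mean(prob_maps, axis=0) >= threshold
```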


Subjects
Cerebral Ventricles; Imaging, Three-Dimensional; Cerebral Ventricles/diagnostic imaging; Heart Ventricles/diagnostic imaging; Humans; Image Processing, Computer-Assisted; Infant, Newborn; Ultrasonography
13.
Med Phys ; 48(11): 6889-6900, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34418108

ABSTRACT

PURPOSE: Accurate detection of transition zone (TZ) prostate cancer (PCa) on magnetic resonance imaging (MRI) remains challenging using clinical subjective assessment due to overlap between PCa and benign prostatic hyperplasia (BPH). The objective of this paper is to describe a deep-learning-based framework for fully automated detection of PCa in the TZ using T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images. METHOD: This was a single-center, IRB-approved, cross-sectional study of men undergoing 3T MRI on two systems. The dataset consisted of 196 patients (103 with and 93 without clinically significant [Grade Group 2 or higher] TZ PCa) to train and test our proposed methodology, with an additional 168 patients with peripheral zone PCa used only for training. We proposed an ensemble of classifiers in which multiple U-Net-based models are designed to predict TZ PCa location on ADC map MR images, with initial automated segmentation of the prostate to guide detection. We compared the accuracy of ADC alone to T2W and combined ADC+T2W MRI as input images, and investigated improvements of ensembles over their constituent models using different methods of introducing diversity into individual models by hyperparameter configuration, loss function, and model architecture. RESULTS: Our developed algorithm reported a sensitivity of 0.829 and precision of 0.617 on 56 test cases, comprising 31 instances of TZ PCa and 25 patients without clinically significant TZ tumors. Patient-wise classification accuracy had an area under the receiver operating characteristic curve (AUROC) of 0.974. Single U-Net models using ADC alone (sensitivity 0.829, precision 0.534) outperformed assessment using T2W (sensitivity 0.086, precision 0.081) and combined ADC+T2W (sensitivity 0.687, precision 0.489).
While the ensemble of U-Nets with varying hyperparameters demonstrated the highest performance, all ensembles improved PCa detection compared to individual models, with sensitivities and precisions close to the collective best of constituent models. CONCLUSION: We describe a deep-learning-based method for fully automated TZ PCa detection using ADC map MR images that outperformed assessment by T2W and ADC+T2W.
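The ensembling step described above can be illustrated with a minimal NumPy sketch: per-pixel tumor probabilities from several models are averaged and the mean map is thresholded into a detection mask. This is an illustration only, not the authors' code; the model outputs and threshold are hypothetical.

```python
import numpy as np

def ensemble_probability_maps(prob_maps, threshold=0.5):
    """Average per-pixel tumor probabilities from several models and
    binarize the mean map into a detection mask."""
    stacked = np.stack(prob_maps, axis=0)   # shape: (n_models, H, W)
    mean_map = stacked.mean(axis=0)         # soft ensemble prediction
    return (mean_map >= threshold).astype(np.uint8)

# Three hypothetical model outputs for a 2x2 ADC patch
m1 = np.array([[0.9, 0.2], [0.1, 0.8]])
m2 = np.array([[0.7, 0.4], [0.2, 0.6]])
m3 = np.array([[0.8, 0.1], [0.3, 0.9]])
mask = ensemble_probability_maps([m1, m2, m3])
```

Averaging before thresholding is what lets the ensemble stay close to the collective best of its constituent models, as the abstract reports.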


Subjects
Magnetic Resonance Imaging , Prostatic Neoplasms , Cross-Sectional Studies , Diffusion Magnetic Resonance Imaging , Humans , Male , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies
14.
J Med Imaging (Bellingham) ; 8(Suppl 1): 017503, 2021 Jan.
Article in English | MEDLINE | ID: mdl-34435075

ABSTRACT

Purpose: The objective of this study is to develop and evaluate a fully automated, deep learning-based method for detection of COVID-19 infection from chest x-ray images. Approach: The proposed model was developed by replacing the final classifier layer in DenseNet201 with a new network consisting of a global average pooling layer, a batch normalization layer, a dense layer with ReLU activation, and a final classification layer. We then performed end-to-end training, starting from the pretrained weights, on all layers. Our model was trained using a total of 8644 images, with 4000 images each for the normal and pneumonia classes and 644 for COVID-19, drawn from a large real-world dataset. The proposed method was evaluated based on accuracy, sensitivity, specificity, ROC curve, and F1-score using a test dataset comprising 1729 images (129 COVID-19, 800 normal, and 800 pneumonia). As a benchmark, we also compared the results of our method with those of seven state-of-the-art pretrained models and with a lightweight CNN architecture designed from scratch. Results: The proposed model based on DenseNet201 achieved an accuracy of 94% in detecting COVID-19 and an overall accuracy of 92.19%. The model achieved an AUC of 0.99 for COVID-19, 0.97 for normal, and 0.97 for pneumonia, and outperformed the alternative models in terms of overall accuracy, sensitivity, and specificity. Conclusions: Our proposed automated diagnostic model yielded an accuracy of 94% in the initial screening of COVID-19 patients and an overall accuracy of 92.19% using chest x-ray images.
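The per-class sensitivity and specificity figures reported above come from a multi-class confusion matrix. A minimal sketch of that computation (the confusion-matrix counts below are hypothetical, chosen only to match the class sizes of the abstract's test set):

```python
import numpy as np

def class_metrics(conf, cls):
    """Sensitivity (recall) and specificity for one class of a
    multi-class confusion matrix (rows = true, cols = predicted)."""
    conf = np.asarray(conf, dtype=float)
    tp = conf[cls, cls]
    fn = conf[cls].sum() - tp          # missed cases of this class
    fp = conf[:, cls].sum() - tp       # other classes predicted as it
    tn = conf.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a 1729-image test set:
# classes ordered COVID-19, normal, pneumonia
conf = [[120,   5,   4],
        [  3, 760,  37],
        [  2,  41, 757]]
sens, spec = class_metrics(conf, 0)    # metrics for the COVID-19 class
```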

15.
J Med Imaging (Bellingham) ; 8(2): 027501, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33681410

ABSTRACT

Purpose: The mean linear intercept (MLI) score is a common metric for quantification of injury in lung histopathology images. Automated estimation of the MLI score is a challenging task because it requires accurate segmentation of different biological components of the lung tissue. Therefore, the most widely used approaches for MLI quantification are based on manual/semi-automated assessment of lung histopathology images, which can be expensive and time-consuming. We describe a fully automated pipeline for MLI estimation, which is capable of producing results comparable to human raters. Approach: We use a convolutional neural network based on the U-Net architecture to segment the diagnostically relevant tissue segments in whole slide images (WSI) of mouse lung tissue. The proposed method extracts multiple field-of-view (FOV) images from the tissue segments and screens the FOV images, rejecting images based on the presence of certain biological structures (i.e., blood vessels and bronchi). We used color slicing and region growing for segmentation of different biological structures in each FOV image. Results: The proposed method was tested on ten WSIs from mice and compared against the scores provided by three human raters. In segmenting the relevant tissue segments, our method obtained a mean accuracy, Dice coefficient, and Hausdorff distance of 98.34%, 98.22%, and 109.68 µm, respectively. Our proposed method yields a mean precision, recall, and F1-score of 93.37%, 83.47%, and 87.87%, respectively, in screening of FOV images. Substantial agreement was found between the proposed method and the manual scores (Fleiss kappa of 0.76). The mean difference between the MLI score calculated by the automated method and the average rater's score was 2.33 ± 4.13 (4.25% ± 5.67%).
Conclusion: The proposed pipeline for automated calculation of the MLI score demonstrates high consistency and accuracy with human raters and can be a potential replacement for manual/semi-automated approaches in the field.
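The core of an MLI estimate is casting test lines across the tissue and measuring the free airspace distance between tissue intercepts. A heavily simplified sketch of that idea on a binary mask (1 = tissue, 0 = airspace) follows; the real pipeline additionally excludes vessels and bronchi and calibrates pixel size, and the mask here is hypothetical:

```python
import numpy as np

def mean_linear_intercept(tissue_mask, pixel_size_um=1.0):
    """Estimate the MLI by casting horizontal test lines across a
    binary mask: the mean length of airspace runs between tissue."""
    runs = []
    for row in np.asarray(tissue_mask):
        # pad with tissue so runs touching the border are closed
        padded = np.concatenate(([1], row, [1]))
        diff = np.diff(padded)
        starts = np.where(diff == -1)[0]   # tissue -> airspace
        ends = np.where(diff == 1)[0]      # airspace -> tissue
        runs.extend(ends - starts)         # airspace run lengths
    return float(np.mean(runs)) * pixel_size_um

mask = np.array([[1, 0, 0, 1, 0, 0, 0, 1],
                 [1, 0, 1, 0, 0, 1, 0, 1]])
mli = mean_linear_intercept(mask)
```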

16.
Med Phys ; 48(1): 215-226, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33131085

ABSTRACT

PURPOSE: T1-mapping cardiac magnetic resonance (CMR) imaging permits noninvasive quantification of myocardial fibrosis (MF); however, manual delineation of myocardial boundaries is time-consuming and introduces user-dependent variability for such measurements. In this study, we compare several automated pipelines for myocardial segmentation of the left ventricle (LV) in native and contrast-enhanced T1-maps using fully convolutional neural networks (CNNs). METHODS: Sixty patients with known MF across three distinct cardiomyopathy states (20 ischemic (ICM), 20 dilated (DCM), and 20 hypertrophic (HCM)) underwent a standard CMR imaging protocol inclusive of cinematic (CINE), late gadolinium enhancement (LGE), and pre/post-contrast T1 imaging. Native and contrast-enhanced T1-mapping was performed using a shortened modified Look-Locker imaging (shMOLLI) technique at the basal, mid-level, and/or apex of the LV. Myocardial segmentations in native and post-contrast T1-maps were performed using three state-of-the-art CNN-based methods: standard U-Net, densely connected neural networks (Dense Nets), and attention networks (Attention Nets) after dividing the dataset using fivefold cross validation. These direct segmentation techniques were compared to an alternative registration-based segmentation method, wherein spatially corresponding CINE images are segmented automatically using U-Net, and a nonrigid registration technique transforms and propagates CINE contours to the myocardial regions of T1-maps. The methodologies were validated in 125 native and 100 contrast-enhanced T1-maps using standard segmentation accuracy metrics. Pearson correlation coefficient r and Bland-Altman analysis were used to compare the computed global T1 values derived by manual, U-Net, and CINE registration methodologies. RESULTS: The U-Net-based method yielded optimal results in myocardial segmentation of native, contrast-enhanced, and CINE images compared to Dense Nets and Attention Nets. 
The direct U-Net-based method outperformed the CINE registration-based method in native T1-maps, yielding Dice similarity coefficient (DSC) of 82.7 ± 12% compared to 81.4 ± 6.9% (P < 0.0001). However, in contrast-enhanced T1-maps, the CINE-registration-based method outperformed direct U-Net segmentation, yielding DSC of 77.0 ± 9.6% vs 74.2 ± 18% across all patient groups (P = 0.0014) and specifically 73.2 ± 7.3% vs 65.5 ± 18% in the ICM patient group. High linear correlation of global T1 values was demonstrated in Pearson analysis of the U-Net-based technique and the CINE-registration technique in both native T1-maps (r = 0.93, P < 0.0001 and r = 0.87, P < 0.0001, respectively) and contrast-enhanced T1-maps (r = 0.93, P < 0.0001 and r = 0.98, P < 0.0001, respectively). CONCLUSIONS: The direct U-Net-based myocardial segmentation technique provided accurate, fully automated segmentations in native and contrast-enhanced T1-maps. Myocardial borders can alternatively be segmented from spatially matched CINE images and applied to T1-maps via deformation and propagation through a modality-independent neighborhood descriptor (MIND). The direct U-Net approach is more efficient in myocardial segmentation of native T1-maps and eliminates cross-technique dependence. However, the CINE-registration-based technique may be more appropriate for contrast-enhanced T1-maps and/or for patients with dense regions of replacement fibrosis, such as those with ICM.
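The Dice similarity coefficient used throughout the comparison above is straightforward to compute from two binary masks. A minimal sketch (the toy masks are hypothetical):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

auto = np.array([[1, 1, 0], [0, 1, 0]])      # automated segmentation
manual = np.array([[1, 1, 0], [0, 0, 1]])    # manual reference
score = dice(auto, manual)
```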


Subjects
Contrast Media , Gadolinium , Heart , Magnetic Resonance Imaging, Cine , Heart/diagnostic imaging , Humans , Magnetic Resonance Imaging , Myocardium , Neural Networks, Computer , Predictive Value of Tests
17.
Eur Radiol ; 30(9): 5183-5190, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32350661

ABSTRACT

OBJECTIVES: To develop a deep learning-based method for automated classification of renal cell carcinoma (RCC) from benign solid renal masses using contrast-enhanced computed tomography (CECT) images. METHODS: This institutional review board-approved retrospective study evaluated CECT in 315 patients with 77 benign (57 oncocytomas and 20 fat-poor angiomyolipomas) and 238 malignant (RCC: 123 clear cell, 69 papillary, and 46 chromophobe subtypes) tumors identified consecutively between 2015 and 2017. We employed a decision fusion-based model to aggregate slice-level predictions determined by a convolutional neural network (CNN) via a majority voting system to evaluate renal masses on CECT. The CNN-based model was trained using 7023 slices with renal masses manually extracted from CECT images of 155 patients, cropped automatically around the kidneys, and augmented artificially. We also examined a fully automated approach for renal mass evaluation on CECT. Moreover, a 3D CNN was trained and tested using the same datasets and the obtained results were compared with those acquired from the slice-wise algorithms. RESULTS: For differentiation of RCC versus benign solid masses, the semi-automated majority voting-based CNN algorithm achieved accuracy, precision, and recall of 83.75%, 89.05%, and 91.73%, respectively, using 160 test cases. The fully automated pipeline yielded accuracy, precision, and recall of 77.36%, 85.92%, and 87.22% on the same test cases, respectively. The 3D CNN reported accuracy, precision, and recall of 79.24%, 90.32%, and 84.21% using 160 test cases, respectively. CONCLUSIONS: A semi-automated majority voting CNN-based methodology enabled accurate classification of RCC from benign neoplasms among solid renal masses on CECT. KEY POINTS: • Our proposed semi-automated majority voting CNN-based algorithm achieved accuracy of 83.75% for the diagnosis of RCC from benign solid renal masses on CECT images.
• A fully automated CNN-based methodology classified solid renal masses with moderate accuracy of 77.36% using the same test images. • Employing a 3D CNN-based methodology yielded slightly lower accuracy for renal mass classification compared with the semi-automated 2D CNN-based algorithm (79.24%).
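The decision-fusion step above aggregates per-slice CNN predictions into one patient-level label by majority vote. A minimal sketch (the per-slice labels are hypothetical, not from the study):

```python
from collections import Counter

def majority_vote(slice_labels):
    """Patient-level class from per-slice CNN predictions via majority
    voting; ties break toward the first label reaching the top count."""
    return Counter(slice_labels).most_common(1)[0][0]

# Hypothetical per-slice predictions for one renal mass
label = majority_vote(["RCC", "benign", "RCC", "RCC", "benign"])
```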


Subjects
Adenoma, Oxyphilic/diagnostic imaging , Angiomyolipoma/diagnostic imaging , Carcinoma, Renal Cell/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Kidney Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Adenoma, Oxyphilic/pathology , Algorithms , Angiomyolipoma/pathology , Carcinoma, Renal Cell/pathology , Contrast Media , Diagnosis, Differential , Humans , Kidney Neoplasms/pathology , Machine Learning , Retrospective Studies
18.
IEEE Trans Med Imaging ; 39(9): 2844-2855, 2020 09.
Article in English | MEDLINE | ID: mdl-32142426

ABSTRACT

Vessel-wall-volume (VWV) is an important three-dimensional ultrasound (3DUS) metric used in the assessment of carotid plaque burden and in monitoring changes in carotid atherosclerosis in response to medical treatment. To generate the VWV measurement, we proposed an approach that combined a voxel-based fully convolutional network (Voxel-FCN) and a continuous max-flow module to automatically segment the carotid media-adventitia boundary (MAB) and lumen-intima boundary (LIB) from 3DUS images. Voxel-FCN includes an encoder consisting of a general 3D CNN and a 3D pyramid pooling module to extract spatial and contextual information, and a decoder using a concatenating module with an attention mechanism to fuse multi-level features extracted by the encoder. A continuous max-flow algorithm is used to refine the coarse segmentation provided by the Voxel-FCN. Using 1007 3DUS images, our approach yielded a Dice similarity coefficient (DSC) of 93.2±3.0% for the MAB in the common carotid artery (CCA) and 91.9±5.0% in the bifurcation, comparing algorithm and expert manual segmentations. We achieved a DSC of 89.5±6.7% and 89.3±6.8% for the LIB in the CCA and the bifurcation, respectively. The mean errors between the algorithm- and manually generated VWVs were 0.2±51.2 mm3 for the CCA and -4.0±98.2 mm3 for the bifurcation. The algorithm segmentation accuracy was comparable to intra-observer manual segmentation, but our approach required less than 1 s, which will not alter the clinical workflow as 10 s is required to image one side of the neck. Therefore, we believe that the proposed method could be used clinically for generating VWV to monitor progression and regression of carotid plaques.
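Once the MAB and LIB are segmented, the VWV is the volume enclosed between the two boundaries. A minimal sketch of that final measurement from binary 3D masks (the toy masks and voxel volume are hypothetical):

```python
import numpy as np

def vessel_wall_volume(mab_mask, lib_mask, voxel_mm3):
    """VWV = (volume inside the MAB) - (volume inside the LIB),
    from binary 3D masks and the scanner's voxel volume in mm^3."""
    wall_voxels = np.count_nonzero(mab_mask) - np.count_nonzero(lib_mask)
    return wall_voxels * voxel_mm3

mab = np.ones((4, 4, 4), dtype=np.uint8)   # toy outer-boundary mask
lib = np.zeros((4, 4, 4), dtype=np.uint8)
lib[1:3, 1:3, 1:3] = 1                     # toy lumen nested inside
vwv = vessel_wall_volume(mab, lib, voxel_mm3=0.1)
```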


Subjects
Carotid Artery Diseases , Plaque, Atherosclerotic , Algorithms , Carotid Arteries/diagnostic imaging , Carotid Artery Diseases/diagnostic imaging , Humans , Imaging, Three-Dimensional , Ultrasonography
19.
Med Phys ; 47(4): 1645-1655, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31955415

ABSTRACT

PURPOSE: Three-dimensional (3D) late gadolinium enhancement magnetic resonance (LGE-MR) imaging enables the quantification of myocardial scar at high resolution with unprecedented volumetric visualization. Automated segmentation of myocardial scar is critical for the potential clinical translation of this technique given the number of tomographic images acquired. METHODS: In this paper, we describe the development of a cascaded multi-planar U-Net (CMPU-Net) to efficiently segment the boundary of the left ventricle (LV) myocardium and scar from 3D LGE-MR images. In this approach, two subnets, each containing three U-Nets, were cascaded to first segment the LV myocardium and then segment the scar within the presegmented LV myocardium. The U-Nets were trained separately using two-dimensional (2D) slices extracted from the axial, sagittal, and coronal directions of 3D LGE-MR images. We used 3D LGE-MR images from 34 subjects with chronic ischemic cardiomyopathy. The U-Nets were trained using 8430 slices, extracted in three orthogonal directions from 18 images. In the testing phase, the outputs of the U-Nets of each subnet were combined using a majority voting system for the final label prediction of each voxel in the image. The developed method was tested for accuracy by comparing its results to manual segmentations of the LV myocardium and LV scar from 7250 slices extracted from 16 3D LGE-MR images. Our method was also compared to numerous alternative methods based on machine learning, energy minimization, and intensity thresholds. RESULTS: Our algorithm reported a mean Dice similarity coefficient (DSC), absolute volume difference (AVD), and Hausdorff distance (HD) of 85.14% ± 3.36%, 43.72 ± 27.18 cm3, and 19.21 ± 4.74 mm for determining the boundaries of the LV myocardium from LGE-MR images. Our method also yielded a mean DSC, AVD, and HD of 88.61% ± 2.54%, 9.33 ± 7.24 cm3, and 17.04 ± 9.93 mm for LV scar segmentation on the unobserved test dataset.
Our method significantly outperformed the alternative techniques in segmentation accuracy (P < 0.05). CONCLUSIONS: The CMPU-Net method provided fully automated segmentation of LV scar from 3D LGE-MR images and outperformed the alternative techniques.
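The per-voxel fusion of the three orthogonal-plane U-Nets can be sketched for the binary (foreground/background) case: a voxel is labeled foreground when at least two of the three networks agree. This is an illustration, not the authors' code, and the toy label arrays are hypothetical:

```python
import numpy as np

def fuse_orthogonal_predictions(axial, sagittal, coronal):
    """Per-voxel majority vote over three binary label volumes:
    foreground where at least two of the three networks agree."""
    votes = axial.astype(int) + sagittal.astype(int) + coronal.astype(int)
    return (votes >= 2).astype(np.uint8)

# Toy 1D "volumes" standing in for the three planes' predictions
ax = np.array([1, 1, 0, 0])
sa = np.array([1, 0, 0, 1])
co = np.array([1, 1, 1, 0])
fused = fuse_orthogonal_predictions(ax, sa, co)
```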


Subjects
Cicatrix/diagnostic imaging , Gadolinium , Heart Ventricles/diagnostic imaging , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Automation , Humans
20.
J Magn Reson Imaging ; 51(4): 1223-1234, 2020 04.
Article in English | MEDLINE | ID: mdl-31456317

ABSTRACT

BACKGROUND: Accurate detection and localization of prostate cancer (PCa) in men undergoing prostate MRI is a fundamental step for future targeted prostate biopsies and treatment planning. Fully automated localization of peripheral zone (PZ) PCa using the apparent diffusion coefficient (ADC) map might be clinically useful. PURPOSE/HYPOTHESIS: To describe automated localization of PCa in the PZ on ADC map MR images using an ensemble U-Net-based model. STUDY TYPE: Retrospective, case-control. POPULATION: In all, 226 patients (154 with and 72 without clinically significant PZ PCa); training and testing were performed using images from 146 and 80 patients, respectively. FIELD STRENGTH: 3T, ADC maps. SEQUENCE: ADC map. ASSESSMENT: The ground truth was established by manual delineation of the prostate and prostate PZ tumors on ADC maps by dedicated radiologists using MRI-radical prostatectomy maps as a reference standard. STATISTICAL TESTS: Performance of the ensemble model was evaluated using Dice similarity coefficient (DSC), sensitivity, and specificity metrics on a per-slice basis. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were employed as well. The paired t-test was used to test the differences between the performances of the constituent networks of the ensemble model. RESULTS: Our developed algorithm yielded DSC, sensitivity, and specificity of 86.72% ± 9.93%, 85.76% ± 23.33%, and 76.44% ± 23.70%, respectively (mean ± standard deviation), on 80 test cases (41 from patients with and 39 from patients without clinically significant tumors) comprising 660 extracted 2D slices. The AUC was 0.779. DATA CONCLUSION: An ensemble U-Net-based approach can accurately detect and segment PCa in the PZ from ADC map MR prostate images. LEVEL OF EVIDENCE: 4 Technical Efficacy: Stage 1 J. Magn. Reson. Imaging 2020;51:1223-1234.
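The AUC reported above can be computed directly from scores and labels with the rank-sum (Mann-Whitney U) formulation: the probability that a randomly chosen positive slice outranks a randomly chosen negative one. A minimal sketch with hypothetical per-slice scores:

```python
import numpy as np

def auroc(labels, scores):
    """AUC via the Mann-Whitney U formulation: fraction of
    positive/negative pairs where the positive scores higher
    (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-slice tumor scores and ground-truth labels
auc = auroc([1, 1, 0, 0, 1], [0.9, 0.7, 0.3, 0.75, 0.8])
```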


Subjects
Diffusion Magnetic Resonance Imaging , Prostatic Neoplasms , Humans , Machine Learning , Magnetic Resonance Imaging , Male , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies