Results 1 - 20 of 24
1.
Sci Rep ; 14(1): 7079, 2024 03 25.
Article in English | MEDLINE | ID: mdl-38528100

ABSTRACT

This observational study investigated the potential of radiomics as a non-invasive adjunct to CT in distinguishing COVID-19 lung nodules from other benign and malignant lung nodules. Lesion segmentation, feature extraction, and machine learning algorithms, including decision tree, support vector machine, random forest, feed-forward neural network, and discriminant analysis, were employed in the radiomics workflow. Key features such as Idmn, skewness, and long-run low grey level emphasis were identified as crucial in differentiation. The model demonstrated an accuracy of 83% in distinguishing COVID-19 from other benign nodules and 88% from malignant nodules. This study concludes that radiomics, through machine learning, serves as a valuable tool for non-invasive discrimination between COVID-19 and other benign and malignant lung nodules. The findings suggest the potential complementary role of radiomics in patients with COVID-19 pneumonia exhibiting lung nodules and suspicion of concurrent lung pathologies. The clinical relevance lies in the utilization of radiomics analysis for feature extraction and classification, contributing to the enhanced differentiation of lung nodules, particularly in the context of COVID-19.
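As a rough illustration of the classification stage described above (not the authors' code), the sketch below trains a cross-validated random forest on a table of already-extracted radiomic features; the file name and column names are hypothetical.

```python
# Minimal sketch of a radiomics-based nodule classifier (not the authors' code).
# Assumes radiomic features (e.g., Idmn, skewness, long-run low grey level
# emphasis) have already been extracted into a CSV with a binary label column.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("radiomic_features.csv")      # hypothetical file name
X = df.drop(columns=["label"])                 # radiomic feature columns
y = df["label"]                                # 1 = COVID-19 nodule, 0 = other

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"Mean CV accuracy: {scores.mean():.2f}")
```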


Subject(s)
COVID-19 , Lung Neoplasms , Multiple Pulmonary Nodules , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Radiomics , COVID-19/diagnostic imaging , Tomography, X-Ray Computed , Retrospective Studies
2.
Sci Rep ; 14(1): 6996, 2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38523137

ABSTRACT

Effective training of deep image segmentation models is challenging due to the need for abundant, high-quality annotations. To facilitate image annotation, we introduce Physics Informed Contour Selection (PICS), an interpretable, physics-informed algorithm for rapid image segmentation without relying on labeled data. PICS draws inspiration from physics-informed neural networks (PINNs) and the active contour model known as the snake. It is fast and computationally lightweight because it employs cubic splines instead of a deep neural network as a basis function. Its training parameters are physically interpretable because they directly represent control knots of the segmentation curve. Traditional snakes involve minimization of edge-based loss functionals by deriving the Euler-Lagrange equation followed by its numerical solution. However, PICS directly minimizes the loss functional, bypassing the Euler-Lagrange equation. It is the first snake variant to minimize a region-based loss function instead of traditional edge-based loss functions. PICS uniquely models the three-dimensional (3D) segmentation process with an unsteady partial differential equation (PDE), which allows accelerated segmentation via transfer learning. To demonstrate its effectiveness, we apply PICS to 3D segmentation of the left ventricle on a publicly available cardiac dataset. We also demonstrate PICS's capacity to encode prior shape information as a loss term by proposing a new convexity-preserving loss term for the left ventricle. Overall, PICS presents several novelties in network architecture, transfer learning, and physics-inspired losses for image segmentation, thereby showing promising outcomes and potential for further refinement.
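A minimal sketch of the core idea, assuming a polar spline parameterization and a Chan-Vese-style region energy (both simplifications of the paper's formulation): the only optimization variables are the control knots of the contour.

```python
# Illustrative snake-like contour driven by a region-based (Chan-Vese style)
# loss, with cubic-spline control knots as the only trainable parameters.
# This is not the PICS implementation; the polar parameterization and the
# optimizer choice are assumptions made for brevity.
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize
from skimage.draw import polygon

def region_loss(radii, img, center, n_dense=200):
    """Chan-Vese energy of the region enclosed by a periodic spline contour."""
    theta_knots = np.linspace(0, 2 * np.pi, len(radii) + 1)
    r_knots = np.append(radii, radii[0])            # close the curve
    spline = CubicSpline(theta_knots, r_knots, bc_type="periodic")
    theta = np.linspace(0, 2 * np.pi, n_dense)
    r = np.clip(spline(theta), 1.0, None)
    rows = center[0] + r * np.sin(theta)
    cols = center[1] + r * np.cos(theta)
    rr, cc = polygon(rows, cols, shape=img.shape)
    mask = np.zeros(img.shape, dtype=bool)
    mask[rr, cc] = True
    c_in, c_out = img[mask].mean(), img[~mask].mean()
    return ((img[mask] - c_in) ** 2).sum() + ((img[~mask] - c_out) ** 2).sum()

# Example: fit a contour to a bright disc on a dark background.
img = np.zeros((64, 64)); yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2] = 1.0
init = np.full(8, 10.0)                             # 8 radial control knots
res = minimize(region_loss, init, args=(img, (32, 32)), method="Powell")
print("fitted radii:", np.round(res.x, 1))
```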

3.
Sci Rep ; 13(1): 19062, 2023 11 04.
Article in English | MEDLINE | ID: mdl-37925565

ABSTRACT

In an observational study conducted from 2016 to 2021, we assessed the utility of radiomics in differentiating between benign and malignant lung nodules detected on computed tomography (CT) scans. Patients in whom a final diagnosis regarding the lung nodules was available according to histopathology and/or 2017 Fleischner Society guidelines were included. The radiomics workflow included lesion segmentation, region of interest (ROI) definition, pre-processing, and feature extraction. Employing random forest feature selection, we identified ten important radiomic features for distinguishing between benign and malignant nodules. Among the classifiers tested, the Decision Tree model demonstrated superior performance, achieving 79% accuracy, 75% sensitivity, 85% specificity, 82% precision, and 90% F1 score. The implementation of the XGBoost algorithm further enhanced these results, yielding 89% accuracy, 89% sensitivity, 89% precision, and an F1 score of 89%, alongside a specificity of 85%. Our findings highlight tumor texture as the primary predictor of malignancy, emphasizing the importance of texture-based features in computational oncology. Thus, our study establishes radiomics as a powerful, non-invasive adjunct to CT scans in the differentiation of lung nodules, with significant implications for clinical decision-making, especially for indeterminate nodules, and the enhancement of diagnostic and predictive accuracy in this clinical context.
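The two-stage recipe (random-forest feature ranking followed by an XGBoost classifier) can be sketched as below; the feature table, column names, and hyperparameters are assumptions, not the study's settings.

```python
# Sketch of a two-stage approach: random-forest feature ranking followed by an
# XGBoost classifier. The feature table and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

df = pd.read_csv("nodule_radiomics.csv")           # hypothetical file
X, y = df.drop(columns=["malignant"]), df["malignant"]

# Rank features with a random forest and keep the ten most important ones.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
top10 = X.columns[rf.feature_importances_.argsort()[::-1][:10]]

xgb = XGBClassifier(n_estimators=300, max_depth=3, eval_metric="logloss")
acc = cross_val_score(xgb, X[top10], y, cv=5, scoring="accuracy").mean()
print(f"5-fold CV accuracy with top-10 features: {acc:.2f}")
```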


Subject(s)
Adenocarcinoma of Lung , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Retrospective Studies , Tomography, X-Ray Computed/methods , Adenocarcinoma of Lung/pathology , Lung/diagnostic imaging , Lung/pathology
4.
Sci Rep ; 12(1): 14855, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050323

ABSTRACT

The rapid progress in image-to-image translation methods using deep neural networks has led to advancements in the generation of synthetic CT (sCT) in the MR-only radiotherapy workflow. Replacing CT with MR reduces unnecessary radiation exposure and financial cost and enables more accurate delineation of organs at risk. Previous generative adversarial networks (GANs) have been oriented towards MR-to-sCT generation. In this work, we have implemented multiple augmented cycle-consistent GANs. The augmentation involves a structural information constraint (StructCGAN), an optical flow consistency constraint (FlowCGAN), and the combination of both conditions (SFCGAN). The networks were trained and tested on the publicly available Gold Atlas project dataset, consisting of T2-weighted MR and CT volumes of 19 subjects from 3 different sites. The network was tested on 8 volumes acquired from the third site with a different scanner to assess the generalizability of the network on multicenter data. The results indicate that all the networks are robust to scanner variations. The best model, SFCGAN, achieved an average ME of 0.9 ± 5.9 HU, an average MAE of 40.4 ± 4.7 HU, and a PSNR of 57.2 ± 1.4 dB, outperforming previous research works. Moreover, the optical flow constraint between consecutive frames preserves consistency across all views compared to 2D image-to-image translation methods. SFCGAN exploits the features of both StructCGAN and FlowCGAN by delivering structurally robust and 3D-consistent sCT images. This work serves as a benchmark for further research in MR-only radiotherapy.
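The flavour of the augmented losses can be sketched as follows; the gradient-based structural term is a stand-in assumption for the paper's structural information constraint, and the generators and discriminators are omitted.

```python
# Illustrative PyTorch sketch of loss terms for a structure-constrained
# CycleGAN in MR-to-sCT synthesis. Not the paper's implementation: the
# finite-difference "structural" term is an assumed stand-in.
import torch
import torch.nn.functional as F

def image_gradients(x):
    """Finite-difference gradients used as a simple structural descriptor."""
    dx = x[..., :, 1:] - x[..., :, :-1]
    dy = x[..., 1:, :] - x[..., :-1, :]
    return dx, dy

def cycle_and_structure_loss(mr, sct, mr_rec, lam_cyc=10.0, lam_struct=1.0):
    cyc = F.l1_loss(mr_rec, mr)                     # MR -> sCT -> MR consistency
    gx_mr, gy_mr = image_gradients(mr)
    gx_ct, gy_ct = image_gradients(sct)
    struct = F.l1_loss(gx_ct, gx_mr) + F.l1_loss(gy_ct, gy_mr)
    return lam_cyc * cyc + lam_struct * struct

mr = torch.rand(2, 1, 64, 64)                       # toy tensors
sct, mr_rec = torch.rand_like(mr), torch.rand_like(mr)
print(cycle_and_structure_loss(mr, sct, mr_rec))
```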


Subject(s)
Image Processing, Computer-Assisted , Optic Flow , Tomography, X-Ray Computed , Humans , Image Processing, Computer-Assisted/economics , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/economics , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Radiotherapy Dosage , Radiotherapy Planning, Computer-Assisted/economics , Radiotherapy Planning, Computer-Assisted/methods , Tomography, X-Ray Computed/economics , Tomography, X-Ray Computed/methods
5.
Med Image Anal ; 80: 102485, 2022 08.
Article in English | MEDLINE | ID: mdl-35679692

ABSTRACT

Examination of pathological images is the gold standard for diagnosing and screening many kinds of cancers. Multiple datasets, benchmarks, and challenges have been released in recent years, resulting in significant improvements in computer-aided diagnosis (CAD) of related diseases. However, few existing works focus on the digestive system. We released two well-annotated benchmark datasets and organized challenges for digestive-system pathological cell detection and tissue segmentation, in conjunction with the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). This paper first introduces the two released datasets, i.e., signet ring cell detection and colonoscopy tissue segmentation, with descriptions of data collection, annotation, and potential uses. We also report the set-up, evaluation metrics, and top-performing methods and results of the two challenge tasks for cell detection and tissue segmentation. In particular, the challenge received 234 effective submissions from 32 participating teams, where top-performing teams developed advanced approaches and tools for the CAD of digestive pathology. To the best of our knowledge, these are the first publicly released datasets with corresponding challenges for digestive-system pathological detection and segmentation. The related datasets and results provide new opportunities for the research and application of digestive pathology.


Subject(s)
Benchmarking , Diagnosis, Computer-Assisted , Colonoscopy , Humans , Image Processing, Computer-Assisted/methods
6.
PLoS One ; 17(2): e0262913, 2022.
Article in English | MEDLINE | ID: mdl-35148354

ABSTRACT

We present the design and characterization of an X-ray imaging system consisting of an off-the-shelf CMOS sensor optically coupled to a CsI scintillator. The camera can perform both high-resolution and functional cardiac imaging. High-resolution 3D imaging requires microfocus X-ray tubes and expensive detectors, while pre-clinical functional cardiac imaging requires high-flux pulsed (clinical) X-ray tubes and high-end cameras. Our work describes an X-ray camera, namely an "optically coupled X-ray (OCX) detector," used for both of the aforementioned applications with no change in the specifications. We constructed the imaging detector with two different CMOS optical imaging cameras (CMOS sensors): (1) a monochrome CMOS sensor coupled with an f/1.4 lens and (2) an RGB CMOS sensor coupled with an f/0.95 prime lens. The imaging system consisted of our X-ray camera, a micro-focus X-ray source (50 kVp and 1 mA), and a rotary stage controlled from a personal computer (PC) and a LabVIEW interface. The detective quantum efficiency (DQE) of the imaging system (monochrome), estimated using a cascaded linear model, was 17% at 10 lp/mm. The system modulation transfer function (MTF) and the noise power spectrum (NPS) were inputs to the DQE estimation. Because of the RGB camera's low quantum efficiency (QE), its OCX detector DQE was 19% at 5 lp/mm. The contrast-to-noise ratio (CNR) at different frame rates was studied using capillary tubes filled with various dilutions of iodinated contrast agents. In-vivo cardiac angiography demonstrated that blood vessels of the order of 100 microns or above were visible at 40 frames per second despite the low X-ray flux. For high-resolution 3D imaging, the system was characterized by imaging a cylindrical micro-CT contrast phantom and comparing it against images from a commercial scanner.
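For context, DQE is commonly derived from the measured MTF and a normalized noise power spectrum as DQE(f) = MTF(f)^2 / (q · NNPS(f)); the sketch below uses placeholder curves and fluence, not the paper's measurements.

```python
# Back-of-the-envelope sketch of computing DQE from a measured MTF and a
# normalized noise power spectrum: DQE(f) = MTF(f)^2 / (q * NNPS(f)).
# All numbers below are placeholders, not values from the paper.
import numpy as np

f = np.linspace(0.1, 10.0, 100)          # spatial frequency, lp/mm
mtf = np.exp(-0.2 * f)                   # placeholder measured MTF
nnps = 1e-5 * (1 + 0.05 * f)             # placeholder normalized NPS, mm^2
q = 2.5e5                                # placeholder incident fluence, photons/mm^2

dqe = mtf ** 2 / (q * nnps)
idx = np.argmin(np.abs(f - 10.0))
print(f"DQE at 10 lp/mm: {dqe[idx]:.3f}")
```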


Subject(s)
Coronary Vessels/diagnostic imaging , X-Ray Microtomography/methods , Animals , Feasibility Studies , Mice , Phantoms, Imaging , Radiographic Image Enhancement , Semiconductors , Signal-To-Noise Ratio , X-Ray Microtomography/instrumentation
7.
Sci Rep ; 11(1): 11579, 2021 06 02.
Article in English | MEDLINE | ID: mdl-34078928

ABSTRACT

Histopathology tissue analysis is considered the gold standard in cancer diagnosis and prognosis. Whole-slide imaging (WSI), i.e., the scanning and digitization of entire histology slides, is now being adopted across the world in pathology labs. Trained histopathologists can provide an accurate diagnosis of biopsy specimens based on WSI data. Given the dimensionality of WSIs and the increase in the number of potential cancer cases, analyzing these images is a time-consuming process. Automated segmentation of tumorous tissue helps improve the precision, speed, and reproducibility of research. In the recent past, deep learning-based techniques have provided state-of-the-art results in a wide variety of image analysis tasks, including the analysis of digitized slides. However, deep learning-based solutions pose many technical challenges, including the large size of WSI data, heterogeneity in images, and complexity of features. In this study, we propose a generalized deep learning-based framework for histopathology tissue analysis to address these challenges. Our framework is, in essence, a sequence of individual techniques in the preprocessing-training-inference pipeline which, in conjunction, improve the efficiency and the generalizability of the analysis. The combination of techniques we have introduced includes an ensemble segmentation model, division of the WSI into smaller overlapping patches while addressing class imbalances, efficient techniques for inference, and an efficient, patch-based uncertainty estimation framework. Our ensemble consists of DenseNet-121, Inception-ResNet-V2, and DeeplabV3Plus, where all the networks were trained end to end for every task. We demonstrate the efficacy and improved generalizability of our framework by evaluating it on a variety of histopathology tasks, including breast cancer metastases (CAMELYON), colon cancer (DigestPath), and liver cancer (PAIP). Our proposed framework achieves state-of-the-art performance across all these tasks and is currently ranked within the top 5 for the challenges based on these datasets. The entire framework, along with the trained models and the related documentation, is made freely available on GitHub and PyPI. Our framework is expected to aid histopathologists in accurate and efficient initial diagnosis. Moreover, the estimated uncertainty maps will help clinicians make informed decisions and guide further treatment planning or analysis.
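A minimal sketch of two of the ingredients mentioned above, ensemble averaging of per-pixel probabilities and an entropy-based uncertainty map; the "models" here are placeholders for trained segmentation networks.

```python
# Sketch of patch-level ensemble inference with an entropy-based uncertainty
# map. Any callable returning a per-pixel foreground probability map can stand
# in for the trained networks; the lambdas below are toy placeholders.
import numpy as np

def ensemble_predict(patch, models):
    """Average per-pixel foreground probabilities over an ensemble."""
    probs = np.stack([m(patch) for m in models], axis=0)   # (M, H, W)
    return probs.mean(axis=0)

def entropy_map(p, eps=1e-7):
    """Binary predictive entropy as a per-pixel uncertainty estimate."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

rng = np.random.default_rng(0)
models = [lambda x, r=rng: r.uniform(size=x.shape) for _ in range(3)]
patch = np.zeros((256, 256))
p = ensemble_predict(patch, models)
print("mean uncertainty:", entropy_map(p).mean())
```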

8.
Front Comput Neurosci ; 15: 651959, 2021.
Article in English | MEDLINE | ID: mdl-33584235

ABSTRACT

[This corrects the article DOI: 10.3389/fncom.2020.00006.].

9.
Med Image Anal ; 67: 101854, 2021 01.
Article in English | MEDLINE | ID: mdl-33091742

ABSTRACT

Pathology Artificial Intelligence Platform (PAIP) is a free research platform in support of pathological artificial intelligence (AI). The main goal of the platform is to construct a high-quality pathology learning data set that will allow greater accessibility. The PAIP Liver Cancer Segmentation Challenge, organized in conjunction with the Medical Image Computing and Computer Assisted Intervention Society (MICCAI 2019), is the first image analysis challenge to apply PAIP datasets. The goal of the challenge was to evaluate new and existing algorithms for automated detection of liver cancer in whole-slide images (WSIs). Additionally, this year's PAIP attempted to address potential future problems of AI applicability in clinical settings. Participants were asked to use analytical data and statistical metrics to evaluate the performance of automated algorithms on two different tasks: Task 1, liver cancer segmentation, and Task 2, viable tumor burden estimation. Performance on the two tasks was strongly correlated, in that teams that performed well on Task 1 also performed well on Task 2. After evaluation, we summarized the algorithms of the top 11 teams and discussed the pathological implications of the images that were easily predicted for cancer segmentation and those that were challenging for viable tumor burden estimation. Of the 231 participants in the PAIP challenge datasets, a total of 64 submissions were received from 28 teams. The submitted algorithms automatically segmented liver cancer in WSIs with a score of 0.78. The PAIP challenge was created in an effort to address the lack of research on liver cancer in digital pathology. It remains unclear how the AI algorithms created during the challenge will affect clinical diagnoses. However, the dataset and evaluation metrics provided have the potential to aid the development and benchmarking of cancer diagnosis and segmentation.


Subject(s)
Artificial Intelligence , Liver Neoplasms , Algorithms , Humans , Image Processing, Computer-Assisted , Liver Neoplasms/diagnostic imaging , Tumor Burden
10.
Article in English | MEDLINE | ID: mdl-32116620

ABSTRACT

The accurate automatic segmentation of gliomas and its intra-tumoral structures is important not only for treatment planning but also for follow-up evaluations. Several methods based on 2D and 3D Deep Neural Networks (DNN) have been developed to segment brain tumors and to classify different categories of tumors from different MRI modalities. However, these networks are often black-box models and do not provide any evidence regarding the process they take to perform this task. Increasing transparency and interpretability of such deep learning techniques is necessary for the complete integration of such methods into medical practice. In this paper, we explore various techniques to explain the functional organization of brain tumor segmentation models and to extract visualizations of internal concepts to understand how these networks achieve highly accurate tumor segmentations. We use the BraTS 2018 dataset to train three different networks with standard architectures and outline similarities and differences in the process that these networks take to segment brain tumors. We show that brain tumor segmentation networks learn certain human-understandable disentangled concepts on a filter level. We also show that they take a top-down or hierarchical approach to localizing the different parts of the tumor. We then extract visualizations of some internal feature maps and also provide a measure of uncertainty with regards to the outputs of the models to give additional qualitative evidence about the predictions of these networks. We believe that the emergence of such human-understandable organization and concepts might aid in the acceptance and integration of such methods in medical diagnosis.
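One basic tool behind this kind of inspection is extracting intermediate feature maps with forward hooks; the toy network below is a stand-in, not a BraTS segmentation model.

```python
# Minimal PyTorch sketch of capturing intermediate feature maps with forward
# hooks, a building block for filter-level visualization of segmentation
# networks. The tiny network here is only a placeholder.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 1),                      # 4 MRI modalities -> 3 classes
)

features = {}
def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()
    return hook

net[0].register_forward_hook(save_output("conv1"))

x = torch.rand(1, 4, 128, 128)                # toy multi-modal input
_ = net(x)
print("conv1 feature maps:", features["conv1"].shape)  # (1, 16, 128, 128)
```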

11.
Front Neurosci ; 14: 27, 2020.
Article in English | MEDLINE | ID: mdl-32153349

ABSTRACT

Biomedical imaging is an important source of information in cancer research. Characterizations of cancer morphology at onset, progression, and in response to treatment provide complementary information to that gleaned from genomics and clinical data. Accurate extraction and classification of both visual and latent image features is an increasingly complex challenge due to the growing complexity and resolution of biomedical image data. In this paper, we present four deep learning-based image analysis methods from the Computational Precision Medicine (CPM) satellite event of the 21st International Medical Image Computing and Computer Assisted Intervention (MICCAI 2018) conference. One method is a segmentation method designed to segment nuclei in whole slide tissue images (WSIs) of adult diffuse glioma cases. It achieved a Dice similarity coefficient of 0.868 with the CPM challenge datasets. Three methods are classification methods developed to categorize adult diffuse glioma cases into oligodendroglioma and astrocytoma classes using radiographic and histologic image data. These methods achieved accuracy values of 0.75, 0.80, and 0.90, measured as the ratio of the number of correct classifications to the number of total cases, with the challenge datasets. The evaluations of the four methods indicate that (1) carefully constructed deep learning algorithms are able to produce high accuracy in the analysis of biomedical image data and (2) the combination of radiographic with histologic image information improves classification performance.

12.
Med Image Anal ; 51: 21-45, 2019 01.
Article in English | MEDLINE | ID: mdl-30390512

ABSTRACT

Deep fully convolutional neural network (FCN) based architectures have shown great potential in medical image segmentation. However, such architectures usually have millions of parameters and an inadequate number of training samples, leading to over-fitting and poor generalization. In this paper, we present a novel DenseNet based FCN architecture for cardiac segmentation which is parameter and memory efficient. We propose a novel up-sampling path which incorporates long skip and short-cut connections to overcome the feature map explosion in conventional FCN based architectures. In order to process the input images at multiple scales and view points simultaneously, we propose to incorporate the Inception module's parallel structures. We propose a novel dual loss function whose weighting scheme combines the advantages of cross-entropy and Dice losses, leading to qualitative improvements in segmentation. We demonstrate the computational efficacy of incorporating conventional computer vision techniques for region of interest detection in an end-to-end deep learning based segmentation framework. From the segmentation maps, we extract clinically relevant cardiac parameters and hand-crafted features that reflect clinical diagnostic analysis, and we train an ensemble system for cardiac disease classification. We validate our proposed network architecture on three publicly available datasets, namely: (i) Automated Cardiac Diagnosis Challenge (ACDC-2017), (ii) Left Ventricular segmentation challenge (LV-2011), and (iii) 2015 Kaggle Data Science Bowl cardiac challenge data. In the ACDC-2017 challenge, our approach took second place in segmentation and first place in automated cardiac disease diagnosis, with an accuracy of 100% on a limited testing set (n=50). In the LV-2011 challenge, our approach attained a 0.74 Jaccard index, which is so far the highest published result among fully automated algorithms. In the Kaggle challenge, our approach for LV volume gave a Continuous Ranked Probability Score (CRPS) of 0.0127, which would have placed us tenth in the original challenge. Our approach combines both cardiac segmentation and disease diagnosis into a fully automated framework which is computationally efficient and hence has the potential to be incorporated in computer-aided diagnosis (CAD) tools for clinical application.
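The dual loss can be sketched as a weighted sum of cross-entropy and a soft Dice term; the fixed 0.5/0.5 weighting below is an assumption, not the paper's weighting scheme.

```python
# Sketch of a weighted cross-entropy + soft Dice segmentation loss.
# The fixed weights are placeholders, not the paper's scheme.
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, eps=1e-6):
    """Soft Dice over the foreground channel of a binary problem."""
    prob = torch.softmax(logits, dim=1)[:, 1]          # (N, H, W)
    tgt = (target == 1).float()
    inter = (prob * tgt).sum(dim=(1, 2))
    union = prob.sum(dim=(1, 2)) + tgt.sum(dim=(1, 2))
    return 1 - ((2 * inter + eps) / (union + eps)).mean()

def dual_loss(logits, target, w_ce=0.5, w_dice=0.5):
    return w_ce * F.cross_entropy(logits, target) + w_dice * soft_dice_loss(logits, target)

logits = torch.randn(2, 2, 64, 64, requires_grad=True)
target = torch.randint(0, 2, (2, 64, 64))
print(dual_loss(logits, target))
```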


Subject(s)
Cardiovascular Diseases/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Neural Networks, Computer , Algorithms , Humans , Reproducibility of Results
13.
IEEE Trans Med Imaging ; 37(11): 2514-2525, 2018 11.
Article in English | MEDLINE | ID: mdl-29994302

ABSTRACT

Delineation of the left ventricular cavity, myocardium, and right ventricle from cardiac magnetic resonance images (multi-slice 2-D cine MRI) is a common clinical task to establish diagnosis. The automation of the corresponding tasks has thus been the subject of intense research over the past decades. In this paper, we introduce the "Automatic Cardiac Diagnosis Challenge" dataset (ACDC), the largest publicly available and fully annotated dataset for the purpose of cardiac MRI (CMR) assessment. The dataset contains data from 150 multi-equipment CMRI recordings with reference measurements and classification from two medical experts. The overarching objective of this paper is to measure how far state-of-the-art deep learning methods can go at assessing CMRI, i.e., segmenting the myocardium and the two ventricles as well as classifying pathologies. In the wake of the 2017 MICCAI-ACDC challenge, we report results from deep learning methods provided by nine research groups for the segmentation task and four groups for the classification task. Results show that the best methods faithfully reproduce the expert analysis, leading to a mean correlation score of 0.97 for the automatic extraction of clinical indices and an accuracy of 0.96 for automatic diagnosis. These results clearly open the door to highly accurate and fully automatic analysis of CMRI. We also identify scenarios for which deep learning methods are still failing. Both the dataset and detailed results are publicly available online, while the platform will remain open for new submissions.


Subject(s)
Cardiac Imaging Techniques/methods , Deep Learning , Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Databases, Factual , Female , Heart Diseases/diagnostic imaging , Humans , Male
14.
J Med Imaging (Bellingham) ; 4(4): 041311, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29285516

ABSTRACT

The work explores the use of denoising autoencoders (DAEs) for brain lesion detection, segmentation, and false-positive reduction. Stacked denoising autoencoders (SDAEs) were pretrained using a large number of unlabeled patient volumes and fine-tuned with patches drawn from a limited number of patients (n = 20, 40, 65). The results show negligible loss in performance even when the SDAE was fine-tuned using 20 labeled patients. Low grade glioma (LGG) segmentation was achieved using a transfer learning approach in which a network pretrained with high grade glioma data was fine-tuned using LGG image patches. The networks were also shown to generalize well and provide good segmentation on unseen BraTS 2013 and BraTS 2015 test data. The manuscript also explores the use of a single-layer DAE, referred to as a novelty detector (ND). The ND was trained to accurately reconstruct nonlesion patches. The reconstruction error maps of test data were used to localize lesions. The error maps were shown to assign unique error distributions to various constituents of the glioma, enabling localization. The ND learns the nonlesion brain accurately, as it was also shown to provide good segmentation performance on ischemic brain lesions in images from a different database.
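The novelty-detector idea, training an autoencoder on normal patches and localizing lesions through its reconstruction error, can be sketched as follows with a toy architecture and random stand-in data.

```python
# Toy sketch of the novelty-detector idea: an autoencoder trained only on
# "normal" patches flags anomalies through its reconstruction error map.
# Architecture and data are placeholders, not the paper's network.
import torch
import torch.nn as nn

ae = nn.Sequential(                                   # tiny patch autoencoder
    nn.Flatten(), nn.Linear(21 * 21, 64), nn.ReLU(),
    nn.Linear(64, 21 * 21), nn.Unflatten(1, (1, 21, 21)),
)
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)

normal_patches = torch.rand(256, 1, 21, 21)           # stand-in "non-lesion" data
for _ in range(50):                                   # brief training loop
    opt.zero_grad()
    loss = ((ae(normal_patches) - normal_patches) ** 2).mean()
    loss.backward()
    opt.step()

test_patch = torch.rand(1, 1, 21, 21)
error_map = (ae(test_patch) - test_patch).abs().squeeze()
print("mean reconstruction error:", error_map.mean().item())
```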

15.
BMC Med Imaging ; 17(1): 25, 2017 04 05.
Article in English | MEDLINE | ID: mdl-28381245

ABSTRACT

BACKGROUND: Tagged Magnetic Resonance (tMR) imaging is a powerful technique for determining cardiovascular abnormalities. One of the reasons for tMR not being used in routine clinical practice is the lack of easy-to-use tools for image analysis and strain mapping. In this paper, we introduce a novel interdisciplinary method based on correlation image velocimetry (CIV) to estimate cardiac deformation and strain maps from tMR images. METHODS: CIV, a cross-correlation based pattern matching algorithm, analyses a pair of images to obtain the displacement field at sub-pixel accuracy with any desired spatial resolution. This first-time application of CIV to tMR image analysis is implemented using an existing open source Matlab-based software called UVMAT. The method, which requires two main input parameters, namely the correlation box size (C_B) and the search box size (S_B), is first validated using a synthetic grid image with grid sizes representative of typical tMR images. Phantom and patient images obtained from a Medical Imaging grand challenge dataset ( http://stacom.cardiacatlas.org/motion-tracking-challenge/ ) were then analysed to obtain cardiac displacement fields and strain maps. The results were then compared with estimates from the Harmonic Phase analysis (HARP) technique. RESULTS: For a known displacement field imposed on both the synthetic grid image and the phantom image, CIV is accurate for 3-pixel and larger displacements on a 512 × 512 image with (C_B, S_B) = (25, 55) pixels. Further validation of our method is achieved by showing that our estimated landmark positions on patient images fall within the inter-observer variability in the ground truth. The effectiveness of our approach to analyse patient images is then established by calculating dense displacement fields throughout a cardiac cycle, which were found to be physiologically consistent. Circumferential strains were estimated at the apical, mid and basal slices of the heart, and were shown to compare favorably with those of HARP over the entire cardiac cycle, except in a few (∼4) of the segments in the 17-segment AHA model. The radial strains, however, are underestimated by our method in most segments when compared with HARP. CONCLUSIONS: In summary, we have demonstrated the capability of CIV to accurately and efficiently quantify cardiac deformation from tMR images. Furthermore, physiologically consistent displacement fields and circumferential strain curves in most regions of the heart indicate that our approach, upon automating some pre-processing steps and testing in clinical trials, can potentially be implemented in a clinical setting.
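The block-matching principle behind CIV can be sketched with off-the-shelf normalized cross-correlation; the box sizes echo those reported above, but the synthetic frames and single-box example are illustrative only.

```python
# Illustrative block-matching sketch of the cross-correlation idea behind CIV:
# for one correlation box in frame 1, the best-matching location inside a
# larger search box of frame 2 gives the local displacement. The frames and
# the imposed shift are synthetic, not the paper's data.
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
frame1 = rng.random((128, 128))
frame2 = np.roll(frame1, shift=(3, 2), axis=(0, 1))   # known 3-down, 2-right shift

cb, sb = 25, 55                                        # correlation / search box sizes
r0, c0 = 50, 50                                        # top-left of correlation box
template = frame1[r0:r0 + cb, c0:c0 + cb]
half = (sb - cb) // 2
search = frame2[r0 - half:r0 + cb + half, c0 - half:c0 + cb + half]

ncc = match_template(search, template)                 # normalized cross-correlation
dr, dc = np.unravel_index(ncc.argmax(), ncc.shape)
print("estimated displacement:", (dr - half, dc - half))   # expect approx (3, 2)
```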


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Myocardium/pathology , Algorithms , Humans , Image Interpretation, Computer-Assisted/methods , Phantoms, Imaging , Rheology
16.
Neuroimage ; 148: 77-102, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28087490

ABSTRACT

In conjunction with the ISBI 2015 conference, we organized a longitudinal lesion segmentation challenge providing training and test data to registered participants. The training data consisted of five subjects with a mean of 4.4 time-points, and the test data of fourteen subjects with a mean of 4.4 time-points. All 82 data sets had the white matter lesions associated with multiple sclerosis delineated by two human expert raters. Eleven teams submitted results using state-of-the-art lesion segmentation algorithms to the challenge, with ten teams presenting their results at the conference. We present a quantitative evaluation comparing the consistency of the two raters as well as exploring the performance of the eleven submitted results in addition to three other lesion segmentation algorithms. The challenge presented three unique opportunities: (1) the sharing of a rich data set; (2) collaboration and comparison of the various avenues of research being pursued in the community; and (3) a review and refinement of the evaluation metrics currently in use. We report on the performance of the challenge participants, as well as the construction and evaluation of a consensus delineation. The image data and manual delineations will continue to be available for download through an evaluation website as a resource for future researchers in the area. This data resource provides a platform to compare existing methods in a fair and consistent manner to each other and to multiple manual raters.


Subject(s)
Multiple Sclerosis/diagnostic imaging , Adult , Algorithms , Female , Humans , Image Processing, Computer-Assisted , Imaging, Three-Dimensional , Longitudinal Studies , Magnetic Resonance Imaging , Male , Middle Aged , Observer Variation , White Matter/diagnostic imaging
17.
J Digit Imaging ; 27(4): 514-9, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24639063

ABSTRACT

Stereology is a volume estimation method used to obtain quantitative measurements of certain structures or organs (Nyengaard, J Am Soc Nephrol 10:1100-1123, 1999; Michel and Cruz-Orive, J Microsc 150:117-136, 1988); it is typically applied to diagnostic imaging examinations in population studies where planimetry is too time-consuming (Chapman et al., Kidney Int 64:1035-1045, 2003). However, true segmentation is required in order to perform advanced analysis of the tissues. This paper describes a novel method for segmentation of regions of interest using stereology data as prior information. The result is an efficient segmentation method for structures that cannot be easily segmented using other methods.
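For context, the stereological (Cavalieri-style) point-counting estimate that serves as the prior can be sketched as below; the grid spacing, slice thickness, and counts are made-up numbers.

```python
# Sketch of a Cavalieri-style point-counting volume estimate of the kind
# stereology provides as prior information. All numbers are placeholders.
def cavalieri_volume(points_per_slice, grid_spacing_mm, slice_thickness_mm):
    """Volume ~ (points hit) * (area represented by each grid point) * slice thickness."""
    area_per_point = grid_spacing_mm ** 2
    return sum(points_per_slice) * area_per_point * slice_thickness_mm

counts = [12, 18, 22, 19, 9]          # grid points falling on the organ, per slice
print(f"Estimated volume: {cavalieri_volume(counts, 5.0, 10.0):.0f} mm^3")
```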


Subject(s)
Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging/methods , Polycystic Kidney Diseases/pathology , Algorithms , Humans , Kidney/pathology , Organ Size
18.
Technol Cancer Res Treat ; 12(6): 545-57, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23745787

ABSTRACT

In this work, we have proposed an on-line computer-aided diagnostic system called "UroImage" that classifies a Transrectal Ultrasound (TRUS) image as cancerous or non-cancerous with the help of non-linear Higher Order Spectra (HOS) features and Discrete Wavelet Transform (DWT) coefficients. The on-line UroImage system extracts five significant features (one DWT-based feature and four HOS-based features) from the test image. These on-line features are transformed by the classifier parameters obtained using the training dataset to determine the class. We trained and tested six classifiers. The dataset used for evaluation had 144 TRUS images, which were split into training and testing sets. Three-fold and ten-fold cross-validation protocols were adopted for training and estimating the accuracy of the classifiers. The ground truth used for training was obtained from the biopsy results. Among the six classifiers, using the 10-fold cross-validation technique, the Support Vector Machine and Fuzzy Sugeno classifiers presented the best classification accuracy of 97.9%, with equally high values for sensitivity, specificity, and positive predictive value. Our proposed automated system, which achieved values above 95% for all the performance measures, can be an adjunct tool to provide an initial diagnosis for the identification of patients with prostate cancer. The technique, however, is limited by the limitations of 2D ultrasound-guided biopsy, and we intend to improve our technique by using 3D TRUS images in the future.
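A rough sketch of such a pipeline, using wavelet-energy features and an RBF-kernel SVM with 10-fold cross-validation on toy data; the HOS features are omitted and everything shown is a stand-in for the actual system.

```python
# Sketch of a wavelet-feature + SVM pipeline with 10-fold cross-validation.
# The HOS features are omitted, and the images and labels are random stand-ins.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def dwt_energy_features(img):
    """Mean absolute value of each level-1 2-D DWT subband."""
    cA, (cH, cV, cD) = pywt.dwt2(img, "db1")
    return [np.abs(c).mean() for c in (cA, cH, cV, cD)]

rng = np.random.default_rng(0)
images = rng.random((144, 64, 64))               # toy stand-ins for TRUS images
labels = rng.integers(0, 2, size=144)            # toy cancerous / non-cancerous labels

X = np.array([dwt_energy_features(im) for im in images])
acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=10).mean()
print(f"10-fold CV accuracy (toy data): {acc:.2f}")
```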


Subject(s)
Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Adult , Aged , Aged, 80 and over , Algorithms , Humans , Image Interpretation, Computer-Assisted , Male , Middle Aged , Prostate/pathology , Rectum/diagnostic imaging , Retrospective Studies , Sensitivity and Specificity , Support Vector Machine , Ultrasonography
19.
AJR Am J Roentgenol ; 200(6): W621-7, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23701093

ABSTRACT

OBJECTIVE: The purpose of this study is to evaluate radiation dose reduction strategies in perfusion CT by using a biologic phantom. MATERIALS AND METHODS: A formalin-preserved porcine liver was submerged in a 32-cm-wide acrylic phantom filled with water. The portal vein was connected to a continuous flow pump. The phantom was scanned with a perfusion CT protocol using 80 kVp and 400 mAs, every 1 second, for 50 seconds. This was repeated using 100 and 20 mAs. It was also repeated again using 400 mAs to assess reproducibility. A sparser scan frequency was simulated retrospectively. Blood flow was determined for each dataset using the maximum slope and deconvolution methods. RESULTS: Measurements of the mean blood flow values in identical regions of interest had a percent difference of 7% for repeated perfusion CT protocols using the same settings, regardless of the perfusion model used. The 100 mAs scans agreed with the 400 mAs scans, with percent differences of 21% and 31% for the maximum slope and deconvolution methods, respectively. At a simulated frequency of one scan every 4 seconds, blood flow values differed up to 17% and 60% from the reference scan for the maximum slope and deconvolution methods, respectively. At 20 mAs and one scan every 1 second, or 400 mAs and a simulated frequency of one scan every 8 seconds, both computation methods failed to provide accurate blood flow estimates. CONCLUSION: The biologic phantom showed reproducible measurements that can help in optimizing perfusion CT protocols by determining both the acquisition parameters that affect radiation dose and the accuracy of estimates from different perfusion models.
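For reference, the maximum slope method estimates blood flow as the peak slope of the tissue enhancement curve divided by the peak arterial enhancement; the curves below are synthetic placeholders, not measurements from this study.

```python
# Sketch of a maximum-slope perfusion estimate: blood flow ~ (peak slope of the
# tissue enhancement curve) / (peak arterial enhancement). Curves are synthetic.
import numpy as np

t = np.arange(0, 50, 1.0)                                  # one scan per second
arterial = 300 * np.exp(-((t - 15) ** 2) / 40)             # toy arterial input, HU
tissue = 30 / (1 + np.exp(-(t - 20) / 3))                  # toy tissue curve, HU

max_slope = np.max(np.gradient(tissue, t))                 # HU per second
bf = max_slope / arterial.max()                            # per second
print(f"Blood flow estimate: {bf * 6000:.1f} mL/min/100 mL")  # x6000 converts 1/s
```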


Subject(s)
Liver/diagnostic imaging , Radiation Dosage , Tomography, X-Ray Computed/methods , Animals , Contrast Media/administration & dosage , Equipment Design , Liver/blood supply , Phantoms, Imaging , Reproducibility of Results , Software , Swine
20.
Comput Methods Programs Biomed ; 110(1): 66-75, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23122720

ABSTRACT

Characterization of carotid atherosclerosis and classification into either symptomatic or asymptomatic is crucial in terms of diagnosis and treatment planning for a range of cardiovascular diseases. This paper presents a computer-aided diagnosis (CAD) system (Atheromatic) that analyzes ultrasound images and classifies them as symptomatic or asymptomatic. The classification result is based on a combination of discrete wavelet transform, higher order spectra (HOS), and textural features. In this study, we compare support vector machine (SVM) classifiers with different kernels. The classifier with a radial basis function (RBF) kernel achieved an average accuracy of 91.7%, as well as a sensitivity of 97% and a specificity of 80%. Thus, it is evident that the selected features and the classifier combination can efficiently categorize plaques into symptomatic and asymptomatic classes. Moreover, a novel symptomatic asymptomatic carotid index (SACI), an integrated index based on the significant features, has been proposed in this work. Each analyzed ultrasound image yields one SACI number. A high SACI value indicates symptomatic plaques, whereas a low value indicates asymptomatic plaques. We hope the SACI can support vascular surgeons during routine screening for asymptomatic plaques.


Subject(s)
Diagnosis, Computer-Assisted/methods , Plaque, Atherosclerotic/diagnostic imaging , Adult , Aged , Aged, 80 and over , Algorithms , Carotid Stenosis/classification , Carotid Stenosis/diagnosis , Carotid Stenosis/diagnostic imaging , Female , Humans , Image Interpretation, Computer-Assisted , Male , Middle Aged , Plaque, Atherosclerotic/classification , Plaque, Atherosclerotic/diagnosis , Support Vector Machine , Ultrasonography , Wavelet Analysis