Results 1 - 20 of 35
1.
Comput Methods Programs Biomed ; 250: 108158, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38604010

ABSTRACT

BACKGROUND AND OBJECTIVE: In radiotherapy treatment planning, respiration-induced motion introduces uncertainty that, if not appropriately considered, could result in dose delivery problems. 4D cone-beam computed tomography (4D-CBCT) has been developed to provide imaging guidance by reconstructing a pseudo-motion sequence of CBCT volumes through binning projection data into breathing phases. However, it suffers from artefacts and erroneously characterizes the averaged breathing motion. Furthermore, conventional 4D-CBCT can only be generated post hoc from the full sequence of kV projections after the treatment is complete, limiting its utility. Hence, our purpose is to develop a deep-learning motion model for estimating 3D+t CT images from treatment kV projection series. METHODS: We propose an end-to-end learning-based 3D motion modelling and 4DCT reconstruction model named 4D-Precise, abbreviated from Probabilistic reconstruction of image sequences from CBCT kV projections. The model estimates voxel-wise motion fields and simultaneously reconstructs a 3DCT volume at any arbitrary time point of the input projections by transforming a reference CT volume. A purpose-built Torch-DRR module enables end-to-end training by computing Digitally Reconstructed Radiographs (DRRs) in PyTorch. During training, DRRs with projection angles matching the input kVs are automatically extracted from the reconstructed volumes, and their structural dissimilarity to the inputs is penalised. We introduce a novel loss function to regulate spatio-temporal motion field variations across the CT scan, leveraging the planning 4DCT for prior motion distribution estimation. RESULTS: The model is trained patient-specifically using three kV scan series, each including over 1200 angular/temporal projections, and tested on three other scan series. Imaging data from five patients are analysed here.
Also, the model is validated on a simulated paired 4DCT-DRR dataset created using Surrogate Parametrised Respiratory Motion Modelling (SuPReMo). The results demonstrate that the volumes reconstructed by 4D-Precise closely resemble the ground-truth volumes in terms of Dice, volume similarity, mean contour distance, and Hausdorff distance, while 4D-Precise achieves smoother deformations and fewer negative Jacobian determinants compared to SuPReMo. CONCLUSIONS: Unlike conventional 4DCT reconstruction techniques that ignore inter-cycle breathing motion variations, the proposed model computes both intra-cycle and inter-cycle motions. It represents motion over an extended timeframe, covering several minutes of kV scan series.
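The core idea behind penalising DRR-to-kV dissimilarity can be illustrated with a minimal sketch. The paper's Torch-DRR module computes differentiable cone-beam DRRs in PyTorch; the function below is only a toy parallel-beam analogue, where a DRR pixel is the line integral (here, a sum) of attenuation values along a ray:

```python
# Toy "DRR" as a parallel-beam line integral: sum attenuation values
# along rays through a 3D volume (nested lists, shape Z x Y x X).
# This is an illustrative simplification, not the paper's cone-beam module.

def drr_parallel(volume, axis=0):
    """Project a 3D volume along one axis; only axis=0 shown in this sketch."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    if axis == 0:  # integrate along Z -> image of shape Y x X
        return [[sum(volume[z][y][x] for z in range(Z)) for x in range(X)]
                for y in range(Y)]
    raise NotImplementedError("only axis=0 shown in this sketch")

# toy 2x2x2 volume of unit attenuation: every ray crosses two voxels
vol = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 1.0]]]
img = drr_parallel(vol)
```

In the actual model, such a projection operator is implemented with differentiable tensor operations so that dissimilarity between the DRR and the measured kV projection can be backpropagated to the motion fields.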


Subject(s)
Cone-Beam Computed Tomography , Four-Dimensional Computed Tomography , Radiotherapy Planning, Computer-Assisted , Respiration , Four-Dimensional Computed Tomography/methods , Humans , Cone-Beam Computed Tomography/methods , Radiotherapy Planning, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Algorithms , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Movement , Motion , Deep Learning
2.
Med Image Anal ; 93: 103097, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38325154

ABSTRACT

Determining early-stage prognostic markers and stratifying patients for effective treatment are two key challenges for improving outcomes for melanoma patients. Previous studies have used tumour transcriptome data to stratify patients into immune subgroups, which were associated with differential melanoma-specific survival and potential predictive biomarkers. However, acquiring transcriptome data is a time-consuming and costly process. Moreover, it is not routinely used in the current clinical workflow. Here, we attempt to overcome this by developing deep learning models to classify gigapixel haematoxylin and eosin (H&E) stained pathology slides, which are well established in clinical workflows, into these immune subgroups. We systematically assess six different multiple instance learning (MIL) frameworks, using five different image resolutions and three different feature extraction methods. We show that pathology-specific self-supervised models using 10x resolution patches generate superior representations for the classification of immune subtypes. In addition, in a primary melanoma dataset, we achieve a mean area under the receiver operating characteristic curve (AUC) of 0.80 for classifying histopathology images into 'high' or 'low immune' subgroups and a mean AUC of 0.82 in an independent TCGA melanoma dataset. Furthermore, we show that these models are able to stratify patients into 'high' and 'low immune' subgroups with significantly different melanoma-specific survival outcomes (log rank test, P < 0.005). We anticipate that MIL methods will allow us to find new biomarkers of high importance, act as a tool for clinicians to infer the immune landscape of tumours and stratify patients, without needing to carry out additional expensive genetic tests.
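The attention-based pooling at the heart of many MIL frameworks can be sketched as follows. Each patch embedding receives a score, a softmax over patches turns scores into attention weights, and the slide-level embedding is the weighted sum. The linear scoring weights here are fixed toy values, not trained parameters:

```python
import math

# Minimal sketch of attention-based MIL pooling: patch embeddings are
# scored, softmax-normalised, and aggregated into one bag embedding.

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_pool(embeddings, score_w):
    # score each instance with a (toy) linear map, then weight-average
    scores = [sum(w * e for w, e in zip(score_w, emb)) for emb in embeddings]
    attn = softmax(scores)
    dim = len(embeddings[0])
    bag = [sum(a * emb[d] for a, emb in zip(attn, embeddings)) for d in range(dim)]
    return bag, attn

bag, attn = attention_pool([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

In a trained model the attention weights double as an interpretability signal, highlighting which patches drove the 'high' vs 'low immune' call.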


Subject(s)
Melanoma , Humans , Melanoma/diagnostic imaging , Melanoma/genetics , ROC Curve , Staining and Labeling , Workflow , Biomarkers
3.
Article in English | MEDLINE | ID: mdl-37022057

ABSTRACT

Modern radiotherapy delivers treatment plans optimised on an individual patient level, using CT-based 3D models of patient anatomy. This optimisation is fundamentally based on simple assumptions about the relationship between radiation dose delivered to the cancer (increased dose will increase cancer control) and normal tissue (increased dose will increase rate of side effects). The details of these relationships are still not well understood, especially for radiation-induced toxicity. We propose a convolutional neural network based on multiple instance learning to analyse toxicity relationships for patients receiving pelvic radiotherapy. A dataset comprising 315 patients was included in this study, with 3D dose distributions, pre-treatment CT scans with annotated abdominal structures, and patient-reported toxicity scores provided for each participant. In addition, we propose a novel mechanism for segregating the attentions over space and dose/imaging features independently for a better understanding of the anatomical distribution of toxicity. Quantitative and qualitative experiments were performed to evaluate the network performance. The proposed network could predict toxicity with 80% accuracy. Attention analysis over space demonstrated that there was a significant association between radiation dose to the anterior and right iliac regions of the abdomen and patient-reported toxicity. Experimental results showed that the proposed network had outstanding performance for toxicity prediction, localisation and explanation, with the ability to generalise to an unseen dataset.

4.
Med Image Anal ; 83: 102678, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36403308

ABSTRACT

Deformable image registration (DIR) can be used to track cardiac motion. Conventional DIR algorithms aim to establish a dense and non-linear correspondence between independent pairs of images. They are, nevertheless, computationally intensive and do not consider temporal dependencies to regulate the estimated motion in a cardiac cycle. In this paper, leveraging deep learning methods, we formulate a novel hierarchical probabilistic model, termed DragNet, for fast and reliable spatio-temporal registration in cine cardiac magnetic resonance (CMR) images and for generating synthetic heart motion sequences. DragNet is a variational inference framework, which takes an image from the sequence in combination with the hidden states of a recurrent neural network (RNN) as inputs to an inference network per time step. As part of this framework, we condition the prior probability of the latent variables on the hidden states of the RNN utilised to capture temporal dependencies. We further condition the posterior of the motion field on a latent variable from the hierarchy and features from the moving image. Subsequently, the RNN updates the hidden state variables based on the feature maps of the fixed image and the latent variables. Different from traditional methods, DragNet performs registration on unseen sequences in a forward pass, which significantly expedites the registration process. Moreover, DragNet enables generating a large number of realistic synthetic image sequences given only one frame, where the corresponding deformations are also retrieved. The probabilistic framework allows for computing spatio-temporal uncertainties in the estimated motion fields. Our results show that DragNet performance is comparable with state-of-the-art methods in terms of registration accuracy, with the advantage of offering analytical pixel-wise motion uncertainty estimation across a cardiac cycle and being a motion generator. We will make our code publicly available.

5.
Med Image Anal ; 75: 102276, 2022 01.
Article in English | MEDLINE | ID: mdl-34753021

ABSTRACT

Automatic shape anomaly detection in large-scale imaging data can be useful for screening suboptimal segmentations and pathologies altering the cardiac morphology without intensive manual labour. We propose a deep probabilistic model for local anomaly detection in sequences of heart shapes, modelled as point sets, in a cardiac cycle. A deep recurrent encoder-decoder network captures the spatio-temporal dependencies to predict the next shape in the cycle and thus derive the outlier points that are attributed to excessive deviations from the network prediction. A predictive mixture distribution models the inlier and outlier classes via Gaussian and uniform distributions, respectively. A Gibbs sampling Expectation-Maximisation (EM) algorithm computes soft anomaly scores of the points via the posterior probabilities of each class in the E-step and estimates the parameters of the network and the predictive distribution in the M-step. We demonstrate the versatility of the method using two shape datasets derived from: (i) one million biventricular CMR images from 20,000 participants in the UK Biobank (UKB), and (ii) routine diagnostic imaging from the Multi-Centre, Multi-Vendor, and Multi-Disease Cardiac Image (M&Ms) dataset. Experiments show that the detected shape anomalies in the UKB dataset are mostly associated with poor segmentation quality, and the predicted shape sequences show significant improvement over the input sequences. Furthermore, evaluations on U-Net based shapes from the M&Ms dataset reveal that the anomalies are attributable to the underlying pathologies that affect the ventricles. The proposed model can therefore be used as an effective mechanism to sift shape anomalies in large-scale cardiac imaging pipelines for further analysis.
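The E-step soft anomaly score described above reduces to a posterior responsibility under a two-component mixture: a Gaussian density for inliers and a uniform density for outliers. A minimal sketch, with toy values for the mixing weight, noise scale, and uniform support (none of these come from the paper):

```python
import math

# Posterior probability that a point is an inlier, under a mixture of a
# Gaussian (inlier) and a uniform (outlier) density over the residual
# between a point and the network's shape prediction. Toy parameters.

def inlier_posterior(residual, pi_in=0.9, sigma=1.0, support=20.0):
    g = math.exp(-0.5 * (residual / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    u = 1.0 / support                      # flat outlier density
    num = pi_in * g
    return num / (num + (1 - pi_in) * u)   # Bayes' rule over the two classes
```

Points whose residuals are large relative to the Gaussian scale receive posteriors near zero, i.e., high anomaly scores; the M-step would then re-estimate the network and distribution parameters weighted by these responsibilities.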


Subject(s)
Algorithms , Models, Statistical , Heart/diagnostic imaging , Heart Ventricles/diagnostic imaging , Humans , Image Processing, Computer-Assisted , Motion
6.
IEEE Trans Med Imaging ; 39(5): 1380-1391, 2020 05.
Article in English | MEDLINE | ID: mdl-31647422

ABSTRACT

Generalized nucleus segmentation techniques can contribute greatly to reducing the time to develop and validate visual biomarkers for new digital pathology datasets. We summarize the results of the MoNuSeg 2018 Challenge, whose objective was to develop generalizable nuclei segmentation techniques in digital pathology. The challenge was an official satellite event of the MICCAI 2018 conference, in which 32 teams with more than 80 participants from geographically diverse institutes participated. Contestants were given a training set with 30 images from seven organs with annotations of 21,623 individual nuclei. A test dataset with 14 images taken from seven organs, including two organs that did not appear in the training set, was released without annotations. Entries were evaluated based on average aggregated Jaccard index (AJI) on the test set to prioritize accurate instance segmentation as opposed to mere semantic segmentation. More than half the teams that completed the challenge outperformed a previous baseline. Among the trends observed that contributed to increased accuracy were the use of color normalization as well as heavy data augmentation. Additionally, fully convolutional networks inspired by variants of U-Net, FCN, and Mask-RCNN were popularly used, typically based on ResNet or VGG base architectures. Watershed segmentation on predicted semantic segmentation maps was a popular post-processing strategy. Several of the top techniques compared favorably to an individual human annotator and can be used with confidence for nuclear morphometrics.
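One common formulation of the aggregated Jaccard index (AJI) used for ranking can be sketched with nuclei represented as sets of pixel coordinates: each ground-truth instance is matched to the prediction maximising IoU, intersections and unions accumulate, and never-matched predictions are penalised by adding their area to the union. This is a simplified reading of the metric, not the challenge's reference implementation:

```python
# Sketch of the Aggregated Jaccard Index (AJI) for instance segmentation.
# Instances are sets of (row, col) pixel coordinates.

def aji(gt_instances, pred_instances):
    used, inter_sum, union_sum = set(), 0, 0
    for g in gt_instances:
        best_iou, best_j = 0.0, None
        for j, p in enumerate(pred_instances):
            inter = len(g & p)
            iou = inter / len(g | p) if inter else 0.0
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is None:
            union_sum += len(g)            # ground truth with no match
        else:
            used.add(best_j)
            p = pred_instances[best_j]
            inter_sum += len(g & p)
            union_sum += len(g | p)
    for j, p in enumerate(pred_instances):
        if j not in used:
            union_sum += len(p)            # false-positive instances
    return inter_sum / union_sum if union_sum else 0.0
```

Unlike a plain pixel-wise Jaccard index, splitting one nucleus into two predicted instances is penalised, which is why AJI prioritises instance over semantic segmentation.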


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Cell Nucleus , Humans
7.
Med Image Anal ; 56: 26-42, 2019 08.
Article in English | MEDLINE | ID: mdl-31154149

ABSTRACT

Population imaging studies generate data for developing and implementing personalised health strategies to prevent, or more effectively treat, disease. Large prospective epidemiological studies acquire imaging for pre-symptomatic populations. These studies enable the early discovery of alterations due to impending disease and the early identification of individuals at risk. Such studies pose new challenges requiring automatic image analysis. To date, few large-scale population-level cardiac imaging studies have been conducted. One such study stands out for its sheer size, careful implementation, and availability of top-quality expert annotation: the UK Biobank (UKB). The resulting massive imaging datasets (targeting ca. 100,000 subjects) have put published approaches for cardiac image quantification to the test. In this paper, we present and evaluate a cardiac magnetic resonance (CMR) image analysis pipeline that properly scales up and can provide a fully automatic analysis of the UKB CMR study. Without manual user interactions, our pipeline performs end-to-end image analytics from multi-view cine CMR images all the way to anatomical and functional bi-ventricular quantification, all while maintaining relevant quality controls on the CMR input images and the resulting image segmentations. To the best of our knowledge, this is the first published attempt to fully automate the extraction of global and regional reference ranges of all key functional cardiovascular indexes, from both left and right cardiac ventricles, for a population of 20,000 subjects imaged at 50 time frames per subject, for a total of one million CMR volumes. In addition, our pipeline provides 3D anatomical bi-ventricular models of the heart. These models enable the extraction of detailed information on the morphodynamics of the two ventricles for subsequent association with genetic, omics, lifestyle habits, exposure information, and other information provided in population imaging studies.
We validated our proposed CMR analytics pipeline against manual expert readings on a reference cohort of 4620 subjects with contour delineations and corresponding clinical indexes. Our results show broad, significant agreement between the manually obtained reference indexes and those automatically computed via our framework. 80.67% of subjects were processed with a mean contour distance of less than 1 pixel, and 17.50% with a mean contour distance between 1 and 2 pixels. Finally, we compare our pipeline with a recently published deep learning approach reporting on UKB data. Our comparison shows similar performance in terms of segmentation accuracy with respect to human experts.


Subject(s)
Heart Ventricles/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine/methods , Models, Statistical , Neural Networks, Computer , Biological Specimen Banks , Female , Humans , Imaging, Three-Dimensional , Male , Pattern Recognition, Automated , United Kingdom
8.
IEEE Trans Image Process ; 28(7): 3246-3260, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30703023

ABSTRACT

The recognition of different cell compartments, the types of cells, and their interactions is a critical aspect of quantitative cell biology. However, automating this problem has proven to be non-trivial and requires solving multi-class image segmentation tasks that are challenging owing to the high similarity of objects from different classes and irregularly shaped structures. To alleviate this, graphical models are useful due to their ability to make use of prior knowledge and model inter-class dependences. Directed acyclic graphs, such as trees, have been widely used to model top-down statistical dependences as a prior for improved image segmentation. However, trees can capture only a limited set of inter-class constraints. To overcome this limitation, we propose polytree graphical models, which capture label proximity relations more naturally than tree-based approaches. A novel recursive mechanism based on two-pass message passing was developed to efficiently calculate closed-form posteriors of graph nodes on polytrees. The algorithm is evaluated on simulated data and on two publicly available fluorescence microscopy datasets, outperforming directed trees and three state-of-the-art convolutional neural networks, namely SegNet, DeepLab, and PSPNet. Polytrees are shown to outperform directed trees in predicting segmentation error by highlighting areas in the segmented image that do not comply with prior knowledge. This paves the way to uncertainty measures on the resulting segmentation and guides subsequent segmentation refinement.
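The two-pass message-passing idea can be illustrated on the smallest interesting case: a 3-node chain (a special case of a polytree), where messages are collected inward to the middle node and its exact marginal is read off in closed form. Potentials and unaries below are illustrative toy values, not the paper's model:

```python
# Toy sum-product on a 3-node chain A - B - C over binary labels:
# msg[x_to] = sum_{x_from} potential[x_from][x_to] * unary[x_from]

def message(potential, unary):
    k = len(unary)
    return [sum(potential[i][j] * unary[i] for i in range(k)) for j in range(k)]

psi = [[2.0, 1.0], [1.0, 2.0]]                 # pairwise potential favouring agreement
phi_a, phi_b, phi_c = [1.0, 0.0], [1.0, 1.0], [0.5, 0.5]   # node evidence

m_ab = message(psi, phi_a)                      # pass 1: A -> B
m_cb = message(psi, phi_c)                      # pass 2: C -> B
belief = [phi_b[x] * m_ab[x] * m_cb[x] for x in range(2)]
z = sum(belief)
marginal_b = [b / z for b in belief]            # exact posterior over B's label
```

On a polytree, the same local computation runs in two sweeps (leaves-to-root, then root-to-leaves), yielding exact posteriors for every node; low-confidence posteriors flag regions that conflict with the prior, which is the segmentation-error signal described above.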

9.
Med Image Anal ; 53: 47-63, 2019 04.
Article in English | MEDLINE | ID: mdl-30684740

ABSTRACT

A probabilistic framework for registering generalised point sets comprising multiple voxel-wise data features such as positions, orientations and scalar-valued quantities, is proposed. It is employed for the analysis of magnetic resonance diffusion tensor image (DTI)-derived quantities, such as fractional anisotropy (FA) and fibre orientation, across multiple subjects. A hybrid Student's t-Watson-Gaussian mixture model-based non-rigid registration framework is formulated for the joint registration and clustering of voxel-wise DTI-derived data, acquired from multiple subjects. The proposed approach jointly estimates the non-rigid transformations necessary to register an unbiased mean template (represented as a 7-dimensional hybrid point set comprising spatial positions, fibre orientations and FA values) to white matter regions of interest (ROIs), and approximates the joint distribution of voxel spatial positions, their associated principal diffusion axes, and FA. Specific white matter ROIs, namely, the corpus callosum and cingulum, are analysed across healthy control (HC) subjects (K = 20 samples) and patients diagnosed with mild cognitive impairment (MCI) (K = 20 samples) or Alzheimer's disease (AD) (K = 20 samples) using the proposed framework, facilitating inter-group comparisons of FA and fibre orientations. Group-wise analyses of the latter are not afforded by conventional approaches such as tract-based spatial statistics (TBSS) and voxel-based morphometry (VBM).
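For reference, the FA scalar registered alongside position and fibre orientation is a standard function of the three diffusion-tensor eigenvalues, measuring how far diffusion departs from isotropy (0 for a sphere, 1 for a line):

```python
import math

# Fractional anisotropy from the diffusion tensor's eigenvalues
# (l1, l2, l3): FA = sqrt(3/2 * sum((li - mean)^2) / sum(li^2)).

def fractional_anisotropy(l1, l2, l3):
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return math.sqrt(1.5 * num / den) if den else 0.0
```

The principal diffusion axis in the hybrid point set is the eigenvector of the largest eigenvalue; the Watson component of the mixture handles its axial (sign-invariant) nature.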


Subject(s)
Alzheimer Disease/diagnostic imaging , Cognitive Dysfunction/diagnostic imaging , Diffusion Magnetic Resonance Imaging , Image Interpretation, Computer-Assisted/methods , Algorithms , Anisotropy , Corpus Callosum/diagnostic imaging , Humans , White Matter/diagnostic imaging
10.
IEEE J Biomed Health Inform ; 23(2): 509-518, 2019 03.
Article in English | MEDLINE | ID: mdl-29994323

ABSTRACT

Lesion segmentation is the first step in most automatic melanoma recognition systems. Deficiencies and difficulties in dermoscopic images such as color inconstancy, hair occlusion, dark corners, and color charts make lesion segmentation an intricate task. In order to detect the lesion in the presence of these problems, we propose a supervised saliency detection method tailored for dermoscopic images based on discriminative regional feature integration (DRFI). A DRFI method incorporates multilevel segmentation, regional contrast, property, background descriptors, and a random forest regressor to create saliency scores for each region in the image. In our improved saliency detection method, mDRFI, we have added new features to the regional property descriptors. Also, in order to achieve more robust regional background descriptors, a thresholding algorithm is proposed to obtain a new pseudo-background region. Findings reveal that mDRFI is superior to DRFI in detecting the lesion as the salient object in dermoscopic images. The proposed overall lesion segmentation framework uses the detected saliency map to construct an initial mask of the lesion through thresholding and postprocessing operations. The initial mask is then evolved in a level set framework to better fit the lesion's boundaries. The results of evaluation tests on three public datasets show that our proposed segmentation method outperforms other conventional state-of-the-art segmentation algorithms and its performance is comparable with most recent approaches that are based on deep convolutional neural networks.


Subject(s)
Dermoscopy/methods , Image Interpretation, Computer-Assisted/methods , Skin Neoplasms/diagnostic imaging , Algorithms , Databases, Factual , Humans , Skin/diagnostic imaging , Supervised Machine Learning
11.
Article in English | MEDLINE | ID: mdl-30475705

ABSTRACT

Cardiac magnetic resonance (CMR) images play a growing role in the diagnostic imaging of cardiovascular diseases. Full coverage of the left ventricle (LV), from base to apex, is a basic criterion for CMR image quality and necessary for accurate measurement of cardiac volume and functional assessment. Incomplete coverage of the LV is identified through visual inspection, which is time-consuming and usually done retrospectively in the assessment of large imaging cohorts. This paper proposes a novel automatic method for determining LV coverage from CMR images by using Fisher-discriminative three-dimensional (FD3D) convolutional neural networks (CNNs). In contrast to our previous method employing 2D CNNs, this approach utilizes spatial contextual information in CMR volumes, extracts more representative high-level features and enhances the discriminative capacity of the baseline 2D CNN learning framework, thus achieving superior detection accuracy. A two-stage framework is proposed to identify missing basal and apical slices in measurements of CMR volume. First, the FD3D CNN extracts high-level features from the CMR stacks. These image representations are then used to detect the missing basal and apical slices. Compared to the traditional 3D CNN strategy, the proposed FD3D CNN minimizes within-class scatter and maximizes between-class scatter. We performed extensive experiments to validate the proposed method on more than 5,000 independent volumetric CMR scans from the UK Biobank study, achieving low error rates for missing basal/apical slice detection (4.9%/4.6%). The proposed method can also be adopted for assessing LV coverage for other types of CMR image data.
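The Fisher-discriminative objective named above can be made concrete in its simplest 1-D form: minimise within-class scatter, maximise between-class scatter. The FD3D network applies this criterion to learned high-dimensional features; the scalar version below is only a sketch of the underlying quantity:

```python
# Fisher criterion for 1-D features of two classes:
# between-class scatter (mean separation) divided by within-class scatter.

def fisher_ratio(class_a, class_b):
    ma = sum(class_a) / len(class_a)
    mb = sum(class_b) / len(class_b)
    sw = (sum((x - ma) ** 2 for x in class_a)
          + sum((x - mb) ** 2 for x in class_b))   # within-class scatter
    sb = (ma - mb) ** 2                             # between-class scatter
    return sb / sw if sw else float("inf")

tight = fisher_ratio([0.0, 0.1], [1.0, 1.1])        # compact, well-separated classes
loose = fisher_ratio([0.0, 1.0], [0.5, 1.5])        # spread, overlapping classes
```

Training with such a term pushes "full coverage" and "missing slice" feature clusters apart while tightening each cluster, which is what lifts detection accuracy over the plain 3D CNN baseline.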

12.
Med Phys ; 2018 Jul 05.
Article in English | MEDLINE | ID: mdl-29974971

ABSTRACT

PURPOSE: This work proposes a new reliable computer-aided diagnostic (CAD) system for the diagnosis of breast cancer from breast ultrasound (BUS) images. The system can be useful to reduce the number of biopsies and pathological tests, which are invasive, costly, and often unnecessary. METHODS: The proposed CAD system classifies breast tumors into benign and malignant classes using morphological and textural features extracted from breast ultrasound (BUS) images. The images are first preprocessed to enhance the edges and filter the speckles. The tumor is then segmented semiautomatically using the watershed method. Given the tumor contour, a set of 855 features including 21 shape-based, 810 contour-based, and 24 textural features is extracted from each tumor. Then, a Bayesian Automatic Relevance Determination (ARD) mechanism is used for computing the discrimination power of different features and for dimensionality reduction. Finally, a logistic regression classifier computes the posterior probabilities of malignant vs benign tumors using the reduced set of features. RESULTS: A dataset of 104 BUS images of breast tumors, including 72 benign and 32 malignant tumors, was used for evaluation using an eightfold cross-validation. The algorithm outperformed six state-of-the-art methods for BUS image classification with large margins by achieving 97.12% accuracy, 93.75% sensitivity, and 98.61% specificity rates. CONCLUSIONS: Using ARD, the proposed CAD system selects five new features for breast tumor classification and outperforms the state of the art, making it a reliable and complementary tool to help clinicians diagnose breast cancer.

13.
IEEE Trans Pattern Anal Mach Intell ; 40(4): 891-904, 2018 04.
Article in English | MEDLINE | ID: mdl-28475045

ABSTRACT

Inferring a probability density function (pdf) for shape from a population of point sets is a challenging problem. The lack of point-to-point correspondences and the non-linearity of the shape spaces undermine linear models. Manifold-based methods model shape variations naturally; however, their statistics are often limited to a single geodesic mean and an arbitrary number of variation modes. We relax the manifold assumption and consider a piece-wise linear form, implementing a mixture of distinctive shape classes. The pdf for point sets is defined hierarchically, modeling a mixture of Probabilistic Principal Component Analyzers (PPCA) in a higher dimension. A Variational Bayesian approach is designed for unsupervised learning of the posteriors of point set labels, local variation modes, and point correspondences. By maximizing the model evidence, the numbers of clusters, modes of variation, and points on the mean models are selected automatically. Using the predictive distribution, we project a test shape onto the spaces spanned by the local PPCAs. The method is applied to point sets from: i) synthetic data, ii) healthy versus pathological heart morphologies, and iii) lumbar vertebrae. The proposed method selects models with the expected numbers of clusters and variation modes, achieving lower generalization-specificity errors than the state of the art.

14.
IEEE J Biomed Health Inform ; 22(2): 503-515, 2018 03.
Article in English | MEDLINE | ID: mdl-28103561

ABSTRACT

Statistical shape modeling is a powerful tool for visualizing and quantifying geometric and functional patterns of the heart. After myocardial infarction (MI), the left ventricle typically remodels in response to physiological challenges. Several methods have been proposed in the literature to describe statistical shape changes. Which method best characterizes left ventricular remodeling after MI is an open research question. A better descriptor of remodeling is expected to provide a more accurate evaluation of disease status in MI patients. We therefore designed a challenge to test shape characterization in MI given a set of three-dimensional left ventricular surface points. The training set comprised 100 MI patients and 100 asymptomatic volunteers (AV). The challenge was initiated in 2015 at the Statistical Atlases and Computational Models of the Heart workshop, in conjunction with the MICCAI conference. The training set with labels was provided to participants, who were asked to submit the likelihood of MI from a different (validation) set of 200 cases (100 AV and 100 MI). Sensitivity, specificity, accuracy and area under the receiver operating characteristic curve were used as the outcome measures. The goals of this challenge were to (1) establish a common dataset for evaluating statistical shape modeling algorithms in MI, and (2) test whether statistical shape modeling provides additional information characterizing MI patients over standard clinical measures. Eleven groups with a wide variety of classification and feature extraction approaches participated in this challenge. All methods achieved excellent classification results with accuracies ranging from 0.83 to 0.98. The areas under the receiver operating characteristic curves were all above 0.90. Four methods showed significantly higher performance than standard clinical measures. The dataset and software for evaluation are available from the Cardiac Atlas Project website.
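The area under the ROC curve used as an outcome measure here is equivalent to the Wilcoxon-Mann-Whitney statistic: the probability that a randomly chosen MI case is scored above a randomly chosen AV case, with ties counting half. A minimal sketch:

```python
# AUC as the rank statistic P(score_pos > score_neg), ties counted 0.5.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This formulation makes clear why AUC depends only on the ranking of the submitted MI likelihoods, not on their calibration.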

15.
Med Image Anal ; 44: 156-176, 2018 02.
Article in English | MEDLINE | ID: mdl-29248842

ABSTRACT

A probabilistic group-wise similarity registration technique based on Student's t-mixture model (TMM) and a multi-resolution extension of the same (mrTMM) are proposed in this study, to robustly align shapes and establish valid correspondences, for the purpose of training statistical shape models (SSMs). Shape analysis across large cohorts requires automatic generation of the requisite training sets. Automated segmentation and landmarking of medical images often result in shapes with varying proportions of outliers and consequently require a robust method of alignment and correspondence estimation. Both TMM and mrTMM are validated by comparison with state-of-the-art registration algorithms based on Gaussian mixture models (GMMs), using both synthetic and clinical data. Four clinical data sets are used for validation: (a) 2D femoral heads (K= 1000 samples generated from DXA images of healthy subjects); (b) control-hippocampi (K= 50 samples generated from T1-weighted magnetic resonance (MR) images of healthy subjects); (c) MCI-hippocampi (K= 28 samples generated from MR images of patients diagnosed with mild cognitive impairment); and (d) heart shapes comprising left and right ventricular endocardium and epicardium (K= 30 samples generated from short-axis MR images of: 10 healthy subjects, 10 patients diagnosed with pulmonary hypertension and 10 diagnosed with hypertrophic cardiomyopathy). The proposed methods significantly outperformed the state-of-the-art in terms of registration accuracy in the experiments involving synthetic data, with mrTMM offering significant improvement over TMM. With the clinical data, both methods performed comparably to the state-of-the-art for the hippocampi and heart data sets, which contained few outliers.
They outperformed the state-of-the-art for the femur data set, containing large proportions of outliers, in terms of alignment accuracy, and the quality of SSMs trained, quantified in terms of generalization, compactness and specificity.


Subject(s)
Absorptiometry, Photon , Algorithms , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Cardiomyopathy, Hypertrophic/diagnostic imaging , Femur Head/diagnostic imaging , Heart/diagnostic imaging , Hippocampus/diagnostic imaging , Humans , Hypertension, Pulmonary/diagnostic imaging , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity
16.
J Med Imaging (Bellingham) ; 4(3): 034006, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28924571

ABSTRACT

Extraction of blood vessels in retinal images is an important step for computer-aided diagnosis of ophthalmic pathologies. We propose an approach for blood vessel tracking and diameter estimation. We hypothesize that the curvature and the diameter of blood vessels are Gaussian processes (GPs). The local Radon transform, which is robust against noise, is subsequently used to compute the features and train the GPs. By learning the kernelized covariance matrix from training data, the vessel direction and its diameter are estimated. In order to detect bifurcations, multiple GPs are used and the difference between their corresponding predicted directions is quantified. The combination of Radon features and GPs yields good performance in the presence of noise. The proposed method successfully deals with typically difficult cases such as bifurcations and central arterial reflex, and also tracks thin vessels with high accuracy. Experiments are conducted on the publicly available DRIVE, STARE, CHASEDB1, and high-resolution fundus databases, evaluating sensitivity, specificity, and the Matthews correlation coefficient (MCC). Experimental results on these datasets show that the proposed method reaches an average sensitivity of 75.67%, specificity of 97.46%, and MCC of 72.18%, which is comparable to the state of the art.
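Of the three reported metrics, the MCC is the least standard to compute by hand; it follows directly from the four confusion-matrix counts and is well suited to the heavy class imbalance of vessel vs background pixels:

```python
import math

# Matthews correlation coefficient from confusion-matrix counts:
# +1 = perfect prediction, 0 = chance-level, -1 = total disagreement.

def mcc(tp, tn, fp, fn):
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0
```

Unlike accuracy, MCC cannot be inflated by simply predicting the majority (background) class, which is why it is reported alongside sensitivity and specificity here.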

17.
Comput Methods Programs Biomed ; 151: 139-149, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28946995

ABSTRACT

BACKGROUND AND OBJECTIVE: Retinal vascular tree extraction plays an important role in computer-aided diagnosis and surgical operations. Junction point detection and classification provide useful information about the structure of the vascular network, facilitating objective analysis of retinal diseases. METHODS: In this study, we present a new machine learning algorithm for joint classification and tracking of retinal blood vessels. Our method is based on a hierarchical probabilistic framework, where local intensity cross sections are classified as either junction or vessel points. Gaussian basis functions are used for intensity interpolation, and the corresponding linear coefficients are assumed to be samples from class-specific Gamma distributions. Hence, a directed Probabilistic Graphical Model (PGM) is proposed, and the hyperparameters are estimated using a Maximum Likelihood (ML) solution based on the Laplace approximation. RESULTS: The performance of the proposed method is evaluated using precision and recall rates on the REVIEW database. Our experiments show that the proposed approach achieves promising results in bifurcation point detection and classification, with precision and recall rates of 88.67% each. CONCLUSIONS: This technique yields a classifier with high precision and recall compared with Xu's method.
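The class-specific Gamma assumption on the interpolation coefficients suggests a simple likelihood-ratio classifier for a cross-section. Below is a hedged sketch using SciPy's Gamma density; the shape/scale parameters, the function name, and the uniform prior are placeholders, not values from the paper:

```python
import numpy as np
from scipy.stats import gamma

def classify_cross_section(coeffs, vessel_params, junction_params, prior_junction=0.5):
    # coeffs: Gaussian-basis interpolation coefficients of one intensity cross-section.
    # vessel_params / junction_params: (shape, scale) of the class-specific
    # Gamma distributions, assumed already estimated (e.g. by ML).
    ll_vessel = gamma.logpdf(coeffs, a=vessel_params[0], scale=vessel_params[1]).sum()
    ll_junction = gamma.logpdf(coeffs, a=junction_params[0], scale=junction_params[1]).sum()
    # Compare unnormalised log-posteriors of the two classes.
    post_vessel = ll_vessel + np.log(1.0 - prior_junction)
    post_junction = ll_junction + np.log(prior_junction)
    return "junction" if post_junction > post_vessel else "vessel"
```

In the paper the Gamma hyperparameters are tied together through a directed PGM rather than fixed per class as here.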


Subject(s)
Algorithms , Diagnosis, Computer-Assisted , Machine Learning , Retinal Vessels/diagnostic imaging , Humans , Likelihood Functions , Models, Statistical , Normal Distribution
18.
Med Phys ; 44(7): 3615-3629, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28409834

ABSTRACT

PURPOSE: The aim of this study was to develop a novel technique for lung nodule detection using an optimized feature set. This feature set was arrived at after rigorous experimentation, which helped to reduce false positives significantly. METHOD: The proposed method starts with preprocessing, removing noise from the input images, followed by lung segmentation using optimal thresholding. The image is then enhanced using multiscale dot enhancement filtering prior to nodule detection and feature extraction. Finally, classification of lung nodules is achieved using a Support Vector Machine (SVM) classifier. The feature set consists of intensity, shape (2D and 3D), and texture features, selected to optimize sensitivity and reduce false positives. In addition to the SVM, other supervised classifiers such as K-Nearest-Neighbor (KNN), Decision Tree, and Linear Discriminant Analysis (LDA) were used for performance comparison. The extracted features were also compared class-wise to determine the most relevant features for lung nodule detection. The proposed system was evaluated using 850 scans from the Lung Image Database Consortium (LIDC) dataset with a k-fold cross-validation scheme. RESULTS: The overall sensitivity is improved compared to previous methods, and false positives per scan are reduced significantly. The achieved sensitivities at the detection and classification stages are 94.20% and 98.15%, respectively, with only 2.19 false positives per scan. CONCLUSIONS: It is very difficult to achieve high performance metrics using only a single feature class; a hybrid approach to feature selection therefore remains the better choice. Choosing the right set of features improves the overall accuracy of the system by raising sensitivity and reducing false positives.
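The final classification stage described above can be sketched with scikit-learn: an RBF-kernel SVM evaluated with k-fold cross-validation, here on synthetic stand-in features rather than real LIDC intensity/shape/texture descriptors. Everything below is an illustrative assumption, not the authors' pipeline:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-candidate intensity, 2D/3D shape, and texture features.
n = 200
nodule_feats = rng.normal(loc=1.0, scale=0.3, size=(n, 6))
background_feats = rng.normal(loc=0.0, scale=0.3, size=(n, 6))
X = np.vstack([nodule_feats, background_feats])
y = np.array([1] * n + [0] * n)  # 1 = nodule, 0 = non-nodule candidate

# RBF-kernel SVM with feature standardisation, scored by k-fold CV (k = 5 here).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

Swapping `SVC` for `KNeighborsClassifier`, `DecisionTreeClassifier`, or `LinearDiscriminantAnalysis` reproduces the kind of classifier comparison the paper reports.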


Subject(s)
Algorithms , Diagnosis, Computer-Assisted , Solitary Pulmonary Nodule/diagnostic imaging , Support Vector Machine , Tomography, X-Ray Computed , Humans , Lung Neoplasms , Radiographic Image Interpretation, Computer-Assisted , Sensitivity and Specificity
19.
Med Image Anal ; 33: 27-32, 2016 10.
Article in English | MEDLINE | ID: mdl-27373145

ABSTRACT

Medical image analysis has grown into a mature field, challenged by progress made across all medical imaging technologies and more recent breakthroughs in biological imaging. The cross-fertilisation between medical image analysis, biomedical imaging physics and technology, and domain knowledge from medicine and biology has spurred a truly interdisciplinary effort that has stretched beyond the original boundaries of the disciplines that gave birth to this field and created stimulating and enriching synergies. Consideration of how the field has evolved, together with the experience of work carried out in our centre over the last 15 years, has led us to envision a future emphasis of medical imaging in Precision Imaging. Precision Imaging is not a new discipline but rather a distinct emphasis in medical imaging, born at the crossroads between, and unifying, the efforts behind mechanistic and phenomenological model-based imaging. It captures three main directions in the effort to deal with the information deluge in the imaging sciences, and thus to achieve wisdom from data, information, and knowledge. Precision Imaging is, finally, characterised by being descriptive, predictive, and integrative about the imaged object. This paper provides a brief and personal perspective on how the field has evolved, summarises and formalises our vision of Precision Imaging for Precision Medicine, and highlights some connections with past research and current trends in the field.


Subject(s)
Diagnostic Imaging , Precision Medicine , Algorithms , Animals , Diagnostic Imaging/trends , Humans , Precision Medicine/trends
20.
Med Image Anal ; 32: 46-68, 2016 08.
Article in English | MEDLINE | ID: mdl-27054277

ABSTRACT

Despite continuous progress in X-ray angiography systems, X-ray coronary angiography is fundamentally limited by its 2D representation of moving coronary arterial trees, which can negatively impact assessment of coronary artery disease and guidance of percutaneous coronary intervention. To provide clinicians with 3D/3D+time information on the coronary arteries, methods that compute reconstructions of coronary arteries from X-ray angiography are required. Because of several factors (e.g. cardiac and respiratory motion, type of X-ray system), reconstruction from X-ray coronary angiography has generated a vast amount of research and remains a challenging and dynamic research area. In this paper, we review the state-of-the-art approaches to reconstruction of high-contrast coronary arteries from X-ray angiography. We mainly focus on the theoretical features of model-based (modelling) and tomographic reconstruction of coronary arteries, and discuss the evaluation strategies. We also discuss the potential role of reconstructions in clinical decision making and interventional guidance, and highlight areas for future research.
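A core building block of model-based 3D reconstruction from two angiographic views is linear (DLT) triangulation of a vessel point from its 2D projections. The sketch below assumes known 3x4 projection matrices and is a generic textbook illustration, not a method from any specific paper in this review:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation: recover a 3D point from its 2D image
    # coordinates x1, x2 under two 3x4 projection matrices P1, P2.
    # Each image coordinate contributes two homogeneous linear constraints.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenise
```

In practice the projection matrices come from the calibrated C-arm geometry, and cardiac/respiratory motion must be compensated before points matched across views can be triangulated reliably.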


Subject(s)
Algorithms , Coronary Angiography/methods , Coronary Vessels/anatomy & histology , Coronary Vessels/diagnostic imaging , Clinical Decision-Making , Coronary Angiography/trends , Humans , Image Interpretation, Computer-Assisted/methods