Results 1-11 of 11
1.
Med Image Anal ; 91: 103034, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37984127

ABSTRACT

Statistical shape modeling (SSM) characterizes anatomical variations in a population of shapes generated from medical images. Statistical analysis of shapes requires a consistent shape representation across samples in the shape cohort. Establishing this representation entails a processing pipeline that includes anatomy segmentation, image re-sampling, shape-based registration, and non-linear, iterative optimization. These shape representations are then used to extract low-dimensional, anatomically relevant shape descriptors that facilitate subsequent statistical analyses in different applications. However, the current process of obtaining these shape descriptors from imaging data relies on human and computational resources, requiring domain expertise for segmenting the anatomies of interest. Moreover, this same taxing pipeline needs to be repeated to infer shape descriptors for new image data using a pre-trained/existing shape model. Here, we propose DeepSSM, a deep learning-based framework for learning the functional mapping from images to low-dimensional shape descriptors and their associated shape representations, thereby inferring statistical representations of anatomy directly from 3D images. Once trained using an existing shape model, DeepSSM circumvents the heavy, manual pre-processing and segmentation required by classical models and significantly improves computational time, making it a viable solution for fully end-to-end shape modeling applications. In addition, we introduce a model-based data-augmentation strategy to address data scarcity, a typical scenario in shape modeling applications. Finally, this paper presents and analyzes two architectural variants of DeepSSM with different loss functions using three medical datasets and their downstream clinical applications. Experiments show that DeepSSM performs comparably to or better than state-of-the-art SSM both quantitatively and on application-driven downstream tasks. Therefore, DeepSSM aims to provide a comprehensive blueprint for deep learning-based image-to-shape models.


Subject(s)
Deep Learning , Humans , Imaging, Three-Dimensional/methods , Models, Statistical , Image Processing, Computer-Assisted/methods
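To make the image-to-descriptor mapping concrete, the following is a minimal sketch of the kind of regressor the abstract describes: a small 3D CNN that maps an unsegmented volume to PCA loadings of a pre-built point distribution model. This is an illustrative assumption, not the authors' architecture; layer sizes, the number of loadings, and variable names are invented for the example.

```python
# Hedged sketch of an image-to-shape-descriptor regressor in the spirit of
# DeepSSM (not the authors' code). Shapes and sizes are illustrative.
import torch
import torch.nn as nn

class ImageToLoadings(nn.Module):
    def __init__(self, num_loadings=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, num_loadings)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return self.head(z)                       # predicted PCA loadings

model = ImageToLoadings()
volume = torch.randn(2, 1, 64, 64, 64)            # batch of unsegmented 3D images
loadings = model(volume)                          # (2, 16) shape descriptors
# Correspondence points would follow from the PDM: points = mean + loadings @ eigvecs
```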
2.
Med Image Anal ; 73: 102157, 2021 10.
Article in English | MEDLINE | ID: mdl-34293535

ABSTRACT

In current biological and medical research, statistical shape modeling (SSM) provides an essential framework for the characterization of anatomy/morphology. Such analysis is often driven by the identification of a relatively small number of geometrically consistent features found across the samples of a population. These features can subsequently provide information about the population's shape variation. Dense correspondence models offer ease of computation and yield an interpretable low-dimensional shape descriptor when followed by dimensionality reduction. However, automatic methods for obtaining such correspondences usually require image segmentation followed by significant preprocessing, which is taxing in terms of both computation and human resources. In many cases, the segmentation and subsequent processing require manual guidance and anatomy-specific domain expertise. This paper proposes a self-supervised deep learning approach for discovering landmarks from images that can directly be used as a shape descriptor for subsequent analysis. We use landmark-driven image registration as the primary task to force the neural network to discover landmarks that register the images well. We also propose a regularization term that allows for robust optimization of the neural network and ensures that the landmarks uniformly span the image domain. The proposed method circumvents segmentation and preprocessing and directly produces a usable shape descriptor from just 2D or 3D images. In addition, we propose two variants of the training loss function that allow prior shape information to be integrated into the model. We apply this framework to several 2D and 3D datasets to obtain their shape descriptors, and we analyze how well these descriptors capture shape information by performing shape-driven applications that, depending on the data, range from shape clustering to severity prediction to outcome diagnosis.


Subject(s)
Imaging, Three-Dimensional , Models, Statistical , Humans , Neural Networks, Computer
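The self-supervision signal described above can be sketched in a deliberately simplified 2D form (this is not the paper's implementation): a network would predict K landmarks per image, a transform is fit between the two landmark sets, and the warped moving image is compared with the fixed image. The affine transform class, function names, and tensor sizes below are assumptions for illustration.

```python
# Minimal 2D sketch of landmark-driven registration as a training signal.
import torch
import torch.nn.functional as F

def fit_affine(src, dst):
    """Least-squares 2x3 affine mapping src landmarks (K, 2) to dst (K, 2)."""
    ones = torch.ones(src.shape[0], 1)
    A = torch.cat([src, ones], dim=1)            # (K, 3)
    theta, *_ = torch.linalg.lstsq(A, dst)       # (3, 2) solution
    return theta.T                               # (2, 3)

def registration_loss(fixed, moving, lm_fixed, lm_moving):
    """fixed/moving: (1, 1, H, W) images; landmarks in normalized [-1, 1] coords."""
    theta = fit_affine(lm_fixed, lm_moving).unsqueeze(0)          # (1, 2, 3)
    grid = F.affine_grid(theta, fixed.shape, align_corners=False)
    warped = F.grid_sample(moving, grid, align_corners=False)
    return F.mse_loss(warped, fixed)             # good landmarks -> good alignment
```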
3.
J Craniofac Surg ; 31(3): 697-701, 2020.
Article in English | MEDLINE | ID: mdl-32011542

ABSTRACT

The standard for diagnosing metopic craniosynostosis (CS) utilizes computed tomography (CT) imaging and physical exam, but there is no standardized method for determining disease severity. Previous studies using interfrontal angles have evaluated differences in specific skull landmarks; however, these measurements are difficult to readily ascertain in clinical practice and fail to assess the complete skull contour. This pilot project employs machine learning algorithms to combine statistical shape information with expert ratings to generate a novel objective method of measuring the severity of metopic CS. Expert ratings of normal and metopic skull CT images were collected. Skull-shape analysis was conducted using ShapeWorks software. Machine learning was used to combine the expert ratings with our shape analysis model to predict the severity of metopic CS from CT images. Our model was then compared to the gold standard using interfrontal angles. Seventeen metopic skull CT images of patients 5 to 15 months old were each assigned a severity by 18 craniofacial surgeons, and 65 nonaffected controls were included with a severity of 0. Our model accurately correlated the level of skull deformity with severity (P < 0.10) and predicted the severity of metopic CS more often than models using interfrontal angles (χ = 5.46, P = 0.019). This is the first study that combines shape information with expert ratings to generate an objective measure of severity for metopic CS. This method may help clinicians easily quantify severity and perform robust longitudinal assessments of the condition.


Subject(s)
Craniosynostoses/diagnostic imaging , Face/diagnostic imaging , Skull/diagnostic imaging , Craniosynostoses/surgery , Face/surgery , Humans , Infant , Machine Learning , Pilot Projects , Skull/surgery , Tomography, X-Ray Computed
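As a rough illustration of the severity-prediction step (the study's exact model and features are not specified here, so this is a hypothetical setup), low-dimensional shape scores from a ShapeWorks-style model could be regressed against averaged expert ratings. The data, model choice, and dimensions below are assumptions.

```python
# Hypothetical regression of shape scores against expert severity ratings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
shape_scores = rng.normal(size=(82, 10))    # 17 metopic + 65 controls, 10 shape modes
severity = rng.uniform(0, 5, size=82)       # stand-in expert ratings (controls = 0)

model = Ridge(alpha=1.0)
cv_r2 = cross_val_score(model, shape_scores, severity, cv=5, scoring="r2")
print("cross-validated R^2:", cv_r2.mean())
```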
4.
Med Image Comput Comput Assist Interv ; 12264: 627-638, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33778817

ABSTRACT

Statistical shape analysis is a very useful tool in a wide range of medical and biological applications. However, it typically relies on the ability to produce a relatively small number of features that capture the relevant variability in a population. State-of-the-art methods for obtaining such anatomical features rely on either extensive preprocessing or segmentation and/or significant tuning and post-processing. These shortcomings limit the widespread use of shape statistics. We propose that effective shape representations should provide sufficient information to align/register images. Using this assumption, we propose a self-supervised neural network approach for automatically positioning and detecting landmarks in images that can be used for subsequent analysis. The network discovers the landmarks corresponding to anatomical shape features that promote good image registration in the context of a particular class of transformations. In addition, we propose a regularization for the network that encourages a uniform distribution of the discovered landmarks across the image domain. In this paper, we present a complete framework that takes only a set of input images and produces landmarks that are immediately usable for statistical shape analysis. We evaluate the performance on a phantom dataset as well as 2D and 3D images.
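One way to encourage the uniform spread of discovered landmarks mentioned above is a repulsion-style penalty on pairwise landmark distances. The following is a minimal sketch under that assumption; it is not the paper's exact regularizer, and the kernel width and landmark count are invented.

```python
# Illustrative spread/uniformity regularizer for predicted landmarks.
import torch

def spread_regularizer(landmarks, sigma=0.1):
    """landmarks: (K, D) in normalized [-1, 1] coordinates."""
    d2 = torch.cdist(landmarks, landmarks).pow(2)                 # (K, K) squared distances
    off_diag = ~torch.eye(landmarks.shape[0], dtype=torch.bool)
    # Penalize landmarks that sit close together (Gaussian repulsion).
    return torch.exp(-d2[off_diag] / (2 * sigma ** 2)).mean()

lm = torch.rand(16, 2) * 2 - 1          # 16 landmarks in 2D
loss_reg = spread_regularizer(lm)       # added to the registration loss during training
```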

5.
Comput Vis ACCV ; 12625: 643-660, 2020.
Article in English | MEDLINE | ID: mdl-33778815

ABSTRACT

Unsupervised representation learning via generative modeling is a staple of many computer vision applications in the absence of labeled data. Variational Autoencoders (VAEs) are powerful generative models that learn representations useful for data generation. However, due to inherent challenges in the training objective, VAEs fail to learn useful representations amenable to downstream tasks. Regularization-based methods that attempt to improve the representation learning aspect of VAEs come at a price: poor sample generation. In this paper, we explore this representation-generation trade-off for regularized VAEs and introduce a new family of priors, namely decoupled priors, or dpVAEs, that decouple the representation space from the generation space. This decoupling enables the use of VAE regularizers on the representation space without impacting the distribution used for sample generation, thereby reaping the representation-learning benefits of the regularizers without sacrificing sample generation. dpVAE leverages invertible networks to learn a bijective mapping from an arbitrarily complex representation distribution to a simple, tractable, generative distribution. Decoupled priors can be adapted to state-of-the-art VAE regularizers without additional hyperparameter tuning. We showcase the use of dpVAEs with different regularizers. Experiments on MNIST, SVHN, and CelebA demonstrate, quantitatively and qualitatively, that dpVAE fixes sample generation for regularized VAEs.
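A compact sketch of the invertible-mapping idea follows (illustrative, not the dpVAE reference code): an affine-coupling flow maps the VAE representation z to a simple base variable u ~ N(0, I), so regularizers can act on z while samples are drawn in u and mapped back through the inverse. Layer widths and dimensions are assumptions.

```python
# Affine-coupling flow as a decoupled prior between representation and generation spaces.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * (dim - self.half)))

    def forward(self, z):                       # z -> u, with log|det J|
        z1, z2 = z[:, :self.half], z[:, self.half:]
        s, t = self.net(z1).chunk(2, dim=1)
        u2 = z2 * torch.exp(s) + t
        return torch.cat([z1, u2], dim=1), s.sum(dim=1)

    def inverse(self, u):                       # u -> z, used for sampling
        u1, u2 = u[:, :self.half], u[:, self.half:]
        s, t = self.net(u1).chunk(2, dim=1)
        z2 = (u2 - t) * torch.exp(-s)
        return torch.cat([u1, z2], dim=1)

flow = AffineCoupling(dim=8)
z = torch.randn(4, 8)                           # encoder output (representation space)
u, logdet = flow(z)                             # prior density of z via base density + logdet
z_sample = flow.inverse(torch.randn(4, 8))      # map base samples back for generation
```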

6.
Shape Med Imaging (2020) ; 12474: 57-72, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33817703

ABSTRACT

Statistical shape modeling (SSM) has recently taken advantage of advances in deep learning to alleviate the need for a time-consuming and expert-driven workflow of anatomy segmentation, shape registration, and optimization of population-level shape representations. DeepSSM is an end-to-end deep learning approach that extracts statistical shape representations directly from unsegmented images with little manual overhead. It performs comparably with state-of-the-art shape modeling methods for estimating morphologies that are viable for subsequent downstream tasks. Nonetheless, DeepSSM produces an overconfident estimate of shape that cannot be blindly assumed to be accurate. Hence, conveying what DeepSSM does not know, by quantifying granular estimates of uncertainty, is critical for its direct clinical application as an on-demand diagnostic tool and for determining how trustworthy the model output is. Here, we propose Uncertain-DeepSSM as a unified model that quantifies both data-dependent aleatoric uncertainty, by adapting the network to predict intrinsic input variance, and model-dependent epistemic uncertainty, via Monte Carlo dropout sampling that approximates a variational distribution over the network parameters. Experiments show an accuracy improvement over DeepSSM while maintaining the same benefits of being end-to-end with little pre-processing.
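A hedged sketch of the two uncertainty mechanisms described above (not the authors' implementation): a heteroscedastic Gaussian loss for the aleatoric part, where the network predicts a mean and log-variance per output dimension, and Monte Carlo dropout at test time for the epistemic part. The example model, dimensions, and sample count are assumptions.

```python
# Aleatoric (predicted variance) and epistemic (MC dropout) uncertainty, sketched.
import torch
import torch.nn as nn

def heteroscedastic_loss(mean, log_var, target):
    # Network predicts mean and log-variance per shape-descriptor dimension.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

def mc_dropout_predict(model, x, n_samples=30):
    # Keep dropout active at test time and average stochastic forward passes.
    model.train()                                   # enables dropout layers
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)      # prediction and epistemic variance

# Example model with dropout (dimensions are illustrative).
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.2), nn.Linear(64, 16))
mean, var = mc_dropout_predict(model, torch.randn(8, 32))
```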

7.
Article in English | MEDLINE | ID: mdl-32632370

ABSTRACT

Evidence suggests that the shape of the left atrial appendage (LAA) is a primary indicator for predicting stroke in patients diagnosed with atrial fibrillation (AF). Statistical shape modeling tools used to represent (i.e., parameterize) the underlying LAA variability are therefore of crucial importance for learning shape-based predictors of stroke. Most shape modeling techniques use some form of alignment, either as a data pre-processing step or during the modeling step. However, the LAA forms a joint anatomy with the left atrium (LA), and their relative position and alignment play a crucial part in determining stroke risk. In this paper, we explore different alignment strategies for statistical shape modeling and how each strategy affects stroke prediction capability, allowing us to identify a unified alignment approach for analyzing LAA anatomy with respect to stroke. We study three alignment strategies: (i) global alignment, (ii) global translational alignment, and (iii) cluster-based alignment. Our results show that alignment strategies that take into account LAA orientation, i.e., (ii), or the inherent natural clustering of the population under study, i.e., (iii), provide significant improvements over global alignment in both qualitative and quantitative measures.
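To illustrate the contrast between two of these strategies on correspondence point sets, here is a minimal numpy sketch (not the study's pipeline): full rigid alignment via a Procrustes/SVD fit, versus translation-only alignment, which preserves each shape's orientation. Point counts and data are placeholders.

```python
# Rigid (Kabsch/SVD) versus translation-only alignment of correspondence points.
import numpy as np

def translation_align(points, reference):
    return points - points.mean(axis=0) + reference.mean(axis=0)

def rigid_align(points, reference):
    p, r = points - points.mean(axis=0), reference - reference.mean(axis=0)
    u, _, vt = np.linalg.svd(p.T @ r)
    rot = u @ vt
    if np.linalg.det(rot) < 0:                    # avoid reflections
        u[:, -1] *= -1
        rot = u @ vt
    return p @ rot + reference.mean(axis=0)

rng = np.random.default_rng(0)
ref = rng.normal(size=(256, 3))                   # reference correspondence points
shape = rng.normal(size=(256, 3))
aligned_t = translation_align(shape, ref)         # strategy (ii): orientation preserved
aligned_r = rigid_align(shape, ref)               # rotation-inclusive alignment
```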

8.
Article in English | MEDLINE | ID: mdl-32632371

ABSTRACT

Functional measurements of the left atrium (LA) in atrial fibrillation (AF) patients are typically limited to a single CINE slice midway through the LA, whereas a full 3D characterization of atrial function would provide more insight into LA function. This improved modeling capacity, however, comes at the price of requiring LA segmentation at each 3D time point, a time-consuming and expensive task that requires anatomy-specific expertise. We propose an efficient pipeline that requires ground-truth segmentation of only a single (or a limited number of) CINE time point(s) and accurately propagates it throughout the sequence. This method significantly reduces human effort and enables better characterization of LA anatomy. From a gated cardiac CINE MRI sequence, we select a single time point with ground-truth segmentation and, assuming cyclic motion, register the images at all other time points using diffeomorphic registration in ANTs. The resulting diffeomorphic registration fields allow us to map a given anatomical shape (segmentation) to each CINE time point, facilitating the construction of a 4D shape model.
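A hedged sketch of the propagation step using ANTsPy follows; the file names, number of time points, and registration settings are assumptions rather than the authors' parameters. The labeled CINE time point is registered to every other time point with a diffeomorphic (SyN) transform, and the segmentation is carried along with nearest-neighbor interpolation.

```python
# Propagating a single labeled CINE time point through the sequence with ANTsPy.
import ants

labeled_img = ants.image_read("cine_t00.nii.gz")        # time point with ground truth
labeled_seg = ants.image_read("cine_t00_seg.nii.gz")    # LA segmentation at that time point

propagated = {}
for t in range(1, 25):                                   # remaining CINE time points (assumed count)
    target = ants.image_read(f"cine_t{t:02d}.nii.gz")
    reg = ants.registration(fixed=target, moving=labeled_img,
                            type_of_transform="SyN")     # diffeomorphic registration
    propagated[t] = ants.apply_transforms(fixed=target, moving=labeled_seg,
                                          transformlist=reg["fwdtransforms"],
                                          interpolator="nearestNeighbor")
```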

9.
Article in English | MEDLINE | ID: mdl-32632372

ABSTRACT

Left atrial appendage (LAA) closure is performed in atrial fibrillation (AF) patients to help prevent stroke. LAA closure using an occlusion implant is performed under imaging guidance. However, occlusion can be a complicated process due to the highly variable and heterogeneous LAA shapes across patients. Patient-specific implant selection and insertion are key to the success of the procedure, yet remain subjective in nature. A population study of the angle of entry at the interatrial septum relative to the appendage can assist in both catheter design and patient-specific implant choice. In our population study, we analyzed the inherent clusters of the angles between the septum normal and the LAA ostium plane. The number of inherent angle clusters matched the four LAA morphological classifications reported in the literature. Further, our exploratory analysis revealed that the normal from the ostium plane does not intersect the septum in all of the samples under study. The insights gained from this study can assist in making objective decisions during LAA closure.
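An illustrative sketch of the two computations described above (assumed, synthetic inputs, not the study's data): the angle between the septum normal and the ostium-plane normal for each patient, followed by k-means clustering of those angles across the cohort.

```python
# Angle between septum and ostium-plane normals, then clustering across a cohort.
import numpy as np
from sklearn.cluster import KMeans

def angle_deg(septum_normal, ostium_normal):
    a = septum_normal / np.linalg.norm(septum_normal)
    b = ostium_normal / np.linalg.norm(ostium_normal)
    return np.degrees(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

rng = np.random.default_rng(0)
angles = np.array([angle_deg(rng.normal(size=3), rng.normal(size=3))
                   for _ in range(60)]).reshape(-1, 1)    # one angle per (synthetic) patient

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(angles)
print(np.bincount(clusters))       # cluster sizes, to compare against LAA morphology classes
```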

10.
Med Image Comput Comput Assist Interv ; 11765: 391-400, 2019 Oct.
Article in English | MEDLINE | ID: mdl-32803194

ABSTRACT

Spatial transformations are enablers in a variety of medical image analysis applications that entail aligning images to a common coordinate system. Population analysis of such transformations is expected to capture the underlying image and shape variations, and hence these transformations are required to produce anatomically feasible correspondences. This is usually enforced through some smoothness-based generic metric or regularization of the deformation field. Alternatively, population-based regularization has been shown to produce anatomically accurate correspondences in cases where anatomically unaware (i.e., data-independent) regularization fails. Recently, deep networks have been used to generate spatial transformations in an unsupervised manner, and, once trained, these networks are computationally faster than and as accurate as conventional, optimization-based registration methods. However, the deformation fields produced by these networks require smoothness penalties, just as conventional registration methods do, and ignore population-level statistics of the transformations. Here, we propose a novel neural network architecture that simultaneously learns and uses population-level statistics of the spatial transformations to regularize the network for unsupervised image registration. This regularization takes the form of a bottleneck autoencoder, which learns and adapts to the population of transformations required to align input images by encoding the transformations onto a low-dimensional manifold. The proposed architecture produces deformation fields that describe population-level features and associated correspondences in an anatomically relevant manner and are statistically compact relative to state-of-the-art approaches, while maintaining computational efficiency. We demonstrate the efficacy of the proposed architecture on synthetic datasets as well as 2D and 3D medical data.
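A hedged PyTorch sketch of the regularization idea follows (not the paper's architecture): the deformation field predicted by a registration network is passed through a bottleneck autoencoder, and the reconstruction error acts as a population-aware penalty alongside the image-similarity loss. The autoencoder design, field size, and weight are assumptions.

```python
# Bottleneck-autoencoder penalty on predicted deformation fields, sketched.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FieldAutoencoder(nn.Module):
    def __init__(self, field_size, bottleneck=32):
        super().__init__()
        self.enc = nn.Linear(field_size, bottleneck)
        self.dec = nn.Linear(bottleneck, field_size)

    def forward(self, field):
        flat = field.flatten(1)
        return self.dec(torch.relu(self.enc(flat))).view_as(field)

def regularized_loss(warped, fixed, field, ae, weight=0.1):
    similarity = F.mse_loss(warped, fixed)
    population_penalty = F.mse_loss(ae(field), field)     # distance to the learned manifold
    return similarity + weight * population_penalty

# Illustrative 2D example: a (B, 2, H, W) displacement field on 32x32 images.
ae = FieldAutoencoder(field_size=2 * 32 * 32)
field = torch.randn(4, 2, 32, 32)
loss = regularized_loss(torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32), field, ae)
```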

11.
Shape Med Imaging (2018) ; 11167: 244-257, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30805572

ABSTRACT

Statistical shape modeling is an important tool for characterizing variation in anatomical morphology. Typical shapes of interest are measured using 3D imaging and a subsequent pipeline of registration, segmentation, and extraction of shape features or projection onto a lower-dimensional shape space, which facilitates subsequent statistical analysis. Many methods for constructing compact shape representations have been proposed, but they are often impractical due to the sequence of image preprocessing operations, which involve significant parameter tuning, manual delineation, and/or quality control by the users. We propose DeepSSM: a deep learning approach to extract a low-dimensional shape representation directly from 3D images, requiring virtually no parameter tuning or user assistance. DeepSSM uses a convolutional neural network (CNN) that simultaneously localizes the biological structure of interest, establishes correspondences, and projects these points onto a low-dimensional shape representation in the form of PCA loadings within a point distribution model. To overcome the limited availability of training images with dense correspondences, we present a novel data augmentation procedure that uses the existing correspondences and shape statistics of a relatively small set of processed images to create plausible training samples with known shape parameters. In this way, we leverage a limited number of CT/MRI scans (40-50) to generate the thousands of images needed to train a deep neural network. After training, the CNN automatically produces accurate low-dimensional shape representations for unseen images. We validate DeepSSM on three applications: pediatric cranial CT for characterizing metopic craniosynostosis, femur CT scans for identifying morphologic deformities of the hip due to femoroacetabular impingement, and left atrium MRI scans for predicting atrial fibrillation recurrence.
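The model-based augmentation idea can be sketched as follows (a minimal numpy example; shape counts, point counts, and mode counts are illustrative assumptions, not the paper's settings): new correspondence point sets are sampled from the PCA shape statistics of the existing, fully processed cohort; each sample would then yield a paired training image, e.g., by warping an original scan, which is omitted here.

```python
# PCA-based sampling of plausible correspondence point sets for augmentation.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 45, 1024
points = rng.normal(size=(n_shapes, n_points * 3))        # stand-in correspondence sets

mean = points.mean(axis=0)
centered = points - mean
_, s, vt = np.linalg.svd(centered, full_matrices=False)
stdevs = s / np.sqrt(n_shapes - 1)                        # per-mode standard deviations

k = 10                                                    # retained PCA modes
sampled_loadings = rng.normal(size=(1000, k)) * stdevs[:k]
augmented = mean + sampled_loadings @ vt[:k]              # 1000 plausible new shapes
augmented = augmented.reshape(1000, n_points, 3)
```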
