Results 1 - 9 of 9
1.
Med Image Anal ; 87: 102792, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37054649

ABSTRACT

Supervised deep learning-based methods yield accurate results for medical image segmentation. However, they require large labeled datasets, which are laborious to obtain and demand clinical expertise. Semi-/self-supervised learning-based approaches address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global-level representations from unlabeled images and achieve high performance in classification tasks on popular natural image datasets such as ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local-level representations along with global representations to achieve better accuracy. However, existing local contrastive loss-based methods remain of limited benefit for learning good local representations, because similar and dissimilar local regions are defined based on random augmentations and spatial proximity rather than on the semantic labels of local regions, owing to the lack of large-scale expert annotations in the semi-/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel-level features useful for segmentation, by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images with ground-truth (GT) labels. In particular, the proposed contrastive loss encourages similar representations for pixels that have the same pseudo-label/GT label, while making them dissimilar to the representations of pixels with different pseudo-labels/GT labels across the dataset. We perform pseudo-label-based self-training and train the network by jointly optimizing the proposed contrastive loss on both the labeled and unlabeled sets and the segmentation loss on only the limited labeled set.
We evaluate the proposed approach on three public medical datasets of cardiac and prostate anatomies, and obtain high segmentation performance with a limited labeled set of one or two 3D volumes. Extensive comparisons with state-of-the-art semi-supervised and data-augmentation methods, as well as concurrent contrastive learning methods, demonstrate the substantial improvement achieved by the proposed method. The code is publicly available at https://github.com/krishnabits001/pseudo_label_contrastive_training.
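The core idea above — pull together pixels sharing a pseudo-label or GT label, push apart pixels with different labels — can be sketched as a supervised contrastive loss over pixel embeddings. This is a dense, single-image simplification under assumed names; it is not the paper's code:

```python
import numpy as np

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over pixel embeddings: pixels sharing a
    (pseudo- or ground-truth) label are pulled together, pixels with a
    different label are pushed apart. Illustrative simplification only."""
    # L2-normalise so dot products are cosine similarities
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature
    n = len(labels)
    off_diag = ~np.eye(n, dtype=bool)
    # positives: other pixels with the same (pseudo-)label
    positives = (labels[:, None] == labels[None, :]) & off_diag
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    denom = (np.exp(logits) * off_diag).sum(axis=1, keepdims=True)
    log_prob = logits - np.log(denom)
    # mean log-probability of each anchor's positives, averaged over anchors
    per_anchor = (log_prob * positives).sum(1) / np.maximum(positives.sum(1), 1)
    return -per_anchor.mean()
```

In the paper this loss is optimized jointly with a segmentation loss; here the labels array would hold pseudo-labels for unlabeled pixels and GT labels for annotated ones.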


Subject(s)
Heart , Pelvis , Male , Humans , Prostate , Semantics , Supervised Machine Learning , Image Processing, Computer-Assisted
2.
Sci Rep ; 12(1): 12405, 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35859092

ABSTRACT

Live fluorescence imaging has demonstrated the dynamic nature of dendritic spines, whose shapes change both during development and in response to activity. The structure of a dendritic spine correlates with its functional efficacy. Learning and memory studies have shown that a great deal of the information stored by a neuron is contained in its synapses. High-precision tracking of synaptic structures can give hints about the dynamic nature of memory and help us understand how memories evolve in both biological and artificial neural networks. Experiments that aim to investigate the dynamics behind the structural changes of dendritic spines require the collection and analysis of large time-series datasets. In this paper, we present an open-source software package called SpineS for automatic longitudinal structural analysis of dendritic spines, with additional features for manual intervention to ensure optimal analysis. We have tested the algorithm on in-vitro, in-vivo, and simulated datasets to demonstrate its performance across a wide range of possible experimental scenarios.


Subject(s)
Dendritic Spines , Software , Algorithms , Dendritic Spines/physiology , Synapses/physiology , Time Factors
3.
Med Image Anal ; 68: 101934, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33385699

ABSTRACT

Supervised learning-based segmentation methods typically require a large number of annotated training examples to generalize well at test time. In medical applications, curating such datasets is not a favourable option because acquiring a large number of annotated samples from experts is time-consuming and expensive. Consequently, numerous methods have been proposed in the literature for learning with limited annotated examples. Unfortunately, the approaches proposed so far have not yielded significant gains over random data augmentation for image segmentation, where random augmentations themselves do not yield high accuracy. In this work, we propose a novel task-driven data augmentation method for learning with limited labeled data, where the synthetic data generator is optimized for the segmentation task. The generator of the proposed method models intensity and shape variations using two sets of transformations: additive intensity transformations and deformation fields. Both transformations are optimized using labeled as well as unlabeled examples in a semi-supervised framework. Our experiments on three medical datasets, namely cardiac, prostate and pancreas, show that the proposed approach significantly outperforms standard augmentation and semi-supervised approaches for image segmentation in the limited-annotation setting. The code is made publicly available at https://github.com/krishnabits001/task_driven_data_augmentation.
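The two transformation families named above compose as follows: first an additive intensity field perturbs the image, then a deformation field resamples it. A minimal sketch, with the learned generators replaced by plain arrays (all names here are illustrative, not the paper's API):

```python
import numpy as np

def apply_augmentation(image, intensity_delta, flow):
    """One task-driven augmentation step: an additive intensity
    transformation followed by a spatial deformation, sampled here with
    nearest-neighbour interpolation for brevity. In the paper both fields
    come from generators optimized for the segmentation task.

    intensity_delta : additive intensity field, same shape as image
    flow            : deformation field, shape (2, H, W) of pixel offsets
    """
    h, w = image.shape
    intensified = image + intensity_delta          # intensity transformation
    ys, xs = np.mgrid[0:h, 0:w]
    # pull-back warp: each output pixel reads from (y + flow_y, x + flow_x)
    src_y = np.clip(np.round(ys + flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[1]).astype(int), 0, w - 1)
    return intensified[src_y, src_x]               # deformation field
```

The same deformation field would be applied to the segmentation mask (without the intensity term) so that image and labels stay aligned.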


Subject(s)
Prostate , Supervised Machine Learning , Humans , Male
4.
Med Image Anal ; 68: 101907, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33341496

ABSTRACT

Convolutional Neural Networks (CNNs) work very well for supervised learning problems when the training dataset is representative of the variations expected at test time. In medical image segmentation, this premise is violated when there is a mismatch between training and test images in terms of their acquisition details, such as the scanner model or the protocol. Remarkable performance degradation of CNNs in this scenario is well documented in the literature. To address this problem, we design the segmentation CNN as a concatenation of two sub-networks: a relatively shallow image normalization CNN, followed by a deep CNN that segments the normalized image. We train both sub-networks on a training dataset consisting of annotated images from a particular scanner and protocol setting. At test time, we adapt the image normalization sub-network for each test image, guided by an implicit prior on the predicted segmentation labels. We employ an independently trained denoising autoencoder (DAE) to model this implicit prior on plausible anatomical segmentation labels. We validate the proposed idea on multi-center Magnetic Resonance imaging datasets of three anatomies: brain, heart and prostate. The proposed test-time adaptation consistently provides performance improvement, demonstrating the promise and generality of the approach. Since the proposed design is agnostic to the architecture of the deep CNN (the second sub-network), it can be combined with any segmentation network to increase robustness to variations in imaging scanners and protocols. Our code is available at: https://github.com/neerakara/test-time-adaptable-neural-networks-for-domain-generalization.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Brain/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Prostate
5.
IEEE Trans Image Process ; 28(11): 5702-5715, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31217112

ABSTRACT

Segmenting images of low quality or with missing data is a challenging problem. In such scenarios, exploiting statistical prior information about the shapes to be segmented can improve the segmentation results significantly. Incorporating a prior density of shapes into a Bayesian framework leads to the posterior density of segmenting shapes given the observed data. Most segmentation algorithms that exploit shape priors optimize a cost function based on the posterior density and find a point estimate (e.g., using maximum a posteriori estimation). However, especially when the prior shape density is multimodal, leading to a multimodal posterior density, a point estimate neither provides a measure of the degree of confidence in that result nor a picture of other probable solutions based on the observed data and the shape priors. From a statistical viewpoint, addressing these issues involves characterizing the posterior distributions of the shapes of the objects to be segmented. Analytic computation of such posterior distributions is intractable; however, characterization is still possible through their samples. In this paper, we propose an efficient pseudo-marginal Markov chain Monte Carlo (MCMC) sampling approach to draw samples from posterior shape distributions for image segmentation. The computation time of the proposed approach is independent of the training set size; therefore, it scales well to very large datasets. In addition to better characterizing the statistical structure of the problem, such an approach has the potential to alleviate the local-optima issues suffered by existing shape-based segmentation methods.
Our approach is able to characterize the posterior probability density in the space of shapes through its samples, and to return multiple solutions, potentially from different modes of a multimodal probability density, which would be encountered, e.g., in segmenting objects from multiple shape classes. We present promising results on a variety of synthetic and real data sets.
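The pseudo-marginal trick referred to above is that Metropolis-Hastings remains exact when the intractable target density is replaced by a noisy but unbiased estimate, e.g. a Parzen prior evaluated on a random subsample of the training shapes, so each step costs O(subsample) rather than O(training set). A 1-D sketch with illustrative names (scalar "shape parameters" stand in for shapes):

```python
import numpy as np

def pseudo_marginal_mh(log_density_estimate, init, n_steps, step=0.5, seed=1):
    """Pseudo-marginal Metropolis-Hastings: each proposal is scored with a
    noisy unbiased estimate of the (unnormalised) target density, and the
    estimate accepted for the current state is re-used, which keeps the
    exact target as the chain's stationary distribution."""
    rng = np.random.default_rng(seed)
    x = float(init)
    log_p = log_density_estimate(x, rng)
    samples = np.empty(n_steps)
    for t in range(n_steps):
        prop = x + step * rng.standard_normal()
        log_p_prop = log_density_estimate(prop, rng)
        if np.log(rng.uniform()) < log_p_prop - log_p:
            x, log_p = prop, log_p_prop      # accept and keep the estimate
        samples[t] = x
    return samples

# Unbiased Parzen estimate from a random subsample of "training shapes";
# the full training set is never summed over at any step.
_rng0 = np.random.default_rng(0)
train = _rng0.standard_normal(2000)

def log_parzen_subsample(x, rng, m=50, h=0.3):
    idx = rng.integers(0, len(train), m)
    kernels = np.exp(-0.5 * ((x - train[idx]) / h) ** 2)
    return np.log(kernels.mean() + 1e-300)
```

With a multimodal training set, repeated runs of such a chain visit several modes, which is exactly the multi-solution behaviour described in the abstract.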

6.
Neuroscience ; 394: 189-205, 2018 Dec 1.
Article in English | MEDLINE | ID: mdl-30347279

ABSTRACT

Detecting morphological changes of dendritic spines in time-lapse microscopy images and correlating them with functional properties such as memory and learning are fundamental and challenging problems in neurobiology research. In this paper, we propose an algorithm for dendritic spine detection in time series. The proposed approach initially performs spine detection at each time point and then improves the accuracy by exploiting the information obtained from tracking individual spines over time. First, to detect dendritic spines in a single time-point image, we employ an SVM classifier trained on pre-labeled SIFT feature descriptors, in combination with a dot-enhancement filter. Second, to track the growth or loss of spines, we apply a SIFT-based rigid registration method to align the time-series images. This step takes into account both the structure and the movement of objects, combined with a robust dynamic scheme to link information about spines that disappear and reappear over time. Next, we improve spine detection by employing a probabilistic dynamic programming approach to search for an optimal solution that recovers missed spines. Finally, we localize spines more precisely using a watershed-geodesic active contour model. We quantitatively assess the performance of the proposed spine detection algorithm against annotations performed by biologists and compare it with the results obtained by the noncommercial software NeuronIQ. Experiments show that our approach can accurately detect and quantify spines in two-photon microscopy time-lapse data and can accurately identify spine elimination and formation.
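The linking step — associating detections across registered frames while tolerating spines that disappear and reappear — can be sketched with greedy nearest-neighbour gating. This is a stand-in for the paper's robust dynamic linking scheme, with illustrative names throughout:

```python
import numpy as np

def link_spines(tracks, detections, frame, max_dist=5.0):
    """Greedy nearest-neighbour linking of spine detections across frames.
    `tracks` maps track-id -> list of (frame, position). A track whose
    spine is undetected in a frame is simply not extended, so it can be
    resumed (the spine 'reappearing') when a later detection falls within
    the distance gate; leftover detections start new tracks."""
    unmatched = list(range(len(detections)))
    for tid, hist in tracks.items():
        if not unmatched:
            break
        last = np.asarray(hist[-1][1])
        dists = [np.linalg.norm(np.asarray(detections[j]) - last)
                 for j in unmatched]
        j_best = int(np.argmin(dists))
        if dists[j_best] <= max_dist:            # gate: plausible match only
            hist.append((frame, detections[unmatched[j_best]]))
            unmatched.pop(j_best)
    for j in unmatched:                          # newly formed spines
        tracks[max(tracks, default=-1) + 1] = [(frame, detections[j])]
    return tracks
```

In the actual pipeline the positions would come from the SVM detector after rigid registration, and the dynamic programming stage would then revisit frames where a track has no detection.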


Subject(s)
Dendritic Spines/physiology , Image Enhancement/methods , Microscopy/methods , Algorithms , Animals , Hippocampus/cytology , Mice , Pattern Recognition, Automated , Support Vector Machine
7.
IEEE Trans Med Imaging ; 37(1): 293-305, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28961107

ABSTRACT

The use of appearance and shape priors in image segmentation is known to improve accuracy; however, existing techniques have several drawbacks. For instance, most active shape and appearance models require landmark points and assume unimodal shape and appearance distributions, and the level set representation does not support construction of local priors. In this paper, we present novel appearance and shape models for image segmentation based on a differentiable implicit parametric shape representation called a disjunctive normal shape model (DNSM). The DNSM is formed by the disjunction of polytopes, which themselves are formed by the conjunctions of half-spaces. The DNSM's parametric nature allows the use of powerful local prior statistics, and its implicit nature removes the need to use landmarks and easily handles topological changes. In a Bayesian inference framework, we model arbitrary shape and appearance distributions using nonparametric density estimations, at any local scale. The proposed local shape prior results in accurate segmentation even when very few training shapes are available, because the method generates a rich set of shape variations by locally combining training samples. We demonstrate the performance of the framework by applying it to both 2-D and 3-D data sets with emphasis on biomedical image segmentation applications.
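The shape representation described above — a disjunction of polytopes, each a conjunction of half-spaces, smoothed so it is differentiable — can be written down directly. A minimal sketch under assumed names, not the paper's implementation:

```python
import numpy as np

def dnsm(points, polytopes):
    """Disjunctive normal shape model: the shape's characteristic function
    is a union (disjunction) of convex polytopes, each the intersection
    (conjunction) of half-spaces w.x + b > 0. Sigmoids smooth the
    half-space indicators, making the representation differentiable.
    `polytopes` is a list of (W, b) pairs whose rows define half-spaces."""
    sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
    prod_not_inside = np.ones(len(points))
    for W, b in polytopes:
        # conjunction of half-spaces: product of smoothed memberships
        conj = np.prod(sigmoid(points @ W.T + b), axis=1)
        prod_not_inside *= 1.0 - conj
    return 1.0 - prod_not_inside        # disjunction via De Morgan's law
```

With large weights the sigmoids sharpen and the function approaches the indicator of the union of polytopes, while remaining differentiable in the (W, b) parameters, which is what removes the need for landmarks and lets topology change freely.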


Subject(s)
Algorithms , Bayes Theorem , Image Processing, Computer-Assisted/methods , Brain/diagnostic imaging , Humans , Magnetic Resonance Imaging , Male , Models, Theoretical , Prostate/diagnostic imaging , Spine/diagnostic imaging , Walking/physiology
8.
IEEE Trans Image Process ; 26(11): 5312-5323, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28727552

ABSTRACT

In many image segmentation problems involving limited and low-quality data, employing statistical prior information about the shapes of the objects to be segmented can significantly improve the segmentation result. However, defining probability densities in the space of shapes is an open and challenging problem, especially if the object to be segmented comes from a shape density involving multiple modes (classes). Existing techniques in the literature estimate the underlying shape distribution by extending the Parzen density estimator to the space of shapes. In these methods, the evolving curve may converge to a shape from a wrong mode of the posterior density when the observed intensities provide very little information about the object boundaries. In such scenarios, employing both shape- and class-dependent discriminative feature priors can aid the segmentation process. Such features may involve, e.g., intensity-based, textural, or geometric information about the objects to be segmented. In this paper, we propose a segmentation algorithm that uses nonparametric joint shape and feature priors constructed by Parzen density estimation. We incorporate the learned joint shape and feature prior distribution into a maximum a posteriori estimation framework for segmentation. The resulting optimization problem is solved using active contours. We present experimental results on a variety of synthetic and real datasets from several fields involving multimodal shape densities. Experimental results demonstrate the potential of the proposed method.
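The Parzen construction mentioned above — a density over shapes built as an average of kernels centred on training shapes — can be sketched with an L2 template distance standing in for a true shape metric (the constant normaliser is dropped; names and distance are assumptions, not the paper's code):

```python
import numpy as np

def parzen_shape_density(shape, training_shapes, h=0.5):
    """Parzen (kernel) density estimate extended to shapes: the density at
    a candidate shape is the average of Gaussian kernels centred on the
    training shapes. Shapes here are aligned binary templates, and the
    kernel uses their squared L2 distance; returned up to normalisation."""
    n = len(training_shapes)
    diffs = (training_shapes - shape).reshape(n, -1)
    d2 = np.sum(diffs ** 2, axis=1)        # squared template distances
    return np.mean(np.exp(-d2 / (2.0 * h ** 2)))
```

With a multimodal training set (several shape classes) this estimate is high near each class and low in between, which is precisely the regime where an evolving curve can converge to the wrong mode unless additional feature priors constrain it.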

9.
Bioinformatics ; 26(20): 2645-6, 2010 Oct 15.
Article in English | MEDLINE | ID: mdl-20736341

ABSTRACT

MOTIVATION: Clustering methods including k-means, SOM, UPGMA, DAA, CLICK, GENECLUSTER, CAST, DHC, PMETIS and KMETIS have been widely used in biological studies of gene expression, protein localization, sequence recognition and more. All of these clustering methods have benefits and drawbacks. We propose COMUSA, novel graph-based clustering software that combines a collection of clusterings into a final clustering of better overall quality.
RESULTS: The COMUSA implementation is compared with PMETIS, KMETIS and k-means. Experimental results on artificial, real and biological datasets demonstrate the effectiveness of our method. COMUSA produces high-quality clusters in a short amount of time.
AVAILABILITY: http://www.cs.umb.edu/∼smimarog/comusa
CONTACT: selim.mimaroglu@bahcesehir.edu.tr
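The general idea of combining several clusterings into one of better quality can be sketched with the classic evidence-accumulation recipe: build a co-association matrix, threshold it into a graph, and take connected components as the consensus clusters. This illustrates the generic approach, not COMUSA's exact graph algorithm:

```python
import numpy as np

def combine_clusterings(labelings, threshold=0.5):
    """Consensus clustering via evidence accumulation. `labelings` has
    shape (n_clusterings, n_items); the co-association matrix holds the
    fraction of input clusterings that place two items in the same
    cluster, and connected components of its thresholded graph form the
    final clustering."""
    labelings = np.asarray(labelings)
    n = labelings.shape[1]
    co = np.mean(labelings[:, :, None] == labelings[:, None, :], axis=0)
    adj = co >= threshold
    # union-find over the thresholded co-association graph
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                parent[find(j)] = find(i)
    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

Items that most input clusterings agree on end up in the same consensus cluster, while disagreements below the threshold split them apart.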


Subject(s)
Cluster Analysis , Gene Expression Profiling/methods , Algorithms , Proteins/analysis