Results 1 - 20 of 36
1.
Article in English | MEDLINE | ID: mdl-38753477

ABSTRACT

Scene Graph Generation (SGG) has achieved significant progress recently. However, most previous works rely heavily on fixed-size entity representations based on bounding box proposals, anchors, or learnable queries. As each representation's cardinality has different trade-offs between performance and computation overhead, extracting highly representative features efficiently and dynamically is both challenging and crucial for SGG. In this work, a novel architecture called RepSGG is proposed to address the aforementioned challenges, formulating a subject as queries, an object as keys, and their relationship as the maximum attention weight between pairwise queries and keys. With more fine-grained and flexible representation power for entities and relationships, RepSGG learns to sample semantically discriminative and representative points for relationship inference. Moreover, the long-tailed distribution also poses a significant challenge to the generalization of SGG. A run-time performance-guided logit adjustment (PGLA) strategy is proposed such that the relationship logits are modified via affine transformations based on run-time performance during training. This strategy encourages a more balanced performance between dominant and rare classes. Experimental results show that RepSGG achieves state-of-the-art or comparable performance on the Visual Genome and Open Images V6 datasets with fast inference speed, demonstrating the efficacy and efficiency of the proposed methods.
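The performance-guided adjustment described above can be pictured as a small affine correction applied to the class logits during training. The sketch below is only an illustration of that idea; the use of per-class recall as the run-time signal and the particular shift schedule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def adjust_logits(logits, class_recall, strength=1.0):
    """Affine-adjust relationship logits using run-time per-class recall.

    Classes that are currently under-performing (low recall) get their
    logits boosted; dominant classes are suppressed. The schedule here
    is an illustrative assumption, not the paper's formulation.
    """
    recall = np.asarray(class_recall, dtype=float)
    # Shift each class by how far it sits below the mean recall
    # (negative shift for classes above the mean).
    shift = strength * (recall.mean() - recall)
    return logits + shift

# Toy example: three relationship classes, the last one is rare.
logits = np.array([[2.0, 1.0, 0.5]])
recall = [0.9, 0.8, 0.1]          # run-time performance estimates
adjusted = adjust_logits(logits, recall)
```

The rare class (low recall) is pushed up while the dominant classes are pulled down, which is the balancing behavior the abstract describes.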

2.
Sci Rep ; 14(1): 4489, 2024 02 24.
Article in English | MEDLINE | ID: mdl-38396157

ABSTRACT

Many critical issues arise when training deep neural networks using limited biological datasets. These include overfitting, exploding/vanishing gradients, and other inefficiencies which are exacerbated by class imbalances and can affect the overall accuracy of a model. There is a need to develop semi-supervised models that can reduce the need for large, balanced, manually annotated datasets so that researchers can easily employ neural networks for experimental analysis. In this work, Iterative Pseudo Balancing (IPB) is introduced to classify stem cell microscopy images while performing on-the-fly dataset balancing using a student-teacher meta-pseudo-label framework. In addition, multi-scale patches of multi-label images are incorporated into the network training to provide previously inaccessible image features with both local and global information for effective and efficient learning. The combination of these inputs is shown to increase the classification accuracy of the proposed deep neural network by 3[Formula: see text] over baseline, which is determined to be statistically significant. This work represents a novel use of pseudo-labeling for data-limited settings, which are common in biological image datasets, and highlights the importance of the exhaustive use of available image features for improving the performance of semi-supervised networks. The proposed methods can be used to reduce the need for expensive manual dataset annotation and, in turn, accelerate the pace of scientific research involving non-invasive cellular imaging.
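The on-the-fly balancing step can be sketched as per-class oversampling of the pseudo-labeled pool before each training round. This is a deliberately simplified sketch: the actual framework couples the resampling with a student-teacher meta-pseudo-label loop, which is omitted here.

```python
import random
from collections import defaultdict

def pseudo_balance(samples, labels, seed=0):
    """Oversample pseudo-labeled classes so every class contributes
    equally to the next training round (simplified sketch of iterative
    dataset balancing; the student-teacher loop is omitted)."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    # Grow every class to the size of the largest one.
    target = max(len(v) for v in by_class.values())
    balanced = []
    for y, items in by_class.items():
        picks = items + [rng.choice(items) for _ in range(target - len(items))]
        balanced.extend((s, y) for s in picks)
    return balanced

# Toy pseudo-labeled pool: class "A" dominates class "B" 4:1.
pool = ["a1", "a2", "a3", "a4", "b1"]
pseudo = ["A", "A", "A", "A", "B"]
balanced = pseudo_balance(pool, pseudo)
```

After balancing, both classes contribute the same number of samples to the next round, which is the property the abstract relies on to counter class imbalance.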


Subject(s)
Learning , Microscopy , Humans , Neural Networks, Computer , Product Labeling , Stem Cells , Image Processing, Computer-Assisted , Supervised Machine Learning
3.
IEEE/ACM Trans Comput Biol Bioinform ; 20(3): 2314-2327, 2023.
Article in English | MEDLINE | ID: mdl-37027755

ABSTRACT

Cellular microscopy imaging is a common form of data acquisition for biological experimentation. Observation of gray-level morphological features allows for the inference of useful biological information such as cellular health and growth status. Cellular colonies can contain multiple cell types, making colony-level classification very difficult. Additionally, cell types growing in a hierarchical, downstream fashion can often look visually similar, although biologically distinct. In this paper, it is determined empirically that traditional deep Convolutional Neural Networks (CNN) and classical object recognition techniques are not sufficient to distinguish between these subtle visual differences, resulting in misclassifications. Instead, Triplet-net CNN learning is employed in a hierarchical classification scheme to improve the ability of the model to discern distinct, fine-grained features of two commonly confused morphological image-patch classes, namely Dense and Spread colonies. The Triplet-net method improves classification accuracy over a four-class deep neural network by ∼3%, a value that was determined to be statistically significant, and also outperforms existing state-of-the-art image patch classification approaches and standard template matching. These findings allow for the accurate classification of multi-class cell colonies with contiguous boundaries, and increased reliability and efficiency of automated, high-throughput experimental quantification using non-invasive microscopy.
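Triplet-net learning optimizes a margin-based loss over (anchor, positive, negative) embedding triples, pulling same-class patches together and pushing confused classes apart. A minimal numpy sketch, with toy 2-D embeddings standing in for the CNN features of Dense and Spread colony patches (the margin and the embeddings are illustrative assumptions):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: the anchor must be closer to the
    positive (same colony class) than to the negative (the commonly
    confused class) by at least `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings standing in for CNN features of image patches.
anchor   = np.array([1.0, 0.0])   # a Dense-colony patch
positive = np.array([1.1, 0.1])   # another Dense patch
negative = np.array([1.0, 0.5])   # a Spread patch sitting too close
loss = triplet_loss(anchor, positive, negative)
```

Because the Spread embedding violates the margin here, the loss is positive; training on many such triples is what sharpens the fine-grained separation the abstract describes.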


Subject(s)
Microscopy , Neural Networks, Computer , Reproducibility of Results , Stem Cells
4.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9503-9520, 2022 12.
Article in English | MEDLINE | ID: mdl-34748482

ABSTRACT

Deep learning models have been shown to be vulnerable to adversarial attacks. Adversarial attacks are imperceptible perturbations added to an image such that the deep learning model misclassifies the image with a high confidence. Existing adversarial defenses validate their performance using only the classification accuracy. However, classification accuracy by itself is not a reliable metric to determine if the resulting image is "adversarial-free". This is a foundational problem for online image recognition applications where the ground-truth of the incoming image is not known and hence we cannot compute the accuracy of the classifier or validate if the image is "adversarial-free" or not. This paper proposes a novel privacy preserving framework for defending Black box classifiers from adversarial attacks using an ensemble of iterative adversarial image purifiers whose performance is continuously validated in a loop using Bayesian uncertainties. The proposed approach can convert a single-step black box adversarial defense into an iterative defense and proposes three novel privacy preserving Knowledge Distillation (KD) approaches that use prior meta-information from various datasets to mimic the performance of the Black box classifier. Additionally, this paper proves the existence of an optimal distribution for the purified images that can reach a theoretical lower bound, beyond which the image can no longer be purified. Experimental results on six public benchmark datasets namely: 1) Fashion-MNIST, 2) CIFAR-10, 3) GTSRB, 4) MIO-TCD, 5) Tiny-ImageNet, and 6) MS-Celeb show that the proposed approach can consistently detect adversarial examples and purify or reject them against a variety of adversarial attacks.


Subject(s)
Neural Networks, Computer , Privacy , Bayes Theorem , Algorithms
5.
MethodsX ; 8: 101265, 2021.
Article in English | MEDLINE | ID: mdl-34434787

ABSTRACT

Traditional methods of quantifying osteoblast calcification in culture require the use of calcium-sensitive dyes, such as Arsenazo III or Alizarin Red S, which have been successfully used for decades to assess osteogenesis. Because these dyes elicit a colorimetric change when reacted with a cell lysate and are cytotoxic to live cells, they forfeit the ability to trace calcification longitudinally over time. Here, we demonstrate that image analysis and quantification of calcification can be performed from a series of time-lapse images acquired from videos. This method capitalizes on the unique facet of the mineralized extracellular matrix to appear black when viewed with phase contrast optics. This appearance of calcified areas had been previously documented to be characteristic of the formation of bone nodules in vitro. Due to this distinguishable appearance, extracting the information corresponding to calcification through segmentation allowed us to threshold only the pixels that comprise the mineralized areas in the image. Ultimately, this method can be used to quantify calcification yield, rates, and kinetics, facilitating the analyses of bone-supportive properties of growth factors and morphogens as well as of adverse effects elicited by toxicants. It may also be used on images that were acquired manually.
•The method is less error-prone than absorption-based assays since it takes longitudinal measurements from the same cultures.
•It is cost-effective as it foregoes the use of calcium-sensitive dyes.
•It is automatable and amenable to high-throughput use, and thus allows the concurrent quantification of multiple parameters of differentiation.
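Because mineralized matrix appears essentially black under phase contrast, the per-frame quantification reduces to counting pixels below a gray-level threshold. A minimal sketch of that segmentation step; the threshold value and the synthetic frame are illustrative assumptions, not the paper's calibrated settings.

```python
import numpy as np

def calcified_fraction(image, threshold=30):
    """Fraction of a phase-contrast frame occupied by calcification.

    Mineralized extracellular matrix appears (near-)black under phase
    contrast, so pixels below a gray-level threshold are counted as
    calcified. The threshold here is an illustrative assumption.
    """
    calcified = image < threshold
    return calcified.sum() / image.size

# Synthetic 4x4 gray-level frame with a dark "mineralized" 2x2 patch.
frame = np.full((4, 4), 200, dtype=np.uint8)
frame[1:3, 1:3] = 10
fraction = calcified_fraction(frame)   # 4 of 16 pixels are calcified
```

Applying this to each frame of a time-lapse video yields the longitudinal calcification curves from which yield, rate, and kinetics can be read off.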

6.
J Biomed Opt ; 26(5)2021 04.
Article in English | MEDLINE | ID: mdl-33928769

ABSTRACT

SIGNIFICANCE: Automated understanding of human embryonic stem cell (hESC) videos is essential for the quantified analysis and classification of various states of hESCs and their health for diverse applications in regenerative medicine. AIM: This paper aims to develop an ensemble method and bagging of deep learning classifiers as a model for hESC classification on a video dataset collected using a phase contrast microscope. APPROACH: The paper describes a deep learning-based random network (RandNet) with an autoencoded feature extractor for the classification of hESCs into six different classes, namely, (1) cell clusters, (2) debris, (3) unattached cells, (4) attached cells, (5) dynamically blebbing cells, and (6) apoptotically blebbing cells. The approach uses unlabeled data to pre-train the autoencoder network and fine-tunes it using the available annotated data. RESULTS: The proposed approach achieves a classification accuracy of 97.23 ± 0.94 % and outperforms the state-of-the-art methods. Additionally, the approach has a very low training cost compared with the other deep-learning-based approaches, and it can be used as a tool for annotating new videos, saving enormous hours of manual labor. CONCLUSIONS: RandNet is an efficient and effective method that uses a combination of subnetworks trained using both labeled and unlabeled data to classify hESC images.


Subject(s)
Human Embryonic Stem Cells , Humans , Neural Networks, Computer
7.
Sensors (Basel) ; 21(3)2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33535456

ABSTRACT

In this paper, a transmission-guided lightweight neural network called TGL-Net is proposed for efficient image dehazing. Unlike most current dehazing methods that produce simulated transmission maps from depth data and haze-free images, in the proposed work, guided transmission maps are computed automatically using a filter-refined dark-channel-prior (F-DCP) method from real-world hazy images as a regularizer, which facilitates network training not only on synthetic data, but also on natural images. A double-error loss function that combines the errors of a transmission map with the errors of a dehazed image is used to guide network training. The method provides a feasible solution for introducing priors obtained from traditional non-learning-based image processing techniques as a guide for training deep neural networks. Extensive experimental results demonstrate that, in terms of several reference and non-reference evaluation criteria for real-world images, the proposed method can achieve state-of-the-art performance with a much smaller network size and with significant improvements in efficiency resulting from the training guidance.
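As a rough illustration of the dark-channel-prior step that produces the guiding transmission map, t = 1 - ω · dark(I/A): the guided-filter refinement that makes it F-DCP is omitted in this sketch, and the patch size, ω, and the uniform toy image are all assumptions.

```python
import numpy as np

def dark_channel(image, patch=3):
    """Per-pixel minimum over the color channels and a local patch."""
    mins = image.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(image, atmosphere, omega=0.95, patch=3):
    """Dark-channel-prior transmission estimate t = 1 - w * dark(I/A).
    The F-DCP method additionally refines this map with a guided
    filter; that refinement step is omitted here."""
    normed = image / atmosphere
    return 1.0 - omega * dark_channel(normed, patch)

# Toy 4x4 RGB "hazy" image of uniform intensity 0.8, airlight A = 1.0.
hazy = np.full((4, 4, 3), 0.8)
t = transmission(hazy, atmosphere=1.0)
```

In the paper's training scheme, a refined version of this map computed from real hazy images acts as the regularizing target for the network's transmission branch.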

8.
Sensors (Basel) ; 22(1)2021 Dec 29.
Article in English | MEDLINE | ID: mdl-35009749

ABSTRACT

Frequently, neural network training involving biological images suffers from a lack of data, resulting in inefficient network learning. This issue stems from limitations in terms of time, resources, and difficulty in cellular experimentation and data collection. For example, when performing experimental analysis, it may be necessary for the researcher to use most of their data for testing, as opposed to model training. Therefore, the goal of this paper is to perform dataset augmentation using generative adversarial networks (GAN) to increase the classification accuracy of deep convolutional neural networks (CNN) trained on induced pluripotent stem cell microscopy images. The main challenges are: 1. modeling complex data using GAN and 2. training neural networks on augmented datasets that contain generated data. To address these challenges, a temporally constrained, hierarchical classification scheme that exploits domain knowledge is employed for model learning. First, image patches of cell colonies from gray-scale microscopy images are generated using GAN, and then these images are added to the real dataset and used to address class imbalances at multiple stages of training. Overall, a 2% increase in both true positive rate and F1-score is observed using this method as compared to a straightforward, imbalanced classification network, with some greater improvements on a classwise basis. This work demonstrates that synergistic model design involving domain knowledge is key for biological image analysis and improves model learning in high-throughput scenarios.


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Stem Cells
9.
Math Biosci Eng ; 16(6): 6858-6873, 2019 07 29.
Article in English | MEDLINE | ID: mdl-31698592

ABSTRACT

The PM2.5 air quality index (AQI) measurements from government-built supersites are accurate but cannot provide dense coverage of monitoring areas. Low-cost PM2.5 sensors can be used to deploy a fine-grained internet-of-things (IoT) network as a complement to government facilities. Calibration of low-cost sensors by reference to high-accuracy supersites is thus essential. Moreover, the imputation method used for missing values in the training data may affect the calibration result, the best performance of a calibration model requires hyperparameter optimization, and the factors affecting PM2.5 concentrations, such as climate, geographical landscape, and anthropogenic activity, are uncertain in both spatial and temporal dimensions. In this paper, an ensemble learning approach for imputation-method selection, calibration-model hyperparameterization, and spatiotemporal training-data composition is proposed. Three government supersites in central Taiwan are chosen for the deployment of low-cost sensors, and hourly PM2.5 measurements are collected for 60 days to conduct the experiments. Three optimizers, the Sobol sequence, Nelder-Mead, and particle swarm optimization (PSO), are compared to evaluate their performance with various versions of the ensembles. The best calibration results are obtained using PSO, with improvement ratios of 4.92%, 52.96%, and 56.85% with respect to R2, RMSE, and NME, respectively.
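A minimal particle swarm optimizer of the kind compared above, shown fitting a toy affine sensor calibration y = a·x + b against reference readings. The swarm size, inertia, and acceleration constants are common textbook choices, not the paper's settings, and the calibration data are synthetic.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, seed=0):
    """Minimal particle swarm optimization: each particle keeps a
    personal best, the swarm shares a global best, and velocities mix
    inertia with cognitive and social pulls."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(objective(gbest))

# Toy calibration: low-cost sensor reading x vs. supersite reference y.
x = np.array([1.0, 2.0, 3.0, 4.0])
y_ref = 1.2 * x + 3.0
mse = lambda p: float(np.mean((p[0] * x + p[1] - y_ref) ** 2))
best, best_err = pso(mse, dim=2)
```

In the paper this kind of search runs over calibration-model hyperparameters and imputation choices rather than over the affine coefficients directly.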

10.
PLoS One ; 14(3): e0212849, 2019.
Article in English | MEDLINE | ID: mdl-30840685

ABSTRACT

Human embryonic stem cells (hESC), derived from blastocysts, provide unique cellular models for numerous potential applications. They hold great promise in the treatment of diseases such as Parkinson's, Huntington's, and diabetes mellitus. hESC are a reliable developmental model for early embryonic growth because of their ability to divide indefinitely (pluripotency) and differentiate, or functionally change, into any adult cell type. Their adaptation to toxicological studies is particularly attractive as pluripotent stem cells can be used to model various stages of prenatal development. Automated detection and classification of human embryonic stem cells in videos is of great interest among biologists for quantified analysis of various states of hESC in experimental work. Currently, video annotation is done by hand, a process which is very time-consuming and exhaustive. To solve this problem, this paper introduces DeephESC 2.0, an automated machine learning approach consisting of two parts: (a) Generative Multi Adversarial Networks (GMAN) for generating synthetic images of hESC, and (b) a hierarchical classification system consisting of Convolutional Neural Networks (CNN) and Triplet CNNs to classify phase contrast hESC images into six different classes, namely: Cell clusters, Debris, Unattached cells, Attached cells, Dynamically Blebbing cells, and Apoptotically Blebbing cells. The approach is totally non-invasive and does not require any chemical treatment or staining of hESC. DeephESC 2.0 is able to classify hESC images with an accuracy of 93.23%, outperforming state-of-the-art approaches by at least 20%. Furthermore, DeephESC 2.0 is able to generate a large number of synthetic images which can be used for augmenting the dataset. Experimental results show that training DeephESC 2.0 exclusively on a large amount of synthetic images helps to improve the performance of the classifier on original images from 93.23% to 94.46%. This paper also evaluates the quality of the generated synthetic images using the Structural SIMilarity (SSIM) index, Peak Signal-to-Noise Ratio (PSNR), and statistical p-value metrics, and compares them with state-of-the-art approaches for generating synthetic images. DeephESC 2.0 saves hundreds of hours of manual labor which would otherwise be spent on manually or semi-manually annotating more and more videos.
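The two image-quality metrics named above can be sketched directly. PSNR follows its standard definition; SSIM is shown here with a single global window, whereas the standard index averages local windows, so this is a simplification rather than the paper's exact evaluation.

```python
import numpy as np

def psnr(original, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images."""
    mse = np.mean((original.astype(float) - generated.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(x, y, max_val=255.0):
    """Single-window SSIM (the standard index averages this quantity
    over local windows; a global window keeps the sketch short)."""
    x, y = x.astype(float), y.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# A synthetic gradient "real" image, a perfect copy, and a shifted copy.
real = np.tile(np.arange(0, 256, 16, dtype=np.uint8), (16, 1))
identical = real.copy()
noisy = np.clip(real.astype(int) + 8, 0, 255).astype(np.uint8)
```

An identical pair gives SSIM 1 and infinite PSNR; a uniformly shifted copy gives a finite PSNR, which is the kind of separation used to rank generated images against real ones.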


Subject(s)
Human Embryonic Stem Cells/classification , Image Processing, Computer-Assisted/methods , Machine Learning , Video Recording , Cells, Cultured , Humans , Intravital Microscopy , Neural Networks, Computer , Signal-To-Noise Ratio
11.
Toxicol Appl Pharmacol ; 363: 111-121, 2019 01 15.
Article in English | MEDLINE | ID: mdl-30468815

ABSTRACT

Epidemiological studies suggest tobacco consumption as a probable environmental factor for a variety of congenital anomalies, including low bone mass and increased fracture risk. Despite intensive public health initiatives to publicize the detrimental effects of tobacco use during pregnancy, approximately 10-20% of women in the United States still consume tobacco during pregnancy, some opting for so-called harm-reduction tobacco. These include Snus, a type of orally-consumed yet spit-free chewing tobacco, which is purported to expose users to fewer harmful chemicals. Concerns remain from a developmental health perspective since Snus has not reduced overall health risk to consumers and virtually nothing is known about whether skeletal problems from intrauterine exposure arise in the embryo. Utilizing a newly developed video-based calcification assay we determined that extracts from Snus tobacco hindered calcification of osteoblasts derived from pluripotent stem cells early on in their differentiation. Nicotine, a major component of tobacco products, had no measurable effect in the tested concentration range. However, through the extraction of video data, we determined that the tobacco-specific nitrosamine N'-nitrosonornicotine caused a reduction in calcification with similar kinetics as the complete Snus extract. From measurements of actual nitrosamine concentrations in Snus tobacco extract we furthermore conclude that N'-nitrosonornicotine has the potential to be a major trigger of developmental osteotoxicity caused by Snus tobacco.


Subject(s)
Calcification, Physiologic/drug effects , Human Embryonic Stem Cells/drug effects , Nitrosamines/toxicity , Osteogenesis/drug effects , Tobacco, Smokeless/toxicity , Cell Line , Human Embryonic Stem Cells/physiology , Humans , Intravital Microscopy , Musculoskeletal Abnormalities/chemically induced , Musculoskeletal Abnormalities/prevention & control , Osteoblasts/drug effects , Osteoblasts/physiology , Plant Extracts/chemistry , Plant Extracts/toxicity , Time-Lapse Imaging , Nicotiana/chemistry , Nicotiana/toxicity , United States
12.
Sci Rep ; 8(1): 16354, 2018 11 05.
Article in English | MEDLINE | ID: mdl-30397207

ABSTRACT

There is a critical need for better analytical methods to study mitochondria in normal and diseased states. Mitochondrial image analysis is typically done on still images using slow manual methods or automated methods limited to a few types of features. MitoMo integrated software overcomes these bottlenecks by automating rapid, unbiased quantitative analysis of mitochondrial morphology, texture, motion, and morphogenesis, and advances machine-learning classification to predict cell health by combining features. Our pixel-based approach for motion analysis evaluates the magnitude and direction of motion of: (1) molecules within mitochondria, (2) individual mitochondria, and (3) distinct morphological classes of mitochondria. MitoMo allows analysis of mitochondrial morphogenesis in time-lapse videos to study the early progression of cellular stress. Biological applications are presented, including: (1) establishing normal phenotypes of mitochondria in different cell types; (2) quantifying stress-induced mitochondrial hyperfusion in cells treated with an environmental toxicant; (3) tracking morphogenesis in mitochondria undergoing swelling; and (4) evaluating early changes in cell health when morphological abnormalities are not apparent. MitoMo unlocks new information on mitochondrial phenotypes and dynamics by enabling deep analysis of mitochondrial features in any cell type and can be applied to a broad spectrum of research problems in cell biology, drug testing, toxicology, and medicine.


Subject(s)
Computational Biology/methods , Machine Learning , Mitochondria/metabolism , A549 Cells , Humans , Mitochondria/drug effects , Movement/drug effects , Phenotype , Selenium/pharmacology , Stress, Physiological , Supervised Machine Learning
13.
PLoS One ; 12(8): e0182958, 2017.
Article in English | MEDLINE | ID: mdl-28827828

ABSTRACT

Cofilin and other actin-regulating proteins are essential in regulating the shape of dendritic spines, which are sites of neuronal communication in the brain, and their malfunctions are implicated in neurodegeneration related to aging. The analysis of cofilin motility in dendritic spines using fluorescence video-microscopy may allow for the discovery of its effects on synaptic functions. To date, the flow of cofilin has not been analyzed by automatic means. This paper presents Dendrite Protein Analysis (DendritePA), a novel automated pattern recognition software to analyze protein trafficking in neurons. Using spatiotemporal information present in multichannel fluorescence videos, DendritePA generates a temporal maximum intensity projection that enhances the signal-to-noise ratio of important biological structures, segments and tracks dendritic spines, estimates the density of proteins in spines, and analyzes the flux of proteins through the dendrite/spine boundary. The motion of a dendritic spine is used to generate spine energy images, which are used to automatically classify the shape of common dendritic spines such as stubby, mushroom, or thin. By tracking dendritic spines over time and using their intensity profiles, the system can analyze the flux patterns of cofilin and other fluorescently stained proteins. The cofilin flux patterns are found to correlate with the dynamic changes in dendritic spine shapes. Our results also show that the activation of cofilin through genetic manipulation leads to immature spines, while its inhibition results in an increase in mature spines.


Subject(s)
Automation , Dendritic Spines/metabolism , Nerve Tissue Proteins/metabolism , Animals , Cells, Cultured , Mice , Software
14.
IEEE Trans Image Process ; 25(5): 1993-2004, 2016 May.
Article in English | MEDLINE | ID: mdl-26960226

ABSTRACT

The growth of pollen tubes is of significant interest in plant cell biology, as it provides an understanding of internal cell dynamics that affect observable structural characteristics such as cell diameter, length, and growth rate. However, these parameters can only be measured in experimental videos if the complete shape of the cell is known. The challenge is to accurately obtain the cell boundary in noisy video images. Usually, these measurements are performed by a scientist who manually draws regions of interest on the images displayed on a computer screen. In this paper, a new automated technique is presented for boundary detection by fusing fluorescence and brightfield images, and a new efficient method of obtaining the final cell boundary through the process of Seam Carving is proposed. This approach takes advantage of the nature of the fusion process and also the shape of the pollen tube to efficiently search for the optimal cell boundary. In video segmentation, the first two frames are used to initialize the segmentation process by creating a search space based on a parametric model of the cell shape. Updates to the search space are performed based on the location of past segmentations and a prediction of the next segmentation. Experimental results show comparable accuracy to a previous method, but a significant decrease in processing time. This has the potential for real-time applications in pollen tube microscopy.


Subject(s)
Image Processing, Computer-Assisted/methods , Pollen Tube/anatomy & histology , Pollen Tube/growth & development , Video Recording/methods , Algorithms , Microscopy, Fluorescence , Models, Biological
15.
IEEE Trans Pattern Anal Mach Intell ; 38(4): 785-99, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26959678

ABSTRACT

Describing visual image contents by semantic concepts is an effective and straightforward way to facilitate various high level applications. Inferring semantic concepts from low-level pictorial feature analysis is challenging due to the semantic gap problem, while manually labeling concepts is unwise because of a large number of images in both online and offline collections. In this paper, we present a novel approach to automatically generate intermediate image descriptors by exploiting concept co-occurrence patterns in the pre-labeled training set that renders it possible to depict complex scene images semantically. Our work is motivated by the fact that multiple concepts that frequently co-occur across images form patterns which could provide contextual cues for individual concept inference. We discover the co-occurrence patterns as hierarchical communities by graph modularity maximization in a network with nodes and edges representing concepts and co-occurrence relationships separately. A random walk process working on the inferred concept probabilities with the discovered co-occurrence patterns is applied to acquire the refined concept signature representation. Through experiments in automatic image annotation and semantic image retrieval on several challenging datasets, we demonstrate the effectiveness of the proposed concept co-occurrence patterns as well as the concept signature representation in comparison with state-of-the-art approaches.

16.
PLoS One ; 11(2): e0148642, 2016.
Article in English | MEDLINE | ID: mdl-26848582

ABSTRACT

There is a foundational need for quality control tools in stem cell laboratories engaged in basic research, regenerative therapies, and toxicological studies. These tools require automated methods for evaluating cell processes and quality during in vitro passaging, expansion, maintenance, and differentiation. In this paper, an unbiased, automated high-content profiling toolkit, StemCellQC, is presented that non-invasively extracts information on cell quality and cellular processes from time-lapse phase-contrast videos. Twenty four (24) morphological and dynamic features were analyzed in healthy, unhealthy, and dying human embryonic stem cell (hESC) colonies to identify those features that were affected in each group. Multiple features differed in the healthy versus unhealthy/dying groups, and these features were linked to growth, motility, and death. Biomarkers were discovered that predicted cell processes before they were detectable by manual observation. StemCellQC distinguished healthy and unhealthy/dying hESC colonies with 96% accuracy by non-invasively measuring and tracking dynamic and morphological features over 48 hours. Changes in cellular processes can be monitored by StemCellQC and predictions can be made about the quality of pluripotent stem cell colonies. This toolkit reduced the time and resources required to track multiple pluripotent stem cell colonies and eliminated handling errors and false classifications due to human bias. StemCellQC provided both user-specified and classifier-determined analysis in cases where the affected features are not intuitive or anticipated. Video analysis algorithms allowed assessment of biological phenomena using automatic detection analysis, which can aid facilities where maintaining stem cell quality and/or monitoring changes in cellular processes are essential. In the future StemCellQC can be expanded to include other features, cell types, treatments, and differentiating cells.


Subject(s)
Biomarkers , Computational Biology/methods , Pluripotent Stem Cells/cytology , Pluripotent Stem Cells/physiology , Video Recording , Cell Culture Techniques , Data Mining/methods , Embryonic Stem Cells , Humans , Software
17.
Article in English | MEDLINE | ID: mdl-26394438

ABSTRACT

Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated, fast, and accurate approach for bleb sequence extraction.


Subject(s)
Computational Biology/methods , Human Embryonic Stem Cells/cytology , Human Embryonic Stem Cells/pathology , Image Processing, Computer-Assisted/methods , Microscopy, Video/methods , Algorithms , Humans
18.
Appl Opt ; 54(11): 3372-82, 2015 Apr 10.
Article in English | MEDLINE | ID: mdl-25967326

ABSTRACT

We propose a context guided belief propagation (BP) algorithm to perform high spatial resolution multispectral imagery (HSRMI) classification efficiently utilizing superpixel representation. One important characteristic of HSRMI is that different land cover objects possess a similar spectral property. This property is exploited to speed up the standard BP (SBP) in the classification process. Specifically, we leverage this property of HSRMI as context information to guide messages passing in SBP. Furthermore, the spectral and structural features extracted at the superpixel level are fed into a Markov random field framework to address the challenge of low interclass variation in HSRMI classification by minimizing the discrete energy through context guided BP (CBP). Experiments show that the proposed CBP is significantly faster than the SBP while retaining similar performance as compared with SBP. Compared to the baseline methods, higher classification accuracy is achieved by the proposed CBP when the context information is used with both spectral and structural features.

19.
IEEE Trans Biomed Eng ; 62(1): 145-53, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25073162

ABSTRACT

Mild traumatic brain injury (mTBI) appears as low-contrast lesions in magnetic resonance (MR) imaging. Standard automated detection approaches cannot detect the subtle changes caused by the lesions. The use of context has become integral to the detection of low-contrast objects in images. Context is any information that can be used for object detection but is not directly due to the physical appearance of an object in an image. In this paper, new low-level static and dynamic context features are proposed and integrated into a discriminative voxel-level classifier to improve the detection of mTBI lesions. Visual features, including multiple texture measures, are used to give an initial estimate of a lesion. From the initial estimate, novel proximity and directional distance contextual features are calculated and used as features for another classifier. These features take advantage of the spatial information given by the initial lesion estimate using only the visual features. Dynamic context is captured by the proposed posterior marginal edge distance context feature, which measures the distance from a hard estimate of the lesion at a previous time point. The approach is validated on a temporal mTBI rat model dataset and shown to have improved Dice score and convergence compared to other state-of-the-art approaches. An analysis of feature importance and of the versatility of the approach on other datasets is also provided.


Subject(s)
Algorithms , Brain Injuries/pathology , Brain/pathology , Image Interpretation, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Pattern Recognition, Automated/methods , Animals , Image Enhancement/methods , Imaging, Three-Dimensional/methods , Rats , Rats, Sprague-Dawley , Reproducibility of Results , Sensitivity and Specificity
20.
Med Image Anal ; 18(7): 1059-69, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25000294

ABSTRACT

We compared the efficacy of three automated brain injury detection methods, namely symmetry-integrated region growing (SIRG), hierarchical region splitting (HRS), and modified watershed segmentation (MWS), in human and animal magnetic resonance imaging (MRI) datasets for the detection of hypoxic ischemic injuries (HIIs). Diffusion-weighted imaging (DWI, 1.5T) data from neonatal arterial ischemic stroke (AIS) patients, as well as T2-weighted imaging (T2WI, 11.7T, 4.7T) at seven different time points (1, 4, 7, 10, 17, 24, and 31 days post-HII) in a rat-pup model of hypoxic ischemic injury, were used to assess the temporal efficacy of our computational approaches. Sensitivity, specificity, and similarity were used as performance metrics, based on manual ('gold standard') injury detection, to quantify the comparisons. When compared to the manual gold standard, automated injury location results from SIRG performed the best in 62% of the data, versus 29% for HRS and 9% for MWS. Injury severity detection revealed that SIRG performed the best in 67% of cases, versus 33% for HRS. Prior information is required by HRS and MWS, but not by SIRG. However, SIRG is sensitive to parameter tuning, while HRS and MWS are not. Among these methods, SIRG performs the best in detecting lesion volumes; HRS is the most robust, while MWS lags behind in both respects.
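The three performance metrics used in the comparison reduce to voxel-wise counts against the manual gold-standard mask. In this sketch, "similarity" is taken to be the Dice coefficient, which is a common choice but an assumption about the paper's exact definition.

```python
import numpy as np

def detection_metrics(pred, truth):
    """Voxel-wise sensitivity, specificity, and similarity (Dice) of an
    automated injury mask against the manual gold standard."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = (pred & truth).sum()     # injured voxels correctly detected
    tn = (~pred & ~truth).sum()   # healthy voxels correctly rejected
    fp = (pred & ~truth).sum()    # false detections
    fn = (~pred & truth).sum()    # missed injury
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    similarity = 2 * tp / (2 * tp + fp + fn)   # Dice coefficient
    return sensitivity, specificity, similarity

# Toy 8-voxel masks: one missed injured voxel, one false detection.
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)
pred  = np.array([1, 1, 1, 0, 1, 0, 0, 0], dtype=bool)
sens, spec, sim = detection_metrics(pred, truth)
```

Computing these per time point against the manual masks is what yields the percentage-of-cases comparisons reported in the abstract.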


Subject(s)
Hypoxia-Ischemia, Brain/pathology , Image Interpretation, Computer-Assisted/methods , Infant, Newborn, Diseases/pathology , Magnetic Resonance Imaging/methods , Algorithms , Animals , Animals, Newborn , Humans , Infant, Newborn , Rats , Reproducibility of Results , Sensitivity and Specificity