1.
Mach Vis Appl ; 34, 2023.
Article in English | MEDLINE | ID: mdl-38586579

ABSTRACT

Accurate and timely identification of regions damaged by a natural disaster is critical for assessing the damage and reducing the loss of human life. The increasing availability of satellite imagery and other remote sensing data has spurred research on algorithms for detecting and monitoring natural events. Here, we introduce an unsupervised, subspace-learning-based methodology that uses multi-temporal and multi-spectral satellite images to identify regions damaged by natural disasters. It first performs region delineation, matching, and fusion. Next, it applies subspace learning in the joint regional space to produce a change map. It identifies the damaged regions by estimating probabilistic subspace distances and rejecting non-disaster changes. We evaluated the performance of our method on seven disaster datasets comprising four wildfire events, two flooding events, and an earthquake/tsunami event. We validated our results by calculating the Dice similarity coefficient (DSC) and classification accuracy between our disaster maps and ground-truth data. Our method produced average DSC values of 0.833 and 0.736 for wildfires and floods, respectively, and an overall DSC of 0.855 for the tsunami event. The evaluation results support the applicability of our method to multiple types of natural disasters.
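To make the change-scoring step concrete, here is a minimal sketch under simplifying assumptions: it fits low-rank spectral subspaces per region with SVD and scores change by the largest principal angle between them, a simple stand-in for the paper's probabilistic subspace distance. The function names and the rank parameter are illustrative.

```python
# Sketch: subspace change score via principal angles (not the paper's exact
# probabilistic distance). Pixel arrays are (n_pixels, n_bands) per region.
import numpy as np

def spectral_subspace(pixels, rank=2):
    """Orthonormal basis of the leading spectral directions of one region."""
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:rank].T                                   # (n_bands, rank)

def subspace_change_score(pre_pixels, post_pixels, rank=2):
    """Largest principal angle (radians) between pre/post spectral subspaces."""
    u, v = spectral_subspace(pre_pixels, rank), spectral_subspace(post_pixels, rank)
    cosines = np.linalg.svd(u.T @ v, compute_uv=False)   # cosines of principal angles
    return float(np.arccos(np.clip(cosines.min(), -1.0, 1.0)))

# toy regions with 6 spectral bands: same underlying subspace vs. a different one
rng = np.random.default_rng(0)
A, B = rng.normal(size=(2, 6)), rng.normal(size=(2, 6))
same = subspace_change_score(rng.normal(size=(200, 2)) @ A, rng.normal(size=(200, 2)) @ A)
diff = subspace_change_score(rng.normal(size=(200, 2)) @ A, rng.normal(size=(200, 2)) @ B)
print(f"unchanged region: {same:.2f}  changed region: {diff:.2f}")
```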

2.
Adv Vis Comput ; 12510: 728-741, 2020 Oct.
Article in English | MEDLINE | ID: mdl-34859246

ABSTRACT

In this work, we propose a layer to retarget feature maps in Convolutional Neural Networks (CNNs). Our "Retarget" layer densely samples values for each feature-map channel at locations inferred by our proposed spatial attention regressor. The layer builds on an existing saliency-based distortion layer by replacing its convolutional components with depthwise convolutions. This reformulation, together with tuning of its hyper-parameters, makes the Retarget layer applicable at any depth of feed-forward CNNs. In keeping with the spirit of Content-Aware Image Resizing retargeting methods, we introduce our layer at the bottlenecks of three pre-trained CNNs. We validate our approach on the ImageCLEF2013, ImageCLEF2015, and ImageCLEF2016 document subfigure classification tasks. Our redesigned DenseNet121 model with the Retarget layer achieved state-of-the-art results in the visual category when no data augmentation was performed. Performing spatial sampling for each channel of the feature maps at deeper layers greatly increases computational cost and memory requirements. To address this, we experiment with an approximation based on nearest-neighbor interpolation and show consistent improvement over the baseline models and other state-of-the-art attention models. The code is available at https://github.com/VimsLab/CNN-Retarget.
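The following is a hedged PyTorch sketch of a retargeting layer, not the released CNN-Retarget code: a depthwise convolution produces an attention map, a 1x1 head regresses per-pixel offsets (a simplification of the per-channel sampling described above), and grid_sample densely re-samples the features. The class name RetargetSketch and the max_shift parameter are illustrative assumptions.

```python
# Hedged sketch of a retargeting layer (illustrative, not the released code):
# depthwise attention -> per-pixel offsets -> dense re-sampling with grid_sample.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RetargetSketch(nn.Module):
    def __init__(self, channels, max_shift=0.1):
        super().__init__()
        # depthwise convolution: one 3x3 filter per channel (groups=channels)
        self.depthwise = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.offset_head = nn.Conv2d(channels, 2, 1)      # per-pixel (dx, dy)
        self.max_shift = max_shift                        # fraction of normalized coords

    def forward(self, x):
        n, _, h, w = x.shape
        attention = torch.sigmoid(self.depthwise(x))
        offsets = torch.tanh(self.offset_head(attention)) * self.max_shift
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, h, device=x.device),
            torch.linspace(-1, 1, w, device=x.device),
            indexing="ij",
        )
        grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
        grid = grid + offsets.permute(0, 2, 3, 1)         # shifted sampling locations
        return F.grid_sample(x, grid, mode="bilinear", align_corners=True)

x = torch.randn(2, 64, 32, 32)
print(RetargetSketch(64)(x).shape)                        # torch.Size([2, 64, 32, 32])
```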

3.
Proc Int Conf Image Proc ; 2020: 2506-2510, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33758579

ABSTRACT

The actin filament plays a fundamental role in numerous cellular processes such as cell growth, proliferation, migration, division, and locomotion. The actin cytoskeleton is highly dynamic and can polymerize and depolymerize within a very short time under different stimuli. To study the mechanics of actin filaments, quantifying the length and number of filaments in each time frame of microscopic images is fundamental. In this paper, we first adopt a Convolutional Neural Network (CNN) to segment actin filaments, and then utilize a modified ResNet to detect junctions and endpoints of filaments. With the binary segmentation and detected keypoints, we apply a fast marching algorithm to obtain the number and length of each actin filament in microscopic images. We have also collected a dataset of 10 microscopic images of actin filaments to test our method. Our experiments show that our approach outperforms other existing approaches to this problem in both accuracy and inference time.
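As a rough illustration of the length-measurement step, the sketch below labels a binary filament mask and approximates each filament's length with a breadth-first geodesic traversal, a simple stand-in for the fast marching computation; the CNN segmentation and keypoint detection are assumed to have already produced the mask, and the function name is hypothetical.

```python
# Sketch: approximate filament lengths from a binary mask with a breadth-first
# geodesic traversal (a stand-in for the paper's fast marching step).
import numpy as np
from collections import deque
from scipy import ndimage as ndi

def filament_lengths(binary_mask, pixel_size=1.0):
    """Label connected filaments and return a rough length for each."""
    labels, n = ndi.label(binary_mask)
    lengths = {}
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        start = (int(ys[0]), int(xs[0]))
        dist, queue = {start: 0.0}, deque([start])
        while queue:                                   # BFS over 8-connected pixels
            y, x = queue.popleft()
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if (dy or dx) and (ny, nx) not in dist \
                            and 0 <= ny < labels.shape[0] and 0 <= nx < labels.shape[1] \
                            and labels[ny, nx] == i:
                        dist[(ny, nx)] = dist[(y, x)] + np.hypot(dy, dx)
                        queue.append((ny, nx))
        lengths[i] = max(dist.values()) * pixel_size   # farthest geodesic distance
    return lengths

mask = np.zeros((20, 20), dtype=bool)
mask[5, 2:18] = True                                   # a 16-pixel horizontal filament
print(filament_lengths(mask))                          # {1: 15.0}
```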

4.
Turk J Urol ; 45(5): 357-365, 2019 09.
Article in English | MEDLINE | ID: mdl-31509508

ABSTRACT

OBJECTIVE: Increased computational power and improved visualization hardware have generated more opportunities for virtual reality (VR) applications in healthcare. In this study, we test the feasibility of a VR-assisted surgical navigation system for robotic-assisted radical prostatectomy. MATERIAL AND METHODS: The prostate, all magnetic resonance imaging (MRI)-visible tumors, and important anatomic structures such as the neurovascular bundles, seminal vesicles, bladder, and rectum were contoured on a multiparametric MRI using in-house segmentation software. Three-dimensional (3-D) VR models were rendered and evaluated in a side room of the operating room. While interacting with the VR platform, a real-time stereo video capture of the in situ prostate was obtained to render a second 3-D model. The MRI-based model was then overlaid on the real-time model using an automated alignment algorithm. RESULTS: Ten patients were included in this study. All MRI-based VR models were examined by surgeons immediately prior to surgery and at important steps where visualization of the tumors and their proximity to surrounding anatomic structures was critical, mainly during preparation of the prostatic pedicles, the neurovascular plexus, the apex, and the bladder neck. All participants found the system useful, especially for tumors with locally aggressive growth patterns. For small and centrally located tumors, the system was not considered beneficial because it was not integrated into the robotic console. A fully integrated system with real-time overlays within the robotic stereo viewer was considered the ideal scenario. CONCLUSION: We deployed a preliminary VR-assisted surgical navigation tool for robotic-assisted radical prostatectomies.
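The abstract does not detail the automated alignment algorithm; as a generic illustration of rigid model-to-video alignment, the sketch below aligns corresponding MRI-model points to points reconstructed from the stereo video using the Kabsch/Procrustes solution. Point names and counts are hypothetical.

```python
# Generic rigid alignment (Kabsch/Procrustes) of corresponding 3-D point sets.
import numpy as np

def rigid_align(source, target):
    """Rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    h = (source - src_c).T @ (target - tgt_c)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))             # avoid reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    return r, tgt_c - r @ src_c

# toy usage: recover a known rotation and translation
rng = np.random.default_rng(1)
model_pts = rng.normal(size=(50, 3))                   # points from the MRI model
angle = np.deg2rad(30)
true_r = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
live_pts = model_pts @ true_r.T + np.array([5.0, -2.0, 1.0])
r, t = rigid_align(model_pts, live_pts)
print(np.allclose(r, true_r), np.round(t, 3))          # True [ 5. -2.  1.]
```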

5.
Article in English | MEDLINE | ID: mdl-33859868

ABSTRACT

Filamentous structures play an important role in biological systems. Extracting individual filaments is fundamental for analyzing and quantifying related biological processes. However, segmenting filamentous structures at the instance level is hampered by their complex architecture, uniform appearance, and limited image quality. In this paper, we introduce an orientation-aware neural network that contains six orientation-associated branches. Each branch detects filaments within a specific range of orientations, thus separating them at junctions and turning intersections into overpasses. A terminus pairing algorithm is also proposed to regroup filaments from different branches and achieve extraction of individual filaments. We create a synthetic dataset to train our network, and annotate real full-resolution microscopy images of microtubules to test our approach. Our experiments show that our proposed method outperforms most existing approaches for filament extraction. We also show that our approach works on other similar structures, using a road network dataset.
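A toy sketch of the two ideas, under simplifying assumptions: segments are binned into six orientation ranges, and fragment termini are regrouped by greedily pairing nearby, roughly collinear endpoints. The thresholds (max_dist, max_angle) and function names are illustrative, not the paper's.

```python
# Toy sketch: orientation binning of segments and greedy terminus pairing.
import numpy as np

def orientation_bin(p0, p1, n_bins=6):
    """Bin index of a segment's direction, with angles folded into [0, 180)."""
    dy, dx = p1[1] - p0[1], p1[0] - p0[0]
    angle = np.degrees(np.arctan2(dy, dx)) % 180.0
    return int(angle // (180.0 / n_bins))

def pair_termini(termini, directions, max_dist=10.0, max_angle=20.0):
    """Greedily pair (x, y) termini whose outgoing directions are nearly opposite."""
    pairs, used = [], set()
    for i in range(len(termini)):
        for j in range(i + 1, len(termini)):
            if i in used or j in used:
                continue
            gap = np.linalg.norm(np.subtract(termini[i], termini[j]))
            # two fragments continue each other when their directions differ by ~180 deg
            turn = abs(180.0 - abs(directions[i] - directions[j]) % 360.0)
            if gap <= max_dist and turn <= max_angle:
                pairs.append((i, j))
                used.update((i, j))
    return pairs

print(orientation_bin((0, 0), (1, 1)))                  # 45 degrees -> bin 1
print(pair_termini([(0, 0), (3, 0)], [0.0, 180.0]))     # [(0, 1)]
```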

6.
Comput Biol Med ; 99: 53-62, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29886261

ABSTRACT

Detecting and classifying cardiac arrhythmias is critical to the diagnosis of patients with cardiac abnormalities. In this paper, a novel approach based on deep learning methodology is proposed for the classification of single-lead electrocardiogram (ECG) signals. We demonstrate the application of the Restricted Boltzmann Machine (RBM) and deep belief networks (DBN) for ECG classification following detection of ventricular and supraventricular heartbeats using single-lead ECG. The effectiveness of the proposed algorithm is illustrated using real ECG signals from the widely used MIT-BIH database. Simulation results demonstrate that, with a suitable choice of parameters, RBM and DBN can achieve high average recognition accuracies for ventricular ectopic beats (93.63%) and supraventricular ectopic beats (95.57%) at a low sampling rate of 114 Hz. Experimental results indicate that classifiers built within this deep learning-based framework achieved state-of-the-art performance at lower sampling rates and with simpler features than traditional methods. Further, features extracted at a sampling rate of 114 Hz, combined with deep learning, provided sufficient discriminatory power for the classification task, yielding performance comparable to that of traditional methods despite the much lower sampling rate and simpler features. Thus, our proposed deep neural network algorithm demonstrates that deep learning-based methods offer accurate ECG classification and could potentially be extended to other physiological signal classifications, such as those in arterial blood pressure (ABP), electromyography (EMG), and heart rate variability (HRV) studies.
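For readers unfamiliar with the building block, the snippet below trains a toy Bernoulli RBM with one-step contrastive divergence (CD-1) on synthetic binary vectors; it is a sketch of the RBM component only, not the paper's DBN pipeline or its ECG preprocessing, and all sizes and learning rates are illustrative.

```python
# Illustrative Bernoulli RBM trained with CD-1 on toy binary vectors.
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden, lr = 20, 8, 0.1
W = 0.01 * rng.normal(size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
data = (rng.random((500, n_visible)) < 0.3).astype(float)   # toy binary "beat" vectors

for epoch in range(20):
    for v0 in data:
        p_h0 = sigmoid(v0 @ W + b_h)                        # positive phase
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)                      # one Gibbs step back
        p_h1 = sigmoid(p_v1 @ W + b_h)
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))   # CD-1 update
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)

print("mean |weight| after training:", np.abs(W).mean().round(3))
```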


Subject(s)
Arrhythmias, Cardiac/physiopathology; Databases, Factual; Deep Learning; Electrocardiography; Signal Processing, Computer-Assisted; Humans
7.
Elife ; 7, 2018 01 17.
Article in English | MEDLINE | ID: mdl-29338837

ABSTRACT

Dynamic tubular extensions from chloroplasts, called stromules, have recently been shown to connect with nuclei and to function during innate immunity. We demonstrate that stromules extend along microtubules (MTs) and that MT organization directly affects stromule dynamics, since stabilizing MTs chemically or genetically increases stromule number and length. Although actin filaments (AFs) are not required for stromule extension, they provide anchor points for stromules. Interestingly, there is a strong correlation between the direction of stromules extending from chloroplasts and the direction of chloroplast movement. Stromule-directed chloroplast movement was observed in steady-state conditions without immune induction, suggesting that it is a general function of stromules in epidermal cells. Our results show that MTs and AFs may facilitate perinuclear clustering of chloroplasts during an innate immune response. We propose a model in which stromules extend along MTs and connect to AF anchor points surrounding nuclei, facilitating stromule-directed movement of chloroplasts to nuclei during innate immunity.


Subject(s)
Actins/metabolism; Chloroplasts/metabolism; Epidermal Cells/metabolism; Immunity, Innate; Microtubules/metabolism; Movement; Plant Epidermis/cytology; Plant Epidermis/immunology; Nicotiana
8.
Microsc Res Tech ; 81(2): 141-152, 2018 Feb.
Article in English | MEDLINE | ID: mdl-27342138

ABSTRACT

The study of phenotypic variation in plant pathogenesis provides fundamental information about the nature of disease resistance. Cellular mechanisms that alter pathogenesis can be elucidated with confocal microscopy; however, systematic phenotyping platforms, from sample processing to image analysis, do not exist for such investigations. We have developed a platform for 3D phenotyping of the cellular features underlying variation in disease development, using fluorescence-specific resolution of host and pathogen interactions across time (4D). A confocal microscopy phenotyping platform compatible with different maize-fungal pathosystems (the fungi Setosphaeria turcica, Cochliobolus heterostrophus, and Cercospora zeae-maydis) was developed. Protocols and techniques were standardized for sample fixation, optical clearing, species-specific combinatorial fluorescence staining, multisample imaging, and image processing for investigation at the macroscale. The sample preparation methods presented here overcome challenges to fluorescence imaging such as specimen thickness and topography, as well as physiological characteristics of the samples such as tissue autofluorescence and the presence of a cuticle. The resulting imaging techniques provide qualitative and quantitative information not attainable with conventional 2D light or electron imaging. Microsc. Res. Tech., 81:141-152, 2018. © 2016 Wiley Periodicals, Inc.


Subject(s)
Fungi/pathogenicity; Image Processing, Computer-Assisted/methods; Microscopy, Confocal/methods; Zea mays/microbiology; Automation; Optical Imaging/methods; Plant Diseases/microbiology; Specimen Handling/methods; Staining and Labeling/methods
9.
Bioinformatics ; 34(7): 1192-1199, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29040394

ABSTRACT

Motivation: Images convey essential information in biomedical publications. As such, there is growing interest within the bio-curation and bio-database communities in storing images from publications as evidence for biomedical processes and experimental results. However, many images in biomedical publications are compound images consisting of multiple panels, where each individual panel potentially conveys a different type of information. Segmenting such images into their constituent panels is an essential first step toward utilizing them. Results: In this article, we develop a new compound image segmentation system, FigSplit, based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experimental results show the effectiveness of our method compared with other methods. Availability and implementation: The system is publicly available for use at https://www.eecis.udel.edu/~compbio/FigSplit. The code is available upon request. Contact: shatkay@udel.edu. Supplementary information: Supplementary data are available at Bioinformatics online.
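A minimal connected-component panel splitter in the spirit of (but not identical to) FigSplit is sketched below: near-white pixels are treated as background, the remaining components are labeled, and bounding boxes above a size threshold are returned. The white_thresh and min_area_frac values are illustrative assumptions.

```python
# Minimal connected-component panel splitter (illustrative thresholds).
import numpy as np
from scipy import ndimage as ndi

def split_panels(gray_image, white_thresh=0.95, min_area_frac=0.01):
    """gray_image: float array in [0, 1]; returns (top, left, bottom, right) boxes."""
    foreground = gray_image < white_thresh                # non-white pixels
    labels, _ = ndi.label(foreground)
    boxes = []
    for sl in ndi.find_objects(labels):
        area = (sl[0].stop - sl[0].start) * (sl[1].stop - sl[1].start)
        if area >= min_area_frac * gray_image.size:       # drop tiny components
            boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
    return boxes

# toy compound figure: two dark panels separated by a white gutter
fig = np.ones((100, 220))
fig[10:90, 10:100] = 0.2
fig[10:90, 120:210] = 0.4
print(split_panels(fig))    # [(10, 10, 90, 100), (10, 120, 90, 210)]
```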


Subject(s)
Computational Biology/methods; Pattern Recognition, Automated; Software; Algorithms; Computer Graphics
10.
Article in English | MEDLINE | ID: mdl-30637411

ABSTRACT

Many of the figures in biomedical publications are compound figures consisting of multiple panels. Segmenting such figures into their constituent panels is an essential first step for harvesting the visual information within biomedical documents. Current figure separation methods are based primarily on gap detection and suffer from over- and under-segmentation. In this paper, we propose a new compound figure segmentation scheme based on Connected Component Analysis. To overcome shortcomings typically manifested by existing methods, we develop a quality assessment step for evaluating and modifying segmentations. Two methods are proposed to re-segment the images if the initial segmentation is inaccurate. Experiments comparing the performance of our method to that of other top methods demonstrate the effectiveness of our approach.
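As an illustration of what a quality-assessment pass might check, the sketch below flags segmentations with implausibly small or thin panels or low image coverage, signaling that re-segmentation is needed; the heuristics and thresholds are assumptions for illustration, not the paper's criteria.

```python
# Illustrative quality-assessment pass over candidate panel boxes.
def assess_segmentation(boxes, image_shape, min_area_frac=0.02, max_aspect=8.0):
    """boxes: list of (top, left, bottom, right); returns (ok, reasons)."""
    h, w = image_shape
    reasons, covered = [], 0
    for top, left, bottom, right in boxes:
        ph, pw = bottom - top, right - left
        covered += ph * pw
        if ph * pw < min_area_frac * h * w:
            reasons.append("panel too small (possible over-segmentation)")
        if max(ph / pw, pw / ph) > max_aspect:
            reasons.append("extreme aspect ratio (possible split of a label bar)")
    if covered < 0.5 * h * w:
        reasons.append("low coverage (possible under-segmentation)")
    return (not reasons), reasons

ok, why = assess_segmentation([(10, 10, 90, 100), (10, 120, 90, 210)], (100, 220))
print(ok, why)              # True []
```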

11.
IEEE Trans Pattern Anal Mach Intell ; 35(6): 1480-94, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23599060

ABSTRACT

We present novel techniques for single-image vignetting correction based on the symmetries of two forms of image gradients: semicircular tangential gradients (SCTGs) and radial gradients (RGs). For a given image pixel, an SCTG is the image gradient along the tangential direction of a circle centered at the presumed optical center and passing through the pixel, and an RG is the image gradient along the radial direction with respect to the optical center. We observe that the symmetry properties of SCTG and RG distributions are closely related to the vignetting in the image. Based on these properties, we develop an automatic optical center estimation algorithm that minimizes the asymmetry of SCTG distributions, and present two methods for vignetting estimation based on minimizing the asymmetry of RG distributions. In comparison to prior approaches to single-image vignetting correction, our methods do not rely on image segmentation and produce more accurate results. Experiments show that our techniques work well for a wide range of images while achieving a speed-up of 3-5 times over a state-of-the-art method.
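A compact sketch of the radial-gradient idea follows, using a simple imbalance score rather than the paper's asymmetry measure: image gradients are projected onto the radial direction about a candidate optical center, and vignetting biases the distribution toward negative values. The function name and scoring formula are illustrative.

```python
# Sketch: radial-gradient imbalance about a candidate optical center.
import numpy as np

def rg_asymmetry(image, center):
    gy, gx = np.gradient(image.astype(float))
    ys, xs = np.indices(image.shape)
    ry, rx = ys - center[0], xs - center[1]
    norm = np.hypot(ry, rx) + 1e-9
    rg = (gx * rx + gy * ry) / norm                 # gradient component along the radius
    # imbalance between positive and negative radial gradients
    return abs(rg.sum()) / (np.abs(rg).sum() + 1e-9)

# toy test: a flat image vs. the same image with synthetic radial falloff
ys, xs = np.indices((101, 101))
r2 = (ys - 50) ** 2 + (xs - 50) ** 2
flat = np.full((101, 101), 0.8)
vignetted = flat * (1.0 - 0.4 * r2 / r2.max())
print(rg_asymmetry(flat, (50, 50)), rg_asymmetry(vignetted, (50, 50)))   # ~0 vs. ~1
```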


Subject(s)
Algorithms; Image Processing, Computer-Assisted/methods; Humans; Image Enhancement/methods
12.
Article in English | MEDLINE | ID: mdl-21339537

ABSTRACT

Contour extraction of Drosophila (fruit fly) embryos is an important step in building a computational system for matching expression patterns of embryonic images, to assist in discovering the nature of genes. Automatic contour extraction of embryos is challenging due to severe image variations, including 1) the size, orientation, shape, and appearance of the embryo of interest; 2) the neighboring context of the embryo of interest (such as non-touching and touching neighboring embryos); and 3) illumination conditions. In this paper, we propose an automatic framework for extracting the contour of the embryo of interest in an embryonic image. The proposed framework contains three components. The first component applies a mixture model of quadratic curves, with statistical features, to initialize the contour of the embryo of interest; an efficient method based on imbalanced image points is proposed to compute the model parameters. The second component applies an active contour model to refine embryo contours. The third component applies eigen-shape modeling to smooth jaggy contours caused by blurred embryo boundaries. We test the proposed framework on a dataset of 8,000 embryonic images and achieve promising accuracy (88 percent), substantially higher than state-of-the-art results.
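The eigen-shape smoothing component can be illustrated with a small PCA sketch, assuming training contours are stored as fixed-length flattened point lists: a jaggy contour is projected onto the leading shape modes and reconstructed. The data, mode count, and function names are illustrative.

```python
# Sketch: PCA "eigen-shape" smoothing of a jaggy contour.
import numpy as np

def fit_eigen_shapes(contours, n_modes=4):
    """contours: (n_samples, 2 * n_points) flattened (x..., y...) contours."""
    mean = contours.mean(axis=0)
    _, _, vt = np.linalg.svd(contours - mean, full_matrices=False)
    return mean, vt[:n_modes]                       # mean shape and leading modes

def smooth_contour(contour, mean, modes):
    coeffs = (contour - mean) @ modes.T             # project onto the shape basis
    return mean + coeffs @ modes                    # reconstruct from few modes

# toy training set: ellipses with varying radii; test contour has added noise
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
train = np.stack([np.concatenate([(1 + 0.2 * rng.random()) * np.cos(theta),
                                  (0.6 + 0.2 * rng.random()) * np.sin(theta)])
                  for _ in range(40)])
mean, modes = fit_eigen_shapes(train)
jaggy = np.concatenate([np.cos(theta), 0.7 * np.sin(theta)]) + 0.05 * rng.normal(size=128)
smooth = smooth_contour(jaggy, mean, modes)
print(smooth.shape, float(np.abs(smooth - jaggy).mean()))   # (128,) and a small value
```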


Subject(s)
Algorithms; Drosophila/embryology; Embryo, Nonmammalian/cytology; Image Processing, Computer-Assisted/methods; Animals; Computational Biology; Drosophila/cytology
13.
IEEE Trans Pattern Anal Mach Intell ; 31(12): 2243-56, 2009 Dec.
Article in English | MEDLINE | ID: mdl-19834144

ABSTRACT

In this paper, we propose a method for robustly determining the vignetting function given only a single image. Our method is designed to handle both textured and untextured regions in order to maximize the use of available information. To extract vignetting information from an image, we present adaptations of segmentation techniques that locate image regions with reliable data for vignetting estimation. Within each image region, our method capitalizes on the frequency characteristics and physical properties of vignetting to distinguish it from other sources of intensity variation. Outlier pixels are rejected to improve the robustness of the vignetting estimate. Comprehensive experiments demonstrate the effectiveness of this technique on a broad range of images with both simulated and natural vignetting effects. Failure cases of the proposed algorithm are also analyzed.
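As a hedged sketch of the overall idea (without the paper's region selection or frequency analysis), the code below fits a polynomial falloff in squared radius to pixel intensity, with iterative outlier rejection, and returns a normalized vignetting estimate. All parameter values are illustrative.

```python
# Sketch: robust even-polynomial fit of radial intensity falloff.
import numpy as np

def estimate_vignetting(image, center, degree=3, iters=5):
    ys, xs = np.indices(image.shape)
    r = np.hypot(ys - center[0], xs - center[1])
    x, y = (r / r.max()).ravel() ** 2, image.astype(float).ravel()   # polynomial in r^2
    keep = np.ones_like(y, dtype=bool)
    for _ in range(iters):
        coeffs = np.polyfit(x[keep], y[keep], degree)
        resid = np.abs(y - np.polyval(coeffs, x))
        keep = resid <= 2.0 * resid[keep].std() + 1e-6               # reject outliers
    falloff = np.polyval(coeffs, x).reshape(image.shape)
    return falloff / falloff.max()

rng = np.random.default_rng(0)
ys, xs = np.indices((120, 120))
truth = 1.0 - 0.5 * ((ys - 60) ** 2 + (xs - 60) ** 2) / (2 * 60.0 ** 2)
observed = 0.7 * truth + 0.01 * rng.normal(size=truth.shape)         # scaled + noisy
estimate = estimate_vignetting(observed, (60, 60))
print(np.abs(estimate - truth / truth.max()).max().round(3))         # small residual
```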

14.
Med Image Comput Comput Assist Interv ; 11(Pt 2): 238-45, 2008.
Article in English | MEDLINE | ID: mdl-18982611

ABSTRACT

We propose to quantitatively measure the opacity of each pixel in a ground-glass opacity tumor from CT images. Our method produces an opacity map in which each pixel takes an opacity value in [0, 1]. Given a CT image, our method accomplishes the estimation by constructing a graph Laplacian matrix and solving a system of linear equations, assisted by a few manually drawn scribbles for which the opacity values are easy to determine. Our method is robust to noise and is capable of eliminating the negative influence of vessels and other lung parenchyma. Experiments on 40 selected CT slices from 11 patients demonstrate the effectiveness of this technique. The opacity map produced by our method is useful in practice: from it, many features can be extracted to describe the spatial distribution pattern of opacity and used in a computer-aided diagnosis system.
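A simplified version of the scribble-propagation recipe is sketched below, not the paper's exact weighting scheme: a sparse graph Laplacian with 4-neighbor intensity-similarity weights is built, scribbled pixels are softly clamped to their user-given opacity values, and the resulting linear system is solved. The sigma and lam parameters are illustrative.

```python
# Simplified scribble propagation with a sparse graph Laplacian.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def opacity_map(image, scribble_mask, scribble_values, sigma=0.1, lam=100.0):
    h, w = image.shape
    n, idx = h * w, np.arange(h * w).reshape(h, w)
    rows, cols, vals = [], [], []
    for dy, dx in ((0, 1), (1, 0)):                          # 4-neighbor edges
        a = idx[: h - dy, : w - dx].ravel()
        b = idx[dy:, dx:].ravel()
        wgt = np.exp(-((image.ravel()[a] - image.ravel()[b]) ** 2) / sigma ** 2)
        rows += [a, b]; cols += [b, a]; vals += [wgt, wgt]
    W = sp.coo_matrix((np.concatenate(vals),
                       (np.concatenate(rows), np.concatenate(cols))), shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W      # graph Laplacian
    D = sp.diags(lam * scribble_mask.ravel().astype(float))  # soft clamps at scribbles
    rhs = lam * (scribble_mask * scribble_values).ravel()
    return spsolve((L + D).tocsc(), rhs).reshape(h, w)

img = np.zeros((40, 40)); img[10:30, 10:30] = 1.0            # bright "opacity" blob
scribbles = np.zeros_like(img, dtype=bool); values = np.zeros_like(img)
scribbles[20, 20], values[20, 20] = True, 1.0                # opaque scribble inside
scribbles[2, 2], values[2, 2] = True, 0.0                    # clear scribble outside
alpha = opacity_map(img, scribbles, values)
print(alpha[20, 20].round(2), alpha[2, 2].round(2))          # ~1.0 and ~0.0
```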


Subject(s)
Algorithms; Lung Neoplasms/diagnostic imaging; Pattern Recognition, Automated/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Humans; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
15.
Electrophoresis ; 29(3): 706-15, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18203251

ABSTRACT

We propose a suite of novel algorithms for the analysis of protein expression images obtained from 2-D electrophoresis: a segmentation algorithm for protein spot identification, and an algorithm for matching protein spots between two corresponding images for differential expression studies. The proposed segmentation algorithm employs the watershed transformation, k-means analysis, and the distance transform to locate the centroids and extract the regions of protein spots. The proposed spot matching algorithm integrates hierarchical and optimization-based methods. The hierarchical method is first used to find corresponding pairs of protein spots satisfying local cross-correlation and overlap constraints. A matching energy function based on local structure similarity, image similarity, and spatial constraints is then formulated and optimized. Our algorithm suite has been extensively tested on synthetic and real 2-D gel images from various biological experiments, and in quantitative comparisons with ImageMaster2D Platinum the proposed algorithms exhibit better spot detection and spot matching.
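The spot-separation step can be illustrated with the standard distance-transform-plus-watershed recipe below, using SciPy and scikit-image; the k-means refinement and the spot-matching stage are omitted, and the threshold values are illustrative.

```python
# Standard distance-transform + watershed separation of touching spots.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_spots(binary_spots):
    distance = ndi.distance_transform_edt(binary_spots)
    coords = peak_local_max(distance, labels=binary_spots, min_distance=3)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)  # one marker per peak
    return watershed(-distance, markers, mask=binary_spots)

# toy gel: two overlapping circular spots
yy, xx = np.indices((60, 60))
spots = ((yy - 30) ** 2 + (xx - 22) ** 2 < 100) | ((yy - 30) ** 2 + (xx - 38) ** 2 < 100)
labels = segment_spots(spots)
print(labels.max())    # 2: the touching spots are separated
```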


Subject(s)
Algorithms; Electrophoresis, Gel, Two-Dimensional/statistics & numerical data; Image Processing, Computer-Assisted/statistics & numerical data; Protein Array Analysis/statistics & numerical data; Proteins/isolation & purification; Proteomics/statistics & numerical data; Software; Software Design
16.
Med Image Comput Comput Assist Interv ; 10(Pt 1): 933-41, 2007.
Article in English | MEDLINE | ID: mdl-18051148

ABSTRACT

Dynamic enhancement causes serious problems for the registration of contrast-enhanced breast MRI, due to variable uptake of contrast agent by different tissues, or even by the same tissue, in the breast. We present an iterative optimization algorithm that de-enhances dynamic contrast-enhanced breast MR images and then registers them, avoiding the effects of enhancement on image registration. In particular, the spatially varying enhancements are modeled by a Markov Random Field and estimated as a locally smooth function with boundaries using a graph cut algorithm. The de-enhanced images are then registered by a conventional B-spline-based registration algorithm. These two steps benefit from each other and are repeated until the results converge. Experimental results show that our two-step registration algorithm performs much better than a conventional mutual-information-based registration algorithm. In addition, the tumor-shrinking effects seen with conventional registration algorithms are effectively avoided by our algorithm.
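A heavily simplified sketch of the alternating scheme follows, with stand-ins for both steps: the enhancement field is estimated as a Gaussian-smoothed intensity ratio rather than with the MRF/graph-cut model, and registration is translation-only phase correlation rather than B-spline. The parameters and synthetic images are illustrative.

```python
# Heavily simplified alternating de-enhancement / registration loop.
import numpy as np
from scipy import ndimage as ndi
from skimage.registration import phase_cross_correlation

def deenhance_and_register(pre, post, iters=3, smooth_sigma=8.0):
    moving, total_shift = post.astype(float), np.zeros(2)
    for _ in range(iters):
        enhancement = ndi.gaussian_filter(moving / (pre + 1e-6), smooth_sigma)
        deenhanced = moving / (enhancement + 1e-6)            # step 1: de-enhance
        shift, _, _ = phase_cross_correlation(pre, deenhanced)
        moving = ndi.shift(moving, shift)                     # step 2: register
        total_shift += shift
    return moving, total_shift

yy, xx = np.indices((128, 128))
pre = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / 400.0)
post = 1.8 * np.roll(pre, (3, -5), axis=(0, 1))               # enhanced and shifted
_, recovered = deenhance_and_register(pre, post)
print(np.round(recovered))                                    # approximately [-3.  5.]
```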


Subject(s)
Breast Neoplasms/diagnosis; Breast/pathology; Contrast Media; Image Enhancement/methods; Magnetic Resonance Imaging/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Algorithms; Artificial Intelligence; Female; Humans; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Reproducibility of Results; Sensitivity and Specificity
17.
Clin Linguist Phon ; 19(6-7): 515-28, 2005.
Article in English | MEDLINE | ID: mdl-16206480

ABSTRACT

In this paper, a method for obtaining the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech captured at a different time by a sequence of ultrasound images and is composed of a set of 2D spatio-temporal contours. The 2D contours from different repetitions are first time-aligned by a shape-based Dynamic Programming (DP) method. The best representation of the speech motion is then obtained by averaging the time-aligned contours from the different repetitions. Procrustes analysis is used to measure contour similarity in the time-alignment process and to compute the averaged best representation. To obtain the point correspondences required for Procrustes analysis, a nonrigid point correspondence recovery method based on a local stretching model and a global constraint is developed. Synthetic validations and experiments on real tongue motion are also presented.
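A compact sketch of the pipeline under simplifying assumptions: two contour sequences are aligned with dynamic time warping using the Procrustes disparity from scipy.spatial as the frame-to-frame cost, and the aligned frames are averaged. The point-correspondence recovery step is assumed to have been done already, so corresponding points share indices; the toy contours are illustrative.

```python
# Sketch: DTW alignment of two contour sequences with a Procrustes frame cost,
# followed by frame-wise averaging.
import numpy as np
from scipy.spatial import procrustes

def dtw_align(seq_a, seq_b):
    """seq_a, seq_b: lists of (n_points, 2) contours; returns aligned index pairs."""
    na, nb = len(seq_a), len(seq_b)
    cost = np.full((na + 1, nb + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            _, _, d = procrustes(seq_a[i - 1], seq_b[j - 1])   # shape disparity
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    path, i, j = [], na, nb                                    # backtrack
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return path[::-1]

# toy "repetitions": the same deforming contour sampled at different frame rates
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
frame = lambda t: np.column_stack([np.cos(theta), (0.5 + 0.5 * t) * np.sin(theta)])
rep1 = [frame(t) for t in np.linspace(0, 1, 8)]
rep2 = [frame(t) for t in np.linspace(0, 1, 12)]
pairs = dtw_align(rep1, rep2)
averaged = [(rep1[i] + rep2[j]) / 2.0 for i, j in pairs]       # mean representation
print(len(pairs), averaged[0].shape)
```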


Subject(s)
Movement/physiology; Tongue/physiology; Algorithms; Biomechanical Phenomena; Humans; Imaging, Three-Dimensional; Phonetics; Reproducibility of Results; Time Factors; Tongue/anatomy & histology
18.
Clin Linguist Phon ; 19(6-7): 545-54, 2005.
Article in English | MEDLINE | ID: mdl-16206482

ABSTRACT

In this paper, a new automatic contour tracking system, EdgeTrak, for ultrasound image sequences of the human tongue is presented. The images are produced by a head and transducer support system (HATS). Noise and unrelated high-contrast edges in ultrasound images make it very difficult to automatically detect the correct tongue surface. In our tracking system, a novel active contour model is developed. Unlike classical active contour models, which use only the image gradient as the image force, the proposed model incorporates edge gradient and intensity information from local regions around each snake element. Unlike other active contour models that use homogeneity of intensity in a region as a constraint and thus apply only to closed contours, the proposed model applies local region information to open contours and can be used to track partial tongue surfaces in ultrasound images. Contour orientation is also taken into account so that spurious edges in ultrasound images are discarded. Dynamic programming is used as the optimisation method in our implementation. The proposed active contour model has been applied to human tongue tracking, and its robustness and accuracy have been verified by quantitative comparison with tracking performed by speech scientists.
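A toy dynamic-programming snake update in the spirit of (but not reproducing) EdgeTrak's energy terms is sketched below: each contour node may move among a few vertical offsets, and a Viterbi pass minimizes a unary term (negative edge strength) plus a smoothness penalty between neighboring nodes. The energy terms, offsets, and smoothness weight are illustrative.

```python
# Toy DP (Viterbi) snake update over a few vertical offsets per node.
import numpy as np

def dp_snake_step(edge_map, xs, ys, offsets=(-2, -1, 0, 1, 2), smooth=0.5):
    """Return updated y positions for contour nodes at columns xs."""
    n, k = len(xs), len(offsets)
    off = np.array(offsets)
    unary = np.array([[-edge_map[ys[i] + o, xs[i]] for o in offsets] for i in range(n)])
    cost, back = unary.copy(), np.zeros((n, k), dtype=int)
    for i in range(1, n):                                   # forward Viterbi pass
        for j in range(k):
            trans = cost[i - 1] + smooth * (off - off[j]) ** 2
            back[i, j] = int(np.argmin(trans))
            cost[i, j] += trans[back[i, j]]
    best = [int(np.argmin(cost[-1]))]
    for i in range(n - 1, 0, -1):                           # backtrack
        best.append(int(back[i, best[-1]]))
    best = best[::-1]
    return np.array([ys[i] + offsets[best[i]] for i in range(n)])

# toy image: a bright horizontal edge at row 12; the snake starts at row 14
edge_map = np.zeros((30, 30)); edge_map[12, :] = 1.0
xs, ys = np.arange(5, 25), np.full(20, 14)
print(np.unique(dp_snake_step(edge_map, xs, ys)))           # [12]
```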


Subject(s)
Tongue/diagnostic imaging; Tongue/physiology; Algorithms; Humans; Logistic Models; Models, Biological; Movement; Reproducibility of Results; Software; Tongue/anatomy & histology