Results 1 - 11 of 11
1.
Appl Plant Sci ; 8(6): e11369, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32626611

ABSTRACT

PREMISE: Despite the economic significance of insect damage to plants (i.e., herbivory), long-term data documenting changes in herbivory are limited. Millions of pressed plant specimens are now available online and can be used to collect big data on plant-insect interactions during the Anthropocene. METHODS: We initiated development of machine learning methods to automate extraction of herbivory data from herbarium specimens by training an insect damage detector and a damage type classifier on two distantly related plant species (Quercus bicolor and Onoclea sensibilis). We experimented with (1) classifying six types of herbivory and two control categories of undamaged leaf, and (2) detecting two of the damage categories for which several hundred annotations were available. RESULTS: Damage detection results were mixed, with a mean average precision of 45% in the simultaneous detection and classification of two types of damage. However, damage classification on hand-drawn boxes identified the correct type of herbivory 81.5% of the time in eight categories. The damage classifier was accurate for categories with 100 or more test samples. DISCUSSION: These tools are a promising first step for the automation of herbivory data collection. We describe ongoing efforts to increase the accuracy of these models, allowing researchers to extract similar data and apply them to biological hypotheses.
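The observation above, that the damage classifier was only reliable for categories with 100 or more test samples, can be checked with a per-category accuracy breakdown. The sketch below is illustrative only: the category names and counts are hypothetical, not taken from the paper.

```python
from collections import defaultdict

def per_category_accuracy(pairs):
    """Compute classification accuracy per true category.

    `pairs` is a list of (true_label, predicted_label) tuples,
    e.g. from a damage-type classifier run on hand-drawn boxes.
    Returns {category: (accuracy, n_samples)}.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for true, pred in pairs:
        total[true] += 1
        if pred == true:
            correct[true] += 1
    return {c: (correct[c] / total[c], total[c]) for c in total}

# Toy run: a well-sampled category vs. a sparsely sampled one
preds = [("hole", "hole")] * 90 + [("hole", "margin")] * 10 \
      + [("skeleton", "skeleton")] * 3 + [("skeleton", "hole")] * 2
acc = per_category_accuracy(preds)
```

An accuracy estimate from 5 samples, as in the "skeleton" category here, carries far wider error bars than one from 100, which is why the small categories were less trustworthy.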

2.
Appl Plant Sci ; 8(6): e11372, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32626613

ABSTRACT

PREMISE: Equisetum is a distinctive vascular plant genus with 15 extant species worldwide. Species identification is complicated by morphological plasticity and frequent hybridization events, leading to a disproportionately high number of misidentified specimens. These may be correctly identified by applying appropriate computer vision tools. METHODS: We hypothesize that aerial stem nodes can provide enough information to distinguish among Equisetum hyemale, E. laevigatum, and E. ×ferrissii, the latter being a hybrid between the other two. An object detector was trained to find nodes on a given image and to distinguish E. hyemale nodes from those of E. laevigatum. A classifier then took statistics from the detection results and classified the given image into one of the three taxa. Both the detector and the classifier were trained and tested on images manually annotated by experts. RESULTS: In our exploratory test set of 30 images, our detector/classifier combination identified all 10 E. laevigatum images correctly, as well as nine out of 10 E. hyemale images, and eight out of 10 E. ×ferrissii images, for a 90% classification accuracy. DISCUSSION: Our results support the notion that computer vision may help with the identification of herbarium specimens once enough manual annotations become available.
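The two-stage pipeline above (per-node detection, then an image-level decision from detection statistics) can be illustrated with a toy decision rule: an image whose detected nodes are mostly one parent species is assigned that species, while a substantial mixture suggests the hybrid. The threshold and rule below are hypothetical simplifications, not the paper's actual classifier.

```python
def classify_equisetum(n_hyemale, n_laevigatum, mix_threshold=0.2):
    """Toy decision rule on node-detection counts for one image.

    If at least `mix_threshold` of the detected nodes belong to each
    parent species, call the image the hybrid E. xferrissii;
    otherwise assign the dominant parent species.
    """
    total = n_hyemale + n_laevigatum
    if total == 0:
        return "unknown"
    frac_hyemale = n_hyemale / total
    if frac_hyemale >= 1 - mix_threshold:
        return "E. hyemale"
    if frac_hyemale <= mix_threshold:
        return "E. laevigatum"
    return "E. xferrissii"
```

A rule of this shape is attractive for hybrids precisely because the hybrid's evidence is expected to be a blend of its parents' node morphologies rather than a third distinct class.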

3.
Open Ophthalmol J ; 11: 143-151, 2017.
Article in English | MEDLINE | ID: mdl-28761567

ABSTRACT

BACKGROUND: The diagnosis of plus disease in retinopathy of prematurity (ROP) largely determines the need for treatment; however, this diagnosis is subjective. To make the diagnosis of plus disease more objective, semi-automated computer programs (e.g. ROPtool) have been created to quantify vascular dilation and tortuosity. ROPtool can accurately analyze blood vessels only in images with very good quality, but many still images captured by indirect ophthalmoscopy have insufficient image quality for ROPtool analysis. PURPOSE: To evaluate the ability of an image fusion methodology (robust mosaicing) to increase the efficiency and traceability of posterior pole vessel analysis by ROPtool. MATERIALS AND METHODOLOGY: We retrospectively reviewed video indirect ophthalmoscopy images acquired during routine ROP examinations and selected the best unenhanced still image from the video for each infant. Robust mosaicing was used to create an enhanced mosaic image from the same video for each eye. We evaluated the time required for ROPtool analysis as well as ROPtool's ability to analyze vessels in enhanced vs. unenhanced images. RESULTS: We included 39 eyes of 39 infants. ROPtool analysis was faster (125 vs. 152 seconds; p=0.02) in enhanced vs. unenhanced images, respectively. ROPtool was able to trace retinal vessels in more quadrants (143/156, 92% vs. 115/156, 74%; p=0.16) and in more images overall (38/39, 97% vs. 34/39, 87%; p=0.07) for enhanced mosaic vs. unenhanced still images, respectively. CONCLUSION: Retinal image enhancement using robust mosaicing advances efforts to automate grading of posterior pole disease in ROP.

4.
IEEE Trans Med Imaging ; 35(7): 1625-35, 2016 07.
Article in English | MEDLINE | ID: mdl-26829784

ABSTRACT

We propose a novel method for tracking cells that are connected through a visible network of membrane junctions. Tissues of this form are common in epithelial cell sheets and resemble planar graphs where each face corresponds to a cell. We leverage this structure and develop a method to track the entire tissue as a deformable graph. This coupled model in which vertices inform the optimal placement of edges and vice versa captures global relationships between tissue components and leads to accurate and robust cell tracking. We compare the performance of our method with that of four reference tracking algorithms on four data sets that present unique tracking challenges. Our method exhibits consistently superior performance in tracking all cells accurately over all image frames, and is robust over a wide range of image intensity and cell shape profiles. This may be an important tool for characterizing tissues of this type especially in the field of developmental biology where automated cell analysis can help elucidate the mechanisms behind controlled cell-shape changes.
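The coupled deformable-graph model above jointly optimizes vertex and edge placement; as a point of contrast, the sketch below shows the naive baseline it improves upon, greedy nearest-neighbour matching of junction vertices between consecutive frames, which ignores edge structure entirely. All names and the distance threshold are illustrative assumptions.

```python
def match_vertices(prev_pts, next_pts, max_dist=5.0):
    """Greedy nearest-neighbour matching of junction vertices between
    two frames -- a much-simplified stand-in for a coupled
    deformable-graph model, which would also use membrane edges
    to constrain the matches.

    Returns {index_in_prev: index_in_next} for pairs closer than
    `max_dist`; unmatched vertices are simply dropped.
    """
    used = set()
    matches = {}
    for i, (px, py) in enumerate(prev_pts):
        best_j, best_d = None, max_dist
        for j, (qx, qy) in enumerate(next_pts):
            if j in used:
                continue
            d = ((px - qx) ** 2 + (py - qy) ** 2) ** 0.5
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```

Purely local matching of this kind breaks down when cells deform or divide; incorporating the edges that connect vertices, as the paper does, is what supplies the missing global context.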


Subject(s)
Epithelial Cells , Algorithms , Cell Tracking , Microscopy, Fluorescence
5.
IEEE Trans Pattern Anal Mach Intell ; 37(8): 1688-701, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26353004

ABSTRACT

Tree-like structures are fundamental in nature, and it is often useful to reconstruct the topology of a tree - what connects to what - from a two-dimensional image of it. However, the projected branches often cross in the image: the tree projects to a planar graph, and the inverse problem of reconstructing the topology of the tree from that of the graph is ill-posed. We regularize this problem with a generative, parametric tree-growth model. Under this model, reconstruction is possible in linear time if one knows the direction of each edge in the graph - which edge endpoint is closer to the root of the tree - but becomes NP-hard if the directions are not known. For the latter case, we present a heuristic search algorithm to estimate the most likely topology of a rooted, three-dimensional tree from a single two-dimensional image. Experimental results on retinal vessel, plant root, and synthetic tree data sets show that our methodology is both accurate and efficient.
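The linear-time case described above, where every edge's direction (which endpoint is closer to the root) is known, amounts to reading each directed edge as a child-to-parent pointer and checking that the result is a single rooted tree. A minimal sketch, with hypothetical node labels:

```python
from collections import defaultdict, deque

def tree_from_directed_edges(edges, root):
    """Recover a rooted tree's topology in O(V + E) time.

    Each edge (u, v) means endpoint v is closer to the root,
    i.e. v is u's parent. Returns {node: parent}, or raises if
    the edges do not form a single tree rooted at `root`.
    """
    parent = {}
    children = defaultdict(list)
    for u, v in edges:
        if u in parent:
            raise ValueError(f"node {u} has two parents")
        parent[u] = v
        children[v].append(u)
    # Verify every node is reachable from the root (one connected tree).
    seen = {root}
    queue = deque([root])
    while queue:
        node = queue.popleft()
        for c in children[node]:
            if c in seen:
                raise ValueError("cycle detected")
            seen.add(c)
            queue.append(c)
    if len(seen) != len(parent) + 1:
        raise ValueError("graph is not a single tree")
    return parent
```

When the directions are unknown, every edge admits two orientations and the consistent assignments can no longer be enumerated efficiently, which is the source of the NP-hardness noted above.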


Subject(s)
Artificial Intelligence , Imaging, Three-Dimensional/methods , Algorithms , Databases, Factual , Humans , Lightning , Retinal Vessels/anatomy & histology , Stochastic Processes , Trees
6.
IEEE Trans Med Imaging ; 34(12): 2518-34, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26068204

ABSTRACT

We propose a novel, graph-theoretic framework for distinguishing arteries from veins in a fundus image. We make use of the underlying vessel topology to better classify small and midsized vessels. We extend our previously proposed tree topology estimation framework by incorporating expert, domain-specific features to construct a simple, yet powerful global likelihood model. We efficiently maximize this model by iteratively exploring the space of possible solutions consistent with the projected vessels. We tested our method on four retinal datasets and achieved classification accuracies of 91.0%, 93.5%, 91.7%, and 90.9%, outperforming existing methods. Our results show the effectiveness of our approach, which is capable of analyzing the entire vasculature, including peripheral vessels, in wide field-of-view fundus photographs. This topology-based method is a potentially important tool for diagnosing diseases with retinal vascular manifestation.
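The idea of combining per-vessel appearance features with topological consistency, as above, can be sketched with a simple greedy relabelling loop. This is a deliberately simplified stand-in for the paper's global likelihood model, with hypothetical score values; it captures only the flavour of trading a unary (appearance) term against agreement with connected neighbours.

```python
def label_vessels(unary, edges, iters=10):
    """Greedy artery/vein labelling on a vessel graph (much simplified).

    `unary[v]` is a per-segment score favouring "artery" when positive
    (e.g. from colour and width features); `edges` lists pairs of
    vessel segments that are topologically connected and should
    usually share a label. Each pass flips labels to the locally
    best choice until no label changes.
    """
    labels = {v: ("artery" if s > 0 else "vein") for v, s in unary.items()}
    neighbours = {v: [] for v in unary}
    for a, b in edges:
        neighbours[a].append(b)
        neighbours[b].append(a)
    for _ in range(iters):
        changed = False
        for v in unary:
            # Score each label: unary term plus neighbour-agreement bonus.
            agree_a = sum(1 for n in neighbours[v] if labels[n] == "artery")
            agree_v = sum(1 for n in neighbours[v] if labels[n] == "vein")
            best = "artery" if unary[v] + agree_a > -unary[v] + agree_v else "vein"
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels
```

The topological term is what rescues small and midsized vessels: a thin segment with an ambiguous appearance score inherits the label of the larger vessel it connects to.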


Subject(s)
Image Processing, Computer-Assisted/methods , Retinal Artery/anatomy & histology , Retinal Vein/anatomy & histology , Algorithms , Databases, Factual , Diagnostic Techniques, Ophthalmological , Humans
7.
Biomed Opt Express ; 4(6): 803-21, 2013 Jun 01.
Article in English | MEDLINE | ID: mdl-23761845

ABSTRACT

Variance processing methods in Fourier domain optical coherence tomography (FD-OCT) have enabled depth-resolved visualization of the capillary beds in the retina due to the development of imaging systems capable of acquiring A-scan data in the 100 kHz regime. However, acquisition of volumetric variance data sets still requires several seconds of acquisition time, even with high speed systems. Movement of the subject during this time span is sufficient to corrupt visualization of the vasculature. We demonstrate a method to eliminate motion artifacts in speckle variance FD-OCT images of the retinal vasculature by creating a composite image from multiple volumes of data acquired sequentially. Slight changes in the orientation of the subject's eye relative to the optical system between acquired volumes may result in non-rigid warping of the image. Thus, we use a B-spline based free form deformation method to automatically register variance images from multiple volumes to obtain a motion-free composite image of the retinal vessels. We extend this technique to automatically mosaic individual vascular images into a widefield image of the retinal vasculature.
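The registration step above (B-spline free-form deformation) is beyond a short sketch, but the fusion idea that follows it can be illustrated: once the variance images are aligned, a per-pixel median across volumes suppresses values corrupted by motion in only a minority of acquisitions. This is an illustrative stand-in, not the paper's exact fusion rule.

```python
def median_composite(volumes):
    """Per-pixel median across registered 2D variance images.

    `volumes` is a list of equally sized 2D lists (rows of pixels),
    assumed to be already registered to a common coordinate frame.
    Motion artifacts appearing in only a minority of the acquired
    volumes are rejected by the median at each pixel.
    """
    rows = len(volumes[0])
    cols = len(volumes[0][0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            vals = sorted(v[r][c] for v in volumes)
            n = len(vals)
            mid = n // 2
            out[r][c] = vals[mid] if n % 2 else (vals[mid - 1] + vals[mid]) / 2
    return out
```

The median's robustness is the key design choice here: unlike a mean, an outlier value from one motion-corrupted volume cannot drag the composite pixel away from the consensus.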

8.
Biomed Opt Express ; 3(2): 327-39, 2012 Feb 01.
Article in English | MEDLINE | ID: mdl-22312585

ABSTRACT

We present a methodology for extracting the vascular network in the human retina using Dijkstra's shortest-path algorithm. Our method preserves vessel thickness, requires no manual intervention, and follows vessel branching naturally and efficiently. To test our method, we constructed a retinal video indirect ophthalmoscopy (VIO) image database from pediatric patients and compared the segmentations achieved by our method and state-of-the-art approaches to a human-drawn gold standard. Our experimental results show that our algorithm outperforms prior state-of-the-art methods, for both single VIO frames and automatically generated, large field-of-view enhanced mosaics. We have made the corresponding dataset and source code freely available online.
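The core of the approach above is Dijkstra's shortest-path algorithm, shown here in a standard priority-queue form. The toy graph is hypothetical; in the vessel-extraction setting the nodes would be pixels and the edge costs a "vesselness" penalty, so that shortest paths prefer to stay on vessels.

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's shortest-path algorithm over a weighted graph.

    `graph[u]` is a list of (v, cost) pairs with non-negative costs.
    Returns {node: shortest distance from `source`} for all
    reachable nodes.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1.0), ("c", 4.0)], "b": [("c", 1.0)], "c": []}
d = dijkstra(g, "a")
```

Because the algorithm expands nodes in order of accumulated cost, branches of the vascular tree are followed naturally: each branch point is reached once via its cheapest path and the frontier then grows down every branch in parallel.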

9.
Biomed Opt Express ; 2(10): 2871-87, 2011 Oct 01.
Article in English | MEDLINE | ID: mdl-22091442

ABSTRACT

Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
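The "automatic evaluation of video quality" step mentioned above can be sketched with a simple gradient-energy focus measure: blurred or artifact-heavy frames have weak local gradients, so the sharpest frames score highest and are the ones worth fusing. The specific measure below is an illustrative assumption, not the paper's actual quality metric.

```python
def sharpness(frame):
    """Score a grayscale frame (2D list of intensities) by its
    mean squared gradient over horizontal and vertical neighbours.
    Higher scores indicate sharper, better-focused frames.
    """
    rows, cols = len(frame), len(frame[0])
    total = 0.0
    count = 0
    for r in range(rows):
        for c in range(cols - 1):
            total += (frame[r][c + 1] - frame[r][c]) ** 2
            count += 1
    for r in range(rows - 1):
        for c in range(cols):
            total += (frame[r + 1][c] - frame[r][c]) ** 2
            count += 1
    return total / count

def best_frame(frames):
    """Return the index of the sharpest frame in a video clip."""
    return max(range(len(frames)), key=lambda i: sharpness(frames[i]))
```

Scoring frames before registration keeps low-quality frames from ever entering the mosaic, which is cheaper than detecting and repairing their artifacts afterwards.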

11.
IEEE Trans Med Imaging ; 21(12): 1461-7, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12588030

ABSTRACT

Colorectal cancer can easily be prevented provided that the precursors to tumors, small colonic polyps, are detected and removed. Currently, the only definitive examination of the colon is fiber-optic colonoscopy (FOC), which is invasive and expensive. Computed tomographic colonography (CTC) is potentially a less costly and less invasive alternative to FOC. It would be desirable to have computer-aided detection (CAD) algorithms to examine the large amount of data CTC provides. Most current CAD algorithms have high false positive rates at the required sensitivity levels. We developed and evaluated a postprocessing algorithm to decrease the false positive rate of such a CAD method without sacrificing sensitivity. Our method attempts to model the way a radiologist recognizes a polyp while scrolling a cross-sectional plane through three-dimensional computed tomography data by classification of the changes in the location of the edges in the two-dimensional plane. We performed a tenfold cross-validation study to assess its performance using sensitivity/specificity analysis on data from 48 patients. The mean specificity over all experiments increased from 0.19 (0.35) to 0.47 (0.56) for a sensitivity of 1.00 (0.95).
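The sensitivity/specificity trade-off quantified above follows directly from the confusion-matrix definitions; a short helper makes the relationship explicit. The counts in the test below are hypothetical, chosen only to mirror the reported operating point of sensitivity 1.00 with specificity 0.47.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).

    In the CAD setting, a "positive" is a candidate polyp, so raising
    specificity means discarding false positives, while holding
    sensitivity at 1.0 means discarding no true polyps.
    """
    return tp / (tp + fn), tn / (tn + fp)
```

A postprocessing stage that only ever removes candidate detections can lower sensitivity but never raise it, so reporting specificity gains at fixed sensitivity, as above, is the natural way to evaluate such a filter.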


Subject(s)
Algorithms , Colonic Polyps/diagnostic imaging , Colonography, Computed Tomographic/methods , Imaging, Three-Dimensional , Radiographic Image Enhancement/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Adult , Aged , Aged, 80 and over , Colonic Polyps/classification , False Negative Reactions , False Positive Reactions , Female , Humans , Male , Middle Aged , Observer Variation , Pattern Recognition, Automated , Quality Control , Reproducibility of Results , Sensitivity and Specificity