Results 1 - 2 of 2
1.
PLoS One ; 14(6): e0215843, 2019.
Article in English | MEDLINE | ID: mdl-31173591

ABSTRACT

Cell segmentation in microscopy is a challenging problem, since cells are often asymmetric and densely packed. Successful cell segmentation algorithms rely on identifying seed points, and are highly sensitive to variability in cell size. In this paper, we present an efficient and highly parallel formulation for symmetric three-dimensional contour evolution that extends previous work on fast two-dimensional snakes. We provide a formulation for optimization on 3D images, as well as a strategy for accelerating computation on consumer graphics hardware. The proposed software takes advantage of Monte-Carlo sampling schemes in order to speed up convergence and reduce thread divergence. Experimental results show that this method provides superior performance for large 2D and 3D cell localization tasks when compared to existing methods on large 3D brain images.


Subject(s)
Brain/diagnostic imaging, Imaging, Three-Dimensional/methods, Algorithms, Brain/cytology, Cell Size, Monte Carlo Method, Software
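The core idea in this abstract, evolving a contour while updating only a random (Monte-Carlo) subset of control points per iteration, can be sketched in simplified 2D form. This is an illustrative sketch only, not the paper's GPU implementation; the function name, parameters, and step sizes below are assumptions:

```python
import numpy as np

def mc_snake(energy, contour, iters=200, frac=0.25, alpha=0.5, step=0.05, rng=None):
    """Minimal 2D snake with Monte-Carlo point sampling (illustrative).

    `energy` is an image whose low values attract the contour; `contour`
    is an (N, 2) array of (row, col) control points on a closed curve.
    Each iteration moves a random subset of points downhill on the energy
    gradient, plus a curvature-smoothing pull toward the midpoint of each
    point's two neighbors. Updating only a subset per pass mirrors the
    Monte-Carlo sampling the paper uses to reduce thread divergence.
    """
    rng = np.random.default_rng(rng)
    gy, gx = np.gradient(energy.astype(float))
    pts = contour.astype(float).copy()
    n = len(pts)
    H, W = energy.shape
    for _ in range(iters):
        # Monte-Carlo step: pick a random subset of control points
        idx = rng.choice(n, size=max(1, int(frac * n)), replace=False)
        r = np.clip(pts[idx, 0].astype(int), 0, H - 1)
        c = np.clip(pts[idx, 1].astype(int), 0, W - 1)
        # external force: descend the image-energy gradient
        ext = np.stack([gy[r, c], gx[r, c]], axis=1)
        # internal force: pull toward the midpoint of the two neighbors
        mid = 0.5 * (pts[(idx - 1) % n] + pts[(idx + 1) % n])
        pts[idx] += step * (-ext) + alpha * step * (mid - pts[idx])
        pts[:, 0] = np.clip(pts[:, 0], 0, H - 1)
        pts[:, 1] = np.clip(pts[:, 1], 0, W - 1)
    return pts
```

On a toy energy field with a single minimum, a circular initial contour shrinks toward that minimum even though only a fraction of points move per iteration; the step size must be matched to the gradient magnitude of the energy image.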
2.
Front Neuroanat ; 12: 28, 2018.
Article in English | MEDLINE | ID: mdl-29755325

ABSTRACT

High-throughput imaging techniques, such as Knife-Edge Scanning Microscopy (KESM), are capable of acquiring three-dimensional whole-organ images at sub-micrometer resolution. These images are challenging to segment since they can exceed several terabytes (TB) in size, requiring extremely fast and fully automated algorithms. Staining techniques are limited to contrast agents that can be applied to large samples and imaged in a single pass. This requires maximizing the number of structures labeled in a single channel, resulting in images that are densely packed with spatial features. In this paper, we propose a three-dimensional approach for locating cells based on iterative voting. Due to the computational complexity of this algorithm, a highly efficient GPU implementation is required to make it practical on large data sets. The proposed algorithm has a limited number of input parameters and is highly parallel.
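The iterative-voting scheme this abstract describes can be sketched in simplified 2D form: each high-gradient pixel votes along its gradient direction within a cone, re-aims at the strongest accumulator cell it hit, and the cone narrows each pass so votes converge on centers of radial symmetry. This is an illustrative CPU sketch under assumed parameters, not the paper's 3D GPU implementation:

```python
import numpy as np

def iterative_voting(image, rmin=3, rmax=10, cone_angles=(np.pi/4, np.pi/8, 0.0)):
    """Simplified 2D iterative radial voting (illustrative sketch).

    Edge pixels (high gradient magnitude) cast votes into an accumulator
    along their gradient direction, within a cone whose half-angle shrinks
    each iteration. After each pass, every voter re-aims at the strongest
    accumulator cell it voted into. Local maxima of the returned
    accumulator mark candidate cell centers.
    """
    img = image.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    # vote only from the strongest gradients (edge pixels); threshold is arbitrary
    ys, xs = np.nonzero(mag > 0.2 * mag.max())
    theta = np.arctan2(gy, gx)[ys, xs]  # points toward brighter (cell interior)
    H, W = img.shape
    acc = np.zeros((H, W))
    for delta in cone_angles:  # shrinking half-angle of the voting cone
        acc[:] = 0.0
        best = np.zeros((len(ys), 2), dtype=int)
        best_val = np.full(len(ys), -1.0)
        for i, (y, x, t) in enumerate(zip(ys, xs, theta)):
            rays = np.linspace(t - delta, t + delta, 5) if delta > 0 else [t]
            for a in rays:
                for r in range(rmin, rmax + 1):
                    vy = int(round(y + r * np.sin(a)))
                    vx = int(round(x + r * np.cos(a)))
                    if 0 <= vy < H and 0 <= vx < W:
                        acc[vy, vx] += mag[y, x]
                        if acc[vy, vx] > best_val[i]:
                            best_val[i] = acc[vy, vx]
                            best[i] = (vy, vx)
        # re-aim each voter at the strongest location it voted into
        theta = np.arctan2(best[:, 0] - ys, best[:, 1] - xs)
    return acc
```

On a synthetic image containing one bright blob, the accumulator peak lands near the blob center. The triple loop over voters, angles, and radii is exactly the kind of independent, data-parallel work the abstract's GPU formulation distributes across threads.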
