Results 1 - 3 of 3
1.
Comput Biol Med ; 123: 103917, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32768052

ABSTRACT

Intestinal parasites are responsible for several diseases in human beings. In order to eliminate the error-prone visual analysis of optical microscopy slides, we have investigated automated, fast, and low-cost systems for the diagnosis of human intestinal parasites. In this work, we present a hybrid approach that combines the opinions of two decision-making systems with complementary properties: (DS1) a simpler system based on very fast handcrafted image feature extraction and support vector machine classification, and (DS2) a more complex system based on a deep neural network, VGG-16, for image feature extraction and classification. DS1 is much faster than DS2, but less accurate. Fortunately, the errors of DS1 are not the same as those of DS2. During training, we use a validation set to learn the probability of misclassification by DS1 on each class as a function of its confidence values. When DS1 quickly classifies all images from a microscopy slide, the method selects a number of images with a higher chance of misclassification for characterization and reclassification by DS2. Our hybrid system improves overall effectiveness without compromising efficiency, making it suitable for the clinical routine - a strategy that might also suit other real applications. As demonstrated on large datasets, the proposed system achieves, on average, Cohen's kappa values of 94.9%, 87.8%, and 92.5% on helminth eggs, helminth larvae, and protozoa cysts, respectively.
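
To make the routing step concrete, the following is a minimal sketch of the confidence-based selection idea described in the abstract, not the authors' implementation. DS1 is assumed to be a scikit-learn SVM trained with probability estimates, DS2 is any callable CNN classifier, and risk_by_class stands in for the per-class misclassification probabilities learned on the validation set; all names are illustrative.

    # Minimal sketch of the DS1 -> DS2 routing idea (illustrative names only).
    import numpy as np
    from sklearn.svm import SVC  # DS1 assumed to be an SVC(probability=True)

    def hybrid_classify(X, ds1_svm, ds2_cnn_predict, risk_by_class, budget):
        """DS1 classifies every image; the `budget` images with the highest
        estimated misclassification risk are re-classified by DS2."""
        proba = ds1_svm.predict_proba(X)           # DS1 class probabilities
        ds1_pred = proba.argmax(axis=1)
        conf = proba.max(axis=1)                   # DS1 confidence per image
        # risk_by_class[c] maps a confidence value to an estimated error
        # probability for class c, learned on a held-out validation set
        risk = np.array([risk_by_class[c](v) for c, v in zip(ds1_pred, conf)])
        to_ds2 = np.argsort(-risk)[:budget]        # most doubtful images
        final = ds1_pred.copy()
        final[to_ds2] = ds2_cnn_predict(X[to_ds2]) # DS2 overrides DS1 on those
        return final

In this sketch the budget controls the efficiency/effectiveness trade-off: the smaller it is, the closer the runtime stays to DS1 alone.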


Subject(s)
Parasites , Animals , Humans , Microscopy , Neural Networks, Computer , Support Vector Machine
2.
IEEE Trans Med Imaging ; 19(1): 55-62, 2000 Jan.
Article in English | MEDLINE | ID: mdl-10782619

ABSTRACT

We have been developing general user-steered image segmentation strategies for routine use in applications involving a large number of data sets. In the past, we have presented three segmentation paradigms: live wire, live lane, and a three-dimensional (3-D) extension of the live-wire method. In this paper, we introduce an ultra-fast live-wire method, referred to as live wire on the fly, for further reducing the user's time compared to the basic live-wire method. In live wire, 3-D/four-dimensional (4-D) object boundaries are segmented in a slice-by-slice fashion. To segment a two-dimensional (2-D) boundary, the user initially picks a point on the boundary, and all possible minimum-cost paths from this point to all other points in the image are computed via Dijkstra's algorithm. Subsequently, a live wire is displayed in real time from the initial point to any subsequent position taken by the cursor. If the cursor is close to the desired boundary, the live wire snaps onto the boundary. The cursor is then deposited and a new live-wire segment is found next. The entire 2-D boundary is specified via a set of live-wire segments in this fashion. A drawback of this method is that the speed of optimal path computation depends on image size. On modestly powered computers, even for images of modest size, some sluggishness appears in user interaction, which reduces the overall segmentation efficiency. In this work, we solve this problem by exploiting some known properties of graphs to avoid unnecessary minimum-cost path computation during segmentation. In live wire on the fly, when the user selects a point on the boundary, the live-wire segment is computed and displayed in real time from the selected point to any subsequent position of the cursor in the image, even for large images and even on low-powered computers. Based on 492 tracing experiments from an actual medical application, we demonstrate that live wire on the fly is 1.3-31 times faster than live wire for actual segmentation for varying image sizes, although the pure computational part alone is found to be about 120 times faster.
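
As a concrete illustration of the basic live-wire computation this abstract builds on (not the on-the-fly optimization itself), the following sketch runs Dijkstra's algorithm from the anchored seed point over a 4-connected pixel grid and backtracks the minimum-cost path to the current cursor position. The cost image (assumed low on object boundaries), the 4-connectivity, and the function names are assumptions for illustration.

    # A minimal live-wire sketch: shortest paths from the seed, then backtracking.
    import heapq
    import numpy as np

    def live_wire_tree(cost, seed):
        """Return the predecessor map of minimum-cost paths from `seed` (row, col)."""
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        pred = {}
        dist[seed] = 0.0
        heap = [(0.0, seed)]
        while heap:
            d, (r, c) = heapq.heappop(heap)
            if d > dist[r, c]:
                continue                          # stale queue entry, skip it
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w:
                    nd = d + cost[nr, nc]         # edge cost from the cost image
                    if nd < dist[nr, nc]:
                        dist[nr, nc] = nd
                        pred[(nr, nc)] = (r, c)
                        heapq.heappush(heap, (nd, (nr, nc)))
        return pred

    def extract_segment(pred, seed, cursor):
        """Backtrack from the cursor to the seed to read off the live-wire segment."""
        path, p = [cursor], cursor
        while p != seed:
            p = pred[p]
            path.append(p)
        return path[::-1]

In the basic method this whole shortest-path tree is recomputed for each deposited point, which is exactly the cost that live wire on the fly avoids by restricting the computation to what the cursor position actually requires.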


Subject(s)
Computer Graphics , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , User-Computer Interface , Algorithms , Artifacts , Humans
3.
Med Image Anal ; 4(4): 389-402, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11154024

ABSTRACT

We have been developing user-steered image segmentation methods for situations which require considerable human assistance in object definition. In the past, we have presented two paradigms, referred to as live-wire and live-lane, for segmenting 2D/3D/4D object boundaries in a slice-by-slice fashion, and demonstrated that live-wire and live-lane are more repeatable, with a statistical significance level of P < 0.03, and are 1.5-2.5 times faster, with a statistical significance level of P < 0.02, than manual tracing. In this paper, we introduce a 3D generalization of the live-wire approach for segmenting 3D/4D object boundaries which further reduces the time spent by the user in segmentation. In a 2D live-wire, given a slice, for two specified points (pixel vertices) on the boundary of the object, the best boundary segment is the minimum-cost path between the two points, described as a set of oriented pixel edges. This segment is found via Dijkstra's algorithm as the user anchors the first point and moves the cursor to indicate the second point. A complete 2D boundary is identified as a set of consecutive boundary segments forming a "closed", "connected", "oriented" contour. The strategy of the 3D extension is that, first, users specify contours via live-wiring on a few slices that are orthogonal to the natural slices of the original scene. If these slices are selected strategically, then we have a sufficient number of points on the 3D boundary of the object to subsequently trace optimum boundary segments automatically in all natural slices of the 3D scene. A 3D object boundary may define multiple 2D boundaries per slice. The points on each 2D boundary form an ordered set such that when the best boundary segment is computed between each pair of consecutive points, a closed, connected, oriented boundary results. The ordered set of points on each 2D boundary is found from the way the users select the orthogonal slices. Based on several validation studies involving segmentation of the bones of the foot in MR images, we found that the 3D extension of live-wire is more repeatable, with a statistical significance level of P < 0.0001, and 2-6 times faster, with a statistical significance level of P < 0.01, than the 2D live-wire method, and 3-15 times faster than manual tracing.
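
As an illustration of the automatic tracing step on one natural slice, assuming the ordered boundary points obtained from the orthogonal slices are already available, the following sketch chains optimum boundary segments between consecutive points and closes the contour. It reuses the hypothetical live_wire_tree and extract_segment helpers sketched under the previous entry and is not the authors' implementation.

    # Chain live-wire segments between consecutive ordered points on one slice.
    def trace_closed_boundary(cost_slice, ordered_points):
        """Trace optimum segments between consecutive points and close the loop."""
        contour = []
        n = len(ordered_points)
        for i in range(n):
            a = ordered_points[i]
            b = ordered_points[(i + 1) % n]       # wrap around to close the contour
            pred = live_wire_tree(cost_slice, a)  # shortest paths from point a
            seg = extract_segment(pred, a, b)
            contour.extend(seg[:-1])              # drop b to avoid duplicate vertices
        return contour

Because the points are consecutive along the boundary, the concatenated segments form the closed, connected, oriented contour the abstract describes; a 3-D object that intersects a slice in several 2-D boundaries would simply call this routine once per boundary.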


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Foot/anatomy & histology , Humans , Magnetic Resonance Imaging