Results 1 - 7 of 7
1.
Comput Biol Med; 131: 104265, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33621895

ABSTRACT

Peripheral Blood Smear (PBS) analysis is a vital routine test carried out by medical specialists to assess aspects of an individual's health. The automation of blood analysis has attracted growing research attention in recent years: it saves time and money, reduces errors, and can protect the lives of front-line workers, especially during pandemics. In this work, deep neural networks are trained on a synthetic blood-smear dataset to classify fifteen different white blood cell and platelet subtypes and morphological abnormalities. For platelet classification, a hybrid approach combining deep learning with image-processing techniques is proposed; it improved classification accuracy from 82.6% to 98.6% and macro-average precision from 76.6% to 97.6%. For white blood cell classification, a novel training scheme, Enhanced Incremental Training, is proposed that automatically recognises and handles classes that the network confuses, which would otherwise degrade its predictions. To handle these confusable classes, a procedure called "training revert" is also proposed. Applying the proposed method improved classification accuracy from 61.5% to 95% and macro-average precision from 76.6% to 94.27%.
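The two metrics the abstract reports, overall accuracy and macro-average precision, can be computed directly from predicted and true labels. A minimal sketch (illustrative metric code only, not the paper's training pipeline):

```python
import numpy as np

def accuracy_and_macro_precision(y_true, y_pred, n_classes):
    """Overall accuracy and macro-average precision from integer labels.
    Macro-averaging gives each class equal weight, regardless of size."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    acc = float(np.mean(y_true == y_pred))
    precisions = []
    for c in range(n_classes):
        predicted_c = y_pred == c
        if predicted_c.sum() == 0:
            precisions.append(0.0)  # class never predicted
        else:
            # fraction of predictions for class c that were correct
            precisions.append(float(np.mean(y_true[predicted_c] == c)))
    return acc, float(np.mean(precisions))

# toy example with 3 classes
acc, macro_p = accuracy_and_macro_precision([0, 0, 1, 1, 2, 2],
                                            [0, 1, 1, 1, 2, 0], 3)
```

Macro-averaging is the natural choice here because blood-cell subtypes are typically imbalanced, and a micro-average would be dominated by the common classes.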


Subject(s)
Deep Learning , Blood Cells , Humans , Image Processing, Computer-Assisted , Leukocytes , Neural Networks, Computer
2.
Comput Biol Med; 96: 283-293, 2018 May 1.
Article in English | MEDLINE | ID: mdl-29665537

ABSTRACT

Digital breast tomosynthesis (DBT) is a tomographic technique developed to overcome the limitations of conventional digital mammography in breast cancer screening. This paper describes a computer-aided detection (CAD) framework for mass detection in DBT. The framework operates on a set of two-dimensional (2D) slices: through plane-to-plane analysis of corresponding 2D slices from each DBT volume, it automatically learns complex slice patterns with a deep convolutional neural network (DCNN), and then applies multiple instance learning (MIL) with a randomized-trees approach to classify DBT images based on the information extracted from the slices. The framework was developed and evaluated using 5040 2D image slices derived from 87 DBT volumes. Empirical results demonstrate that it performs much better than CAD systems that use hand-crafted features and deep cardinality-restricted Boltzmann machines to detect masses in DBT.
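The MIL step treats each DBT volume as a "bag" of per-slice scores and produces one volume-level decision. A minimal sketch of the standard max-pooling MIL assumption (a volume is positive if any slice is positive); the paper's actual aggregation uses randomized trees, and the threshold here is purely illustrative:

```python
import numpy as np

def mil_volume_score(slice_scores):
    """Bag-level score = maximum instance (slice) score: the volume is
    as suspicious as its most suspicious slice."""
    return float(np.max(slice_scores))

def classify_volume(slice_scores, threshold=0.5):
    return mil_volume_score(slice_scores) >= threshold

# a volume where one slice strongly suggests a mass, and a normal one
suspicious = classify_volume([0.05, 0.12, 0.91, 0.30])
normal = classify_volume([0.05, 0.12, 0.20, 0.30])
```

Max-pooling reflects the labeling reality in screening: a radiologist marks the whole volume positive if a mass is visible in any plane, so slice-level labels are unavailable.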


Subject(s)
Breast Neoplasms/diagnostic imaging , Deep Learning , Mammography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Algorithms , Breast/diagnostic imaging , Female , Humans
3.
IEEE Trans Image Process; 24(4): 1282-96, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25667353

ABSTRACT

We propose a variable-length signature for near-duplicate image matching. An image is represented by a signature whose length varies with the number of patches in the image. A new visual descriptor, the probabilistic center-symmetric local binary pattern, is proposed to characterize the appearance of each image patch, and beyond each individual patch, the spatial relationships among patches are captured. To compute the similarity between two images, we use the earth mover's distance, which handles variable-length signatures well. The proposed image signature is evaluated in two applications: near-duplicate document image retrieval and near-duplicate natural image detection. The promising experimental results demonstrate the validity and effectiveness of the proposed variable-length signature.
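The descriptor above builds on the center-symmetric local binary pattern. A sketch of the classic (non-probabilistic) CS-LBP, which the paper's descriptor extends: each of the four opposite neighbor pairs in a pixel's 8-neighborhood contributes one bit, so codes range over 0..15 instead of LBP's 0..255:

```python
import numpy as np

def cs_lbp(patch, threshold=0.01):
    """Plain CS-LBP codes for the interior pixels of a grayscale patch.
    Bit i is set when neighbor pair i differs by more than `threshold`."""
    p = np.asarray(patch, dtype=float)
    # the four center-symmetric neighbor pairs, as shifted views
    pairs = [
        (p[0:-2, 1:-1], p[2:, 1:-1]),    # N  vs S
        (p[0:-2, 2:],   p[2:, 0:-2]),    # NE vs SW
        (p[1:-1, 2:],   p[1:-1, 0:-2]),  # E  vs W
        (p[2:, 2:],     p[0:-2, 0:-2]),  # SE vs NW
    ]
    codes = np.zeros(p[1:-1, 1:-1].shape, dtype=int)
    for bit, (a, b) in enumerate(pairs):
        codes |= ((a - b) > threshold).astype(int) << bit
    return codes

# single interior pixel: only the E/W pair differs, so only bit 2 is set
code = int(cs_lbp([[0, 0, 0],
                   [0, 0, 1],
                   [1, 0, 0]])[0, 0])
```

Comparing opposite neighbors rather than each neighbor against the center halves the code length, which keeps patch histograms compact when they are bundled into a variable-length signature.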

4.
IEEE Trans Pattern Anal Mach Intell; 35(9): 2223-37, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23868781

ABSTRACT

The affinity propagation (AP) clustering algorithm has received much attention in the past few years. AP is appealing because it is efficient, insensitive to initialization, and produces clusters at a lower error rate than other exemplar-based methods. However, its single-exemplar model becomes inadequate for modeling multiple subclasses in situations such as scene analysis and character recognition. To remedy this deficiency, we extend the single-exemplar model to a multi-exemplar one, yielding the multi-exemplar affinity propagation (MEAP) algorithm. The new model automatically determines the number of exemplars in each cluster, associated with a superexemplar, to approximate the subclasses in the category. Solving the model exactly is NP-hard, so we tackle it with max-sum belief propagation to produce neighborhood-maximum clusters, with no need to specify beforehand the number of clusters, exemplars, or superexemplars. By exploiting sparsity in the data, we also substantially reduce computation time and storage. Experimental studies show MEAP's significant improvements over other algorithms on unsupervised image categorization and the clustering of handwritten digits.
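For context, the single-exemplar baseline that MEAP extends can be sketched in a few lines of numpy: damped responsibility/availability message passing on a similarity matrix, with the diagonal ("preference") controlling how many exemplars emerge. This is standard AP, not MEAP, and the damping and iteration count are illustrative defaults:

```python
import numpy as np

def affinity_propagation(S, damping=0.7, iters=300):
    """Minimal single-exemplar affinity propagation on similarity matrix S
    (diagonal = preferences). Returns exemplar indices and point labels."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities
    A = np.zeros((n, n))  # availabilities
    for _ in range(iters):
        # r(i,k) = s(i,k) - max_{k'!=k} (a(i,k') + s(i,k'))
        M = A + S
        idx = np.argmax(M, axis=1)
        first_max = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second_max = M.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[np.arange(n), idx] = S[np.arange(n), idx] - second_max
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()      # a(k,k) = sum_{i'!=k} max(0, r(i',k))
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    exemplars = np.flatnonzero((A + R).diagonal() > 0)
    labels = exemplars[np.argmax(S[:, exemplars], axis=1)]
    labels[exemplars] = exemplars  # exemplars label themselves
    return exemplars, labels

# two well-separated blobs; preference = median off-diagonal similarity
pts = np.array([[0, 0], [0.1, 0], [0, 0.1], [5, 5], [5.1, 5], [5, 5.1]])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
np.fill_diagonal(S, np.median(S[~np.eye(len(pts), dtype=bool)]))
exemplars, labels = affinity_propagation(S)
```

The number of clusters is not specified anywhere: it falls out of the preference values, which is the property MEAP inherits and generalizes to exemplars-within-clusters.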


Subject(s)
Cluster Analysis , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Algorithms , Biometric Identification , Databases, Factual , Face/anatomy & histology , Facial Expression , Female , Handwriting , Humans
5.
IEEE Trans Image Process; 20(7): 1807-21, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21138805

ABSTRACT

A face image can be represented by a combination of large- and small-scale features. It is well known that illumination variation mainly affects the large-scale features (low-frequency components) and much less the small-scale features. Existing methods therefore extract only the small-scale features as illumination-invariant features for face recognition, while the large-scale intrinsic features are ignored. In this paper, we argue that both large- and small-scale features of a face image are important for face restoration and recognition, and we suggest that illumination normalization should be performed mainly on the large-scale features rather than on the original face image. We propose a novel method that normalizes both the Small- and Large-scale (S&L) features of a face image. A single face image is first decomposed into large- and small-scale features; illumination normalization is then performed mainly on the large-scale features, with only a minor correction applied to the small-scale features; finally, a normalized face image is generated by recombining the processed large- and small-scale features. An optional visual compensation step is also suggested to improve the visual quality of the normalized image. Experiments on the CMU-PIE, Extended Yale B, and FRGC 2.0 face databases show that the proposed method yields significantly better recognition performance and visual results than related state-of-the-art methods.
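The decomposition step can be illustrated with any smoothing filter: the smoothed image is the large-scale (low-frequency) layer and the residual is the small-scale layer, and the two sum back to the original exactly. A self-contained sketch using a box filter as a stand-in for the paper's actual decomposition:

```python
import numpy as np

def decompose(face, kernel=7):
    """Split an image into large-scale (box-smoothed) and small-scale
    (residual) layers. Reconstruction large + small == face is exact.
    Illumination normalization would then act mainly on `large`."""
    f = np.asarray(face, dtype=float)
    pad = kernel // 2
    padded = np.pad(f, pad, mode="edge")
    large = np.zeros_like(f)
    for dy in range(kernel):            # accumulate the kernel window
        for dx in range(kernel):
            large += padded[dy:dy + f.shape[0], dx:dx + f.shape[1]]
    large /= kernel * kernel
    small = f - large
    return large, small

rng = np.random.default_rng(0)
img = rng.random((32, 32))
large, small = decompose(img)
recombined = large + small  # lossless: the layers sum back to the image
```

Because the split is lossless, any correction applied to one layer changes the reconstruction in a controlled way, which is why normalizing the large-scale layer leaves the identity-bearing small-scale detail largely intact.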


Subject(s)
Algorithms , Biometric Identification/methods , Face/anatomy & histology , Image Processing, Computer-Assisted/methods , Humans , Principal Component Analysis , Reproducibility of Results
6.
IEEE Trans Syst Man Cybern B Cybern; 40(6): 1543-54, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20363682

ABSTRACT

To carry out face transformation, this paper presents new numerical algorithms consisting of two parts: harmonic models for changes of face characteristics, and splitting techniques for grayness transition. The main method combines the finite-volume method (FVM) with Delaunay triangulation to solve the Laplace equations arising in the harmonic transformation of face images. The FVM with Delaunay triangulation has the following advantages: 1) the linear algebraic equations are easy to formulate; 2) the pertinent geometric and physical properties are well retained; and 3) less CPU time is needed. Numerical and graphical experiments are reported for face transformation from a female to a male, and vice versa. The computed sequential errors are O(N⁻³/²), where N² is the number of subpixels into which each pixel is divided. These errors coincide with the analysis of the splitting-shooting method (SSM) with piecewise-constant interpolation in the earlier paper of Li and Bai. In computation, the average absolute error of the restored pixel grayness can be kept below 2 out of 256 gray levels. The FVM is as simple as the finite-difference method (FDM) and as flexible as the finite-element method (FEM); it is therefore particularly useful for large face images with huge numbers of pixels under shape distortion. The numerical transformation of face images presented here can be used not only in pattern recognition but also in resampling, image morphing, and computer animation.
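The harmonic model amounts to solving the Laplace equation with the grayness fixed on a boundary. Since the abstract notes the FVM is "as simple as the FDM", here is the FDM analogue as a self-contained sketch: Jacobi iteration on a regular grid with Dirichlet boundary values (the paper's actual solver uses the FVM on a Delaunay triangulation):

```python
import numpy as np

def solve_laplace(boundary, iters=5000):
    """Jacobi iteration for the discrete Laplace equation on a grid.
    Border values are held fixed (Dirichlet); each interior point
    relaxes toward the average of its four neighbors."""
    u = boundary.astype(float)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                                u[1:-1, :-2] + u[1:-1, 2:])
    return u

# harmonic interpolation of grayness between two boundary values
g = np.zeros((20, 20))
g[0, :] = 1.0  # top edge held at grayness 1, other edges at 0
u = solve_laplace(g)
```

The converged solution satisfies the discrete mean-value property (each interior value equals the average of its neighbors) and stays within the boundary range, which is exactly what makes harmonic models well behaved for grayness transition.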


Subject(s)
Algorithms , Artificial Intelligence , Biometry/methods , Face/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Humans
7.
IEEE Trans Pattern Anal Mach Intell; 27(10): 1509-22, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16237988

ABSTRACT

This paper presents a novel approach to verifying the word hypotheses generated by a large-vocabulary, offline handwritten word recognition system. Given a word image, the recognition system produces a ranked list of the N best recognition hypotheses, consisting of text transcripts, segmentation boundaries of the word hypotheses into characters, and recognition scores. Verification estimates the probability that each segment represents a known character class; character probabilities are then combined into word confidence scores, which are further integrated with the scores produced by the recognition system. The N-best hypothesis list is reranked by these composite scores, and finally rejection rules either accept the best hypothesis of the reranked list or reject the input word image. The verification approach improves both the word recognition rate and the reliability of the recognition system, without causing significant delays in the recognition process. The approach is described in detail, and experimental results are presented on a large database of unconstrained handwritten words extracted from postal envelopes.
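The rerank-then-reject pipeline can be sketched in a few lines. The linear weighting and the rejection threshold below are illustrative assumptions, not the paper's actual score-combination rule or parameters:

```python
def rerank_and_verify(hypotheses, alpha=0.5, reject_below=0.4):
    """Rerank an N-best list by a composite of the recognizer score and
    a verification (character-probability) score; accept the top
    hypothesis only if its composite score clears the threshold."""
    composite = [
        (alpha * rec + (1 - alpha) * ver, word)
        for word, rec, ver in hypotheses
    ]
    composite.sort(reverse=True)
    best_score, best_word = composite[0]
    return best_word if best_score >= reject_below else None

# (word, recognition score, verification score): verification demotes
# the recognizer's overconfident first choice
nbest = [("read", 0.90, 0.30), ("bead", 0.80, 0.85), ("dead", 0.60, 0.40)]
top = rerank_and_verify(nbest)
```

The rejection option is what buys reliability: in postal sorting, returning no answer is cheaper than routing a letter on a wrong word.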


Subject(s)
Algorithms , Artificial Intelligence , Electronic Data Processing/methods , Handwriting , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Computer Graphics , Documentation , Image Enhancement/methods , Models, Statistical , Numerical Analysis, Computer-Assisted , Reading , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted , Subtraction Technique , User-Computer Interface