Results 1 - 6 of 6
1.
IEEE Trans Image Process ; 24(7): 2140-52, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25823034

ABSTRACT

Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environments encountered in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which jointly learns a dictionary (a set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification of noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
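The extreme learning machine component named in the abstract can be summarized in a few lines: input weights and biases are drawn at random and kept fixed, and only the output weights are solved in closed form by least squares. The sketch below illustrates that component only, not the paper's joint dictionary-classifier optimization; the function names and toy interface are illustrative assumptions.

```python
# Minimal extreme learning machine (ELM) sketch: random fixed hidden layer,
# output weights solved by least squares. Illustrative only; the paper's
# joint sparse-dictionary/ELM optimization is not reproduced here.
import numpy as np

def elm_train(X, Y, n_hidden=200, seed=0):
    """X: (n_samples, n_features); Y: one-hot labels (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights (kept fixed)
    b = rng.standard_normal(n_hidden)                # random hidden biases (kept fixed)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y                     # output weights via least squares
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```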


Subject(s)
Emotions/physiology; Facial Expression; Facial Recognition/physiology; Image Interpretation, Computer-Assisted/methods; Machine Learning; Photography/methods; Algorithms; Humans; Image Enhancement/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity
2.
IEEE Trans Pattern Anal Mach Intell ; 35(10): 2484-97, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23969391

ABSTRACT

Most face recognition systems require faces to be detected and localized a priori. In this paper, an approach to simultaneously detect and localize multiple faces with arbitrary views and different scales is proposed. The main contribution of this paper is the introduction of a face constellation, which enables multiview face detection and localization. In contrast to other multiview approaches that require many manually labeled images for training, the proposed face constellation requires only a single reference image of a face, containing two manually indicated reference points, for initialization. Subsequent training face images from arbitrary views are automatically added to the constellation (registered to the reference image) by finding correspondences between distinctive local features. Thus, the key advantage of the proposed scheme is the minimal manual intervention required to train the face constellation. We also propose an approach to identify distinctive correspondence points between pairs of face images in the presence of a large number of false matches. To detect and localize multiple faces with arbitrary views, we then propose a probabilistic classifier-based formulation to evaluate whether a local feature cluster corresponds to a face. Experimental results on the FERET, CMU, and FDDB datasets show that the proposed approach outperforms state-of-the-art approaches for detecting faces with arbitrary pose.
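A common way to keep only distinctive correspondences between two sets of local feature descriptors, despite many false matches, is a nearest-neighbour ratio test: a match is retained only when its best distance is clearly smaller than its second-best. The sketch below uses that generic test as a stand-in; the paper's own correspondence-selection method is not reproduced, and all names are illustrative.

```python
# Generic nearest-neighbour ratio test for keeping only distinctive
# feature correspondences; a stand-in, not the paper's method.
import numpy as np

def distinctive_matches(desc_a, desc_b, ratio=0.75):
    """desc_a: (n_a, d), desc_b: (n_b, d) local feature descriptors."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)  # pairwise distances
    order = np.argsort(d, axis=1)
    best, second = order[:, 0], order[:, 1]
    idx = np.arange(len(desc_a))
    keep = d[idx, best] < ratio * d[idx, second]     # distinctly better than the runner-up
    return list(zip(idx[keep], best[keep]))          # (index in a, index in b) pairs
```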


Subject(s)
Algorithms; Artificial Intelligence; Biometry/methods; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Photography/methods; Image Enhancement/methods; User-Computer Interface
3.
Magn Reson Imaging ; 29(9): 1255-66, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21873011

ABSTRACT

This study proposes an expectation-maximization (EM)-based curve evolution algorithm for the segmentation of magnetic resonance brain images. In the proposed algorithm, the evolving curve is constrained not only by a shape-based statistical model but also by a hidden-variable model derived from the image observations. The hidden-variable model here is defined by the local voxel labeling, which is unknown and estimated from the expected likelihood function derived from the image data and prior anatomical knowledge. In the M-step, the shapes of the structures are estimated jointly by combining the hidden-variable model and the statistical prior model obtained in the training stage. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray-matter nuclei, such as the caudate, putamen, and thalamus, in three-dimensional magnetic resonance images of volunteers and patients. In terms of robustness and accuracy, the proposed EM-joint shape-based algorithm outperformed both statistical shape-model-based techniques in the same framework and a current state-of-the-art region-competition level set method.
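The hidden-variable estimate in the E-step amounts to computing posterior label probabilities for each voxel given its intensity. A minimal two-class Gaussian-mixture EM, shown below, illustrates that idea in isolation; the shape prior and curve-evolution M-step of the paper are not reproduced, and the toy interface is an assumption.

```python
# Minimal two-class Gaussian-mixture EM over voxel intensities: the E-step
# yields soft voxel labels (posteriors), the M-step re-estimates class
# parameters. Illustrative only; no shape prior or curve evolution included.
import numpy as np

def em_voxel_labels(intensities, n_iter=50):
    x = intensities.ravel().astype(float)
    mu = np.percentile(x, [25.0, 75.0])            # initial class means
    var = np.array([x.var(), x.var()]) + 1e-8      # initial class variances
    w = np.array([0.5, 0.5])                       # mixture weights
    for _ in range(n_iter):
        # E-step: posterior probability of each label given the intensity
        lik = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * lik
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update means, variances and weights from the soft labels
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
        w = nk / len(x)
    return resp.reshape(intensities.shape + (2,))  # per-voxel label posteriors
```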


Subject(s)
Diagnostic Imaging/methods; Image Processing, Computer-Assisted/methods; Joints/pathology; Adult; Algorithms; Automation; Brain/pathology; Brain Mapping/methods; Computer Simulation; Humans; Imaging, Three-Dimensional; Likelihood Functions; Models, Statistical; Models, Theoretical; Principal Component Analysis
4.
Comput Med Imaging Graph ; 34(5): 354-61, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20083384

ABSTRACT

A new joint parametric and nonparametric curve evolution algorithm is proposed for medical image segmentation. In this algorithm, both the nonlinear space of the level set function (the nonparametric model) and the linear subspace of the level set function spanned by the principal components (the parametric model) are employed during the evolution procedure. The nonparametric curve evolution can drive the curve precisely to object boundaries, while the parametric model acts as a statistical constraint, within a Bayesian framework, that matches the object shape more robustly. As a result, the new algorithm is as robust as parametric curve evolution algorithms while yielding more accurate segmentation results through the use of shape prior information. Comparative results on segmenting ventricle frontal horns and putamen shapes in MR brain images confirm the advantages of the proposed joint curve evolution algorithm.
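The parametric side of such a scheme can be pictured as projecting the current level set function onto the linear subspace spanned by the principal components of training shapes. The sketch below shows only that projection under assumed inputs (a mean level set and orthonormal components); the full joint evolution is not reproduced.

```python
# Projection of a level-set function onto a PCA shape subspace: the
# "parametric" constraint in a joint evolution scheme. Illustrative only.
import numpy as np

def project_onto_shape_subspace(phi, mean_phi, components):
    """phi, mean_phi: flattened level-set functions, shape (n_voxels,);
    components: (k, n_voxels) orthonormal principal components of training shapes."""
    coeffs = components @ (phi - mean_phi)    # shape coefficients
    return mean_phi + components.T @ coeffs   # level set constrained to the shape subspace
```

In a joint scheme of this kind, each evolution step could blend such a projected (shape-constrained) level set with the unconstrained, data-driven update.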


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Computer Simulation; Models, Biological; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
5.
IEEE Trans Pattern Anal Mach Intell ; 29(10): 1853-8, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17699928

ABSTRACT

This paper presents a new affine-invariant matching algorithm based on B-Spline modeling, which solves the problem of the non-uniqueness of B-Splines in curve matching. The method first smooths the B-Spline curve by increasing the degree of the curve, and then reduces the curve degree using a Least Square Error (LSE) approach to construct the Curvature Scale Space (CSS) image, on which CSS matching is carried out. The method combines the advantage of B-Splines, namely continuous curve representation, with the robustness of CSS matching to noise and affine transformation. Unlike other matching algorithms, it does not rely on re-sampled points on the curve, which reduces the curve matching error. The proposed algorithm has been tested by matching similar shapes from a prototype database, and the experimental results demonstrate the robustness and accuracy of the proposed method in B-Spline curve matching.
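A curvature scale space image is built by smoothing a closed curve at increasing scales and recording where the curvature changes sign. The sketch below shows that generic construction on a sampled curve; it does not include the paper's B-Spline degree-elevation and LSE degree-reduction steps, and the function names are illustrative.

```python
# Curvature-scale-space sketch: Gaussian-smooth a closed curve at increasing
# scales and record curvature zero-crossings at each scale. Illustrative only;
# the paper's B-Spline-specific steps are not reproduced.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_zero_crossings(x, y, sigmas):
    """x, y: coordinates of a closed curve sampled by arc length; sigmas: increasing scales."""
    crossings = []
    for s in sigmas:
        xs = gaussian_filter1d(x, s, mode="wrap")
        ys = gaussian_filter1d(y, s, mode="wrap")
        dx, dy = np.gradient(xs), np.gradient(ys)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = (dx * ddy - dy * ddx) / np.maximum((dx**2 + dy**2) ** 1.5, 1e-12)
        crossings.append(np.flatnonzero(np.diff(np.sign(kappa)) != 0))  # curvature sign flips
    return crossings  # one array of zero-crossing positions per scale
```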


Subject(s)
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Numerical Analysis, Computer-Assisted; Pattern Recognition, Automated/methods; Subtraction Technique; Computer Simulation; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity
6.
Neural Netw ; 18(5-6): 585-94, 2005.
Article in English | MEDLINE | ID: mdl-16112550

ABSTRACT

In image representation, recognition, and retrieval tasks, a 2D image is usually transformed into a long 1D vector and modelled as a point in a high-dimensional vector space. This vector-space model brings considerable convenience and many advantages. However, it also leads to problems such as the curse of dimensionality and the small sample size problem, and thus poses a series of challenges: for example, how to deal with numerical instability in image recognition, how to improve accuracy while lowering computational complexity and storage requirements in image retrieval, and how to enhance image quality while reducing transmission time in image transmission. In this paper, these problems are addressed, to some extent, by the proposed Generalized 2D Principal Component Analysis (G2DPCA). G2DPCA overcomes the limitations of the recently proposed 2DPCA (Yang et al., 2004) in the following respects: (1) the essence of 2DPCA is clarified, and a theoretical proof of why 2DPCA is better than Principal Component Analysis (PCA) is given; (2) 2DPCA often needs many more coefficients than PCA to represent an image, and a Bilateral-projection-based 2DPCA (B2DPCA) is proposed to remedy this drawback; (3) a Kernel-based 2DPCA (K2DPCA) scheme is developed, and the relationship between K2DPCA and KPCA (Scholkopf et al., 1998) is explored. Experimental results in face image representation and recognition show the excellent performance of G2DPCA.
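The bilateral-projection idea can be pictured as compressing each image from both sides, A ≈ L (L^T A R) R^T, with L and R built from row- and column-direction scatter matrices. The non-iterative sketch below is an illustrative simplification under assumed names; the paper's exact B2DPCA formulation may differ.

```python
# Non-iterative sketch of bilateral-projection 2DPCA: each image is compressed
# by a left and a right projection matrix derived from row- and column-
# direction scatter matrices. Illustrative simplification only.
import numpy as np

def bilateral_2dpca(images, k_rows=10, k_cols=10):
    """images: (n, h, w) array of equally sized 2D images."""
    mean = images.mean(axis=0)
    A = images - mean
    G_col = np.einsum("nij,nik->jk", A, A) / len(images)  # column-direction scatter (w x w)
    G_row = np.einsum("nji,nki->jk", A, A) / len(images)  # row-direction scatter (h x h)
    R = np.linalg.eigh(G_col)[1][:, ::-1][:, :k_cols]     # right projection (w x k_cols)
    L = np.linalg.eigh(G_row)[1][:, ::-1][:, :k_rows]     # left projection  (h x k_rows)
    feats = np.einsum("hr,nhw,wc->nrc", L, A, R)          # compact (n, k_rows, k_cols) features
    return feats, L, R, mean
```

Keeping only k_rows x k_cols coefficients per image is what makes a bilateral projection far more compact than a one-sided 2DPCA representation.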


Subject(s)
Face; Image Processing, Computer-Assisted/statistics & numerical data; Principal Component Analysis; Algorithms; Databases, Factual; Neural Networks, Computer