1.
IEEE Trans Cybern ; 47(9): 2544-2555, 2017 Sep.
Article in English | MEDLINE | ID: mdl-27483493

ABSTRACT

In recent work, the effectiveness of neighborhood-supported, model-level fuzzy aggregation was demonstrated under dynamic background conditions. The multi-feature fuzzy aggregation used in that approach relies on real-valued fuzzy similarities and is robust to low- and medium-scale dynamic backgrounds such as swaying vegetation, sprinkling water, etc. The technique, however, showed limitations under heavily dynamic backgrounds, where features carry high uncertainty that real-valued similarity values cannot capture. The proposed algorithm focuses on improving detection under heavily dynamic background conditions by modeling the uncertainty in the data with interval-valued fuzzy sets. In this paper, real-valued fuzzy aggregation is extended to interval-valued fuzzy aggregation by considering the uncertainty over the real similarity values. We develop a procedure to estimate an uncertainty that varies per feature, per pixel, and per time instant. Membership values at each pixel are determined adaptively as a Gaussian function of the uncertainty, rather than the fixed memberships used in recent fuzzy approaches, so that each feature is weighted according to its uncertainty. An interval-valued Choquet integral is evaluated over the interval similarities and the membership values to obtain an interval-valued fuzzy similarity between the model and the current frame. Qualitative and quantitative studies illustrate the effectiveness of the proposed method in handling heavily dynamic background situations compared with the state of the art.
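
The following is a minimal sketch, not the authors' code, of how an interval-valued discrete Choquet integral over per-feature similarity intervals might look. The additive fuzzy measure built from per-feature membership weights and the Gaussian uncertainty-to-membership mapping are illustrative assumptions.

    import numpy as np

    def choquet(values, weights):
        """Discrete Choquet integral of `values` w.r.t. an additive fuzzy
        measure defined by per-feature `weights` (normalised inside)."""
        order = np.argsort(values)                  # ascending similarity values
        v = np.concatenate(([0.0], values[order]))  # prepend x_(0) = 0
        w = weights[order]
        total = 0.0
        for i in range(1, len(v)):
            # measure of the set of features whose similarity is >= v[i]
            g = w[i - 1:].sum() / weights.sum()
            total += (v[i] - v[i - 1]) * g
        return total

    def interval_choquet(lower, upper, weights):
        """Apply the Choquet integral to the lower and upper interval bounds,
        yielding an interval-valued fuzzy similarity."""
        return choquet(lower, weights), choquet(upper, weights)

    # Example: three features with interval similarities and Gaussian-style
    # membership weights derived from per-feature uncertainty (illustrative).
    lo  = np.array([0.55, 0.70, 0.40])
    hi  = np.array([0.65, 0.80, 0.60])
    unc = np.array([0.10, 0.05, 0.20])              # per-feature uncertainty
    mem = np.exp(-(unc ** 2) / (2 * 0.1 ** 2))      # Gaussian of uncertainty
    print(interval_choquet(lo, hi, mem))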

2.
IEEE Trans Image Process ; 24(9): 2701-11, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25872210

ABSTRACT

Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for supervised learning-based facial expression recognition methods. This is because supervised methods cannot accommodate, within a limited amount of training data, all of the appearance variability across faces due to race, pose, lighting, facial biases, and so on. Moreover, classifying emotion in every frame is unnecessary, since the user stays neutral for the majority of the time in typical applications such as video chat or photo album/web browsing. Detecting the neutral state at an early stage, and thereby excluding those frames from emotion classification, saves computational power. In this paper, we propose a lightweight neutral-versus-emotion classification engine that acts as a pre-processor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at key emotion (KE) points using a statistical texture model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the statistical texture model. Robustness to dynamic shifts of the KE points is achieved by evaluating similarities on a subset of neighborhood patches around each KE point, using prior information about the directionality of the specific facial action units acting on that point. As a result, the proposed method improves emotion recognition (ER) accuracy while reducing the computational complexity of the ER system, as validated on multiple databases.


Subject(s)
Biometric Identification/methods; Emotions/classification; Face/physiology; Facial Expression; Image Processing, Computer-Assisted/methods; Algorithms; Databases, Factual; Humans; Models, Statistical
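
The following is a minimal sketch, not the paper's implementation, of using a lightweight neutral-vs-emotion gate before a full emotion classifier. The normalised cross-correlation patch similarity, the averaging over KE-point patches, and the threshold value are illustrative assumptions.

    import numpy as np

    def patch_similarity(patch, reference):
        """Normalised cross-correlation between a patch and its neutral reference."""
        p = (patch - patch.mean()) / (patch.std() + 1e-8)
        r = (reference - reference.mean()) / (reference.std() + 1e-8)
        return float((p * r).mean())

    def is_neutral(frame_patches, neutral_patches, threshold=0.8):
        """Declare the frame neutral when key-point patches stay close to the
        per-user neutral texture model; such frames bypass emotion classification."""
        scores = [patch_similarity(p, r)
                  for p, r in zip(frame_patches, neutral_patches)]
        return np.mean(scores) > threshold

    # Usage idea: only frames that fail the neutral test reach the heavy classifier.
    # if not is_neutral(patches, reference_patches):
    #     label = emotion_classifier(frame)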
3.
IEEE Trans Image Process ; 23(2): 645-57, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24235250

ABSTRACT

We propose a new algorithm for moving object detection in the presence of challenging dynamic background conditions. We use a set of fuzzy aggregated multi-feature similarity measures applied to multiple models corresponding to multimodal backgrounds. The algorithm is enriched with a neighborhood-supported model initialization strategy for faster convergence. Background model maintenance driven by a model-level fuzzy aggregation measure adds further robustness. Similarity functions are evaluated between the corresponding elements of the current feature vector and the model feature vectors. Concepts from the Sugeno and Choquet integrals are incorporated to compute fuzzy similarities from the ordered similarity values for each model. Model updating and the foreground/background classification decision are based on this set of fuzzy integrals. The proposed algorithm is shown to outperform other multi-model background subtraction algorithms. The approach avoids explicit offline training for background model initialization and can be initialized even from frames containing moving objects. The feature space uses a combination of intensity and statistical texture features for better object localization and robustness. Qualitative and quantitative studies illustrate how the approach mitigates a variety of challenging situations.
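
The following is a minimal sketch, under stated assumptions rather than the authors' method, of per-pixel classification with a Sugeno fuzzy integral over ordered per-feature similarities. The exponential similarity function, the additive fuzzy measure, and the threshold tau are illustrative.

    import numpy as np

    def feature_similarity(current, model, scale=1.0):
        """Per-feature similarity in [0, 1]; the exponential form is an assumption."""
        return np.exp(-np.abs(current - model) / scale)

    def sugeno(values, weights):
        """Discrete Sugeno integral w.r.t. an additive fuzzy measure from `weights`."""
        order = np.argsort(values)                  # ascending
        v = values[order]
        w = weights[order]
        g_total = weights.sum()
        best = 0.0
        for i in range(len(v)):
            g = w[i:].sum() / g_total               # measure of {features >= v[i]}
            best = max(best, min(v[i], g))
        return best

    def classify_pixel(current_feat, model_feats, weights, tau=0.6):
        """Background if the best model-level fuzzy similarity exceeds tau."""
        sims = [sugeno(feature_similarity(current_feat, m), weights)
                for m in model_feats]
        return "background" if max(sims) > tau else "foreground"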

4.
IEEE Trans Cybern ; 44(6): 870-81, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24108723

ABSTRACT

In this paper, we examine the suitability of the correlogram for background subtraction as a step toward moving object detection. The correlogram captures inter-pixel relationships within a region and proves effective for modeling dynamic backgrounds. A multi-channel correlogram, built from inter-channel and intra-channel correlograms, is proposed to exploit the full color information and the inter-pixel relations both within and across the color planes. We then derive a novel feature, termed the multi-channel kernel fuzzy correlogram, obtained by applying a fuzzy membership transformation to the multi-channel correlogram. It maps the multi-channel correlogram into a lower-dimensional space and is less sensitive to noise. Unlike traditional approaches, it handles multimodal distributions without using multiple models per pixel. It does not require ideal background frames for background model initialization and can be initialized even from frames containing moving objects. The effectiveness of the proposed method is illustrated on different video sequences.
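
The following is a minimal sketch, an illustrative interpretation rather than the paper's code, of a correlogram over a quantised channel pair followed by a Gaussian fuzzy membership transform that maps it into fewer, smoother bins. The quantisation level, the horizontal-only offset, and the Gaussian centres/width are assumptions.

    import numpy as np

    def correlogram(chan_a, chan_b, levels=8, d=1):
        """Co-occurrence of quantised values of chan_a and chan_b at offset d
        (intra-channel when chan_a is chan_b, inter-channel otherwise)."""
        qa = (chan_a.astype(float) * levels / 256).astype(int).clip(0, levels - 1)
        qb = (chan_b.astype(float) * levels / 256).astype(int).clip(0, levels - 1)
        hist = np.zeros((levels, levels))
        for i, j in zip(qa[:, :-d].ravel(), qb[:, d:].ravel()):  # horizontal offset only
            hist[i, j] += 1
        return hist / hist.sum()

    def fuzzy_membership_transform(corr, centres=3, sigma=1.5):
        """Project correlogram bins onto a few Gaussian membership functions,
        giving a lower-dimensional, noise-tolerant descriptor."""
        levels = corr.shape[0]
        c = np.linspace(0, levels - 1, centres)
        mu = np.exp(-((np.arange(levels)[:, None] - c[None, :]) ** 2) / (2 * sigma ** 2))
        return mu.T @ corr @ mu                     # centres x centres descriptor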
