Results 1 - 3 of 3
1.
Proc Int Conf Autom Face Gesture Recognit ; 28(5): 807-813, 2010 May 01.
Article in English | MEDLINE | ID: mdl-20490373

ABSTRACT

A close relationship exists between the advancement of face recognition algorithms and the availability of face databases that vary, in a controlled manner, the factors affecting facial appearance. The CMU PIE database has been very influential in advancing research on face recognition across pose and illumination. Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session, and only a few captured expressions. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged from 15 viewpoints and under 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
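
As a rough illustration of the PCA and LDA baselines mentioned in the abstract, the sketch below builds "eigenfaces"- and "fisherfaces"-style classifiers with scikit-learn. It assumes the face images have already been cropped, aligned, and flattened into row vectors; X_train, y_train, and X_test are placeholder arrays, and this is not the authors' original experimental pipeline.

# Minimal sketch (not the paper's code) of PCA and LDA face recognition
# baselines on flattened, pre-aligned face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def pca_baseline(X_train, y_train, X_test, n_components=100):
    # "Eigenfaces": project onto the leading principal components and
    # classify with a nearest-neighbour rule in the reduced space.
    model = make_pipeline(PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=1))
    model.fit(X_train, y_train)
    return model.predict(X_test)

def lda_baseline(X_train, y_train, X_test, n_components=100):
    # "Fisherfaces": PCA first to avoid singular scatter matrices, then
    # LDA for a class-discriminative projection and classification.
    model = make_pipeline(PCA(n_components=n_components),
                          LinearDiscriminantAnalysis())
    model.fit(X_train, y_train)
    return model.predict(X_test)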

2.
Article in English | MEDLINE | ID: mdl-25285316

ABSTRACT

Automatically recognizing pain from video is a very useful application, as it has the potential to alert carers to patients who are in discomfort but would otherwise be unable to communicate it (e.g. young children, patients in postoperative care). In previous work [1], a "pain-no pain" system was developed which used an AAM-SVM approach to good effect. However, as with any task involving a large amount of video data, there are memory constraints that must be respected; in the previous work these were handled by compressing the temporal signal with K-means clustering in the training phase. In visual speech recognition, it is well known that the dynamics of the signal play a vital role in recognition. As pain recognition is very similar to visual speech recognition (i.e. recognizing visual facial actions), it is our belief that compressing the temporal signal reduces the likelihood of accurately recognizing pain. In this paper, we show that by compressing the spatial signal instead of the temporal signal, we achieve better pain recognition. Our results show the importance of the temporal signal in recognizing pain; however, we also highlight some problems with this approach caused by the randomness of a patient's facial actions.
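
To make the temporal-versus-spatial compression contrast concrete, here is a rough sketch under the assumption that `features` is a (frames x dimensions) array of per-frame AAM parameters and `labels` marks pain/no-pain frames. The function names, per-class clustering, and parameter values are illustrative assumptions, not details taken from the system in [1].

# Two ways to shrink the training data before fitting the SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def temporal_compression(features, labels, n_clusters=200):
    # Compress along time: keep only K-means cluster centres of the frames,
    # discarding much of the temporal signal (the earlier approach).
    pain = KMeans(n_clusters=n_clusters, n_init=10).fit(features[labels == 1])
    no_pain = KMeans(n_clusters=n_clusters, n_init=10).fit(features[labels == 0])
    X = np.vstack([pain.cluster_centers_, no_pain.cluster_centers_])
    y = np.hstack([np.ones(n_clusters), np.zeros(n_clusters)])
    return X, y

def spatial_compression(features, labels, n_components=20):
    # Compress along the feature dimension instead: every frame is kept,
    # so the temporal signal survives, but each frame is lower-dimensional.
    X = PCA(n_components=n_components).fit_transform(features)
    return X, labels

# Either compressed representation can then be fed to an SVM classifier:
# clf = SVC(kernel="linear").fit(X, y)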

3.
IEEE Workshop Multimed Signal Proc ; 2008: 337-342, 2008 Oct 08.
Article in English | MEDLINE | ID: mdl-20689666

ABSTRACT

A common problem for object alignment algorithms arises when they have to deal with objects exhibiting unseen intra-class appearance variation. Several variants of gradient-descent algorithms, such as the Lucas-Kanade (or forward-additive) and inverse-compositional algorithms, have been proposed to deal with this issue by solving for alignment and appearance simultaneously. In [1], Baker and Matthews showed that without appearance variation, the inverse-compositional (IC) algorithm is theoretically and empirically equivalent to the forward-additive (FA) algorithm, whilst achieving a significant improvement in computational efficiency. With appearance variation, it would be intuitive to expect a similar benefit of the IC algorithm over its FA counterpart. However, to date no such comparison has been performed. In this paper we remedy this situation by performing such a comparison, and we show that the two algorithms are not equivalent once the appearance variation parameters are included. Through a number of experiments on the Multi-PIE face database, we show that greater refinement can be gained with the FA algorithm, as it is a truer solution than the IC approach.
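
For readers unfamiliar with the forward-additive formulation, the toy sketch below implements the FA Lucas-Kanade update for a translation-only warp with no appearance basis, simply to show why the gradients and Hessian must be recomputed at every iteration. It is a deliberate simplification, not the AAM fitting code used in the paper.

# Forward-additive Lucas-Kanade for a 2-D translation warp W(x; p) = x + p,
# aligning `image` to `template` (both 2-D float arrays of the same size).
import numpy as np
from scipy.ndimage import shift as warp_shift

def lucas_kanade_fa(template, image, p=(0.0, 0.0), n_iters=50, tol=1e-4):
    p = np.asarray(p, dtype=float)          # p = (row shift, column shift)
    for _ in range(n_iters):
        # Warp the input image with the current parameters: I(W(x; p)).
        warped = warp_shift(image, shift=-p, order=1)
        error = template - warped           # T(x) - I(W(x; p))
        # Gradients of the warped image; the FA algorithm recomputes these
        # every iteration, which is what makes it more expensive than IC.
        gy, gx = np.gradient(warped)
        # For a pure translation the warp Jacobian is the identity, so the
        # steepest-descent images are just the image gradients.
        sd = np.stack([gy.ravel(), gx.ravel()], axis=1)
        H = sd.T @ sd                       # Gauss-Newton Hessian (2 x 2)
        dp = np.linalg.solve(H, sd.T @ error.ravel())
        p = p + dp                          # additive parameter update
        if np.linalg.norm(dp) < tol:
            break
    return p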
