Results 1 - 14 of 14
1.
IEEE Trans Pattern Anal Mach Intell ; 42(4): 865-879, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30629493

ABSTRACT

From a single RGB image of an unknown face, taken under unknown conditions, we estimate a physically plausible lighting model. First, the 3D geometry and texture of the face are estimated by fitting a 3D Morphable Model to the 2D input. With this estimated 3D model and a Virtual Light Stage (VLS), we generate a gallery of images of the face under identical conditions but different lighting. We consider non-Lambertian reflectance and non-convex geometry to handle more realistic illumination effects in complex lighting conditions. Our hierarchical Bayesian approach automatically suppresses inconsistencies between the model and the input. It estimates the RGB values of the VLS light sources that reconstruct the input face with the estimated 3D face model. We discuss the relevance of the hierarchical approach to this minimally constrained inverse rendering problem and show how the hyperparameters can be controlled to improve the results of the algorithm for complex effects, such as cast shadows. Our algorithm contributes to single-image face modeling and analysis, provides information about the imaging conditions, and facilitates realistic reconstruction of the input image, relighting, lighting transfer and lighting design.
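As a rough illustration of the linear image-formation assumption behind a Virtual Light Stage, the sketch below estimates non-negative per-light RGB gains from a gallery of single-light renderings by non-negative least squares. The paper's actual estimator is hierarchical Bayesian; the function and array names here are hypothetical and the data are synthetic.

```python
# Hypothetical sketch: estimate per-light RGB gains of a Virtual Light Stage by
# non-negative least squares (the paper uses a hierarchical Bayesian estimator;
# this simpler variant only illustrates the linear image-formation assumption).
import numpy as np
from scipy.optimize import nnls

def estimate_light_gains(gallery, target):
    """gallery: (n_lights, H, W, 3) renderings of the fitted 3D face, one light each.
       target:  (H, W, 3) input photograph, registered to the gallery.
       Returns (n_lights, 3) non-negative RGB gains, one row per light source."""
    n_lights = gallery.shape[0]
    gains = np.zeros((n_lights, 3))
    for c in range(3):                                   # solve each colour channel separately
        A = gallery[..., c].reshape(n_lights, -1).T      # pixels x lights
        b = target[..., c].ravel()
        gains[:, c], _ = nnls(A, b)
    return gains

# toy usage with random stand-in data
rng = np.random.default_rng(0)
gallery = rng.random((12, 32, 32, 3))
true_gains = rng.random((12, 3))
target = np.einsum('lhwc,lc->hwc', gallery, true_gains)
print(np.allclose(estimate_light_gains(gallery, target), true_gains, atol=1e-6))
```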

2.
J Rehabil Assist Technol Eng ; 3: 2055668316674582, 2016.
Article in English | MEDLINE | ID: mdl-31186914

ABSTRACT

We present PictureSensation, a mobile application for the hapto-acoustic exploration of images. It is designed to give the visually impaired direct perceptual access to images via an acoustic signal. PictureSensation introduces a swipe-gesture-based, speech-guided, barrier-free user interface to guarantee autonomous usage by a blind user. It implements a recently proposed exploration and audification principle, which harnesses exploration methods that the visually impaired are used to from everyday life. In brief, a user actively explores an image on a touch screen and receives auditory feedback about its content at the current finger position. PictureSensation provides an extensive tutorial and training mode that allows a blind user to become familiar with the application itself as well as with the principles of the image-content-to-sound transformation, without assistance from a sighted person. We show our application's potential to help visually impaired individuals explore, interpret and understand entire scenes, even on small smartphone screens. Providing more than just verbal scene descriptions, PictureSensation is a valuable mobile tool that grants the blind access to the visual world through exploration, anywhere.
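A toy sketch of the general audification idea, not the app's actual image-to-sound mapping: brightness at the touched pixel controls the pitch of a short tone. All names and the frequency range are assumptions for illustration.

```python
# Hypothetical sketch of the basic audification idea (touch position -> tone);
# the released app uses its own mapping from image content to sound.
import numpy as np

def tone_for_position(image, x, y, sample_rate=44100, duration=0.05):
    """image: (H, W) grayscale array in [0, 1]; (x, y): touch coordinates.
       Returns a mono sine burst whose frequency rises with local brightness."""
    brightness = float(image[y, x])
    freq = 220.0 + brightness * (1760.0 - 220.0)        # map [0, 1] -> 220..1760 Hz
    t = np.arange(int(sample_rate * duration)) / sample_rate
    return 0.5 * np.sin(2 * np.pi * freq * t)

img = np.linspace(0, 1, 64 * 64).reshape(64, 64)        # toy gradient image
print(tone_for_position(img, 10, 50).shape)
```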

3.
Neuroimage Clin ; 2: 320-31, 2013.
Article in English | MEDLINE | ID: mdl-24179786

ABSTRACT

Individuals with Autism Spectrum Disorder (ASD) appear to show a general face discrimination deficit across a range of tasks, including social-emotional judgments as well as identification and discrimination. However, functional magnetic resonance imaging (fMRI) studies probing the neural bases of these behavioral differences have produced conflicting results: while some studies have reported reduced or no activity to faces in ASD in the Fusiform Face Area (FFA), a key region in human face processing, others have suggested more typical activation levels, possibly reflecting limitations of conventional fMRI techniques in characterizing neuron-level processing. Here, we test the hypotheses that face discrimination abilities are highly heterogeneous in ASD and are mediated by FFA neurons, with differences in face discrimination abilities being quantitatively linked to variations in the estimated selectivity of face neurons in the FFA. Behavioral results revealed a wide distribution of face discrimination performance in ASD, ranging from typical performance to chance-level performance. Despite this heterogeneity in perceptual abilities, individual face discrimination performance was well predicted by neural selectivity to faces in the FFA, estimated both via a novel analysis of local voxel-wise correlations and via the more commonly used fMRI rapid adaptation technique. Thus, face processing in ASD appears to rely on the FFA as in typical individuals, differing quantitatively but not qualitatively. These results mechanistically link, for the first time, variations in the ASD phenotype to specific differences in the typical face processing circuit, identifying promising targets for interventions.
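As a hedged illustration only, the snippet below shows one simple way local voxel-wise correlations within a region of interest could be summarized; the study's actual selectivity estimate may differ, and the data here are random stand-ins.

```python
# Hypothetical illustration: summarise local voxel-wise correlations in an ROI
# by the mean pairwise correlation between voxel response profiles.
import numpy as np

def mean_local_correlation(responses):
    """responses: (n_voxels, n_stimuli) response amplitudes within an ROI.
       Returns the mean pairwise Pearson correlation between voxel profiles."""
    r = np.corrcoef(responses)                  # voxel-by-voxel correlation matrix
    iu = np.triu_indices_from(r, k=1)           # unique off-diagonal pairs
    return r[iu].mean()

rng = np.random.default_rng(1)
print(mean_local_correlation(rng.standard_normal((50, 20))))
```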

4.
Clin Endocrinol (Oxf) ; 75(2): 226-31, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21521289

ABSTRACT

BACKGROUND: Early diagnosis of a number of endocrine diseases is theoretically possible by examination of facial photographs. One of these is acromegaly. If acromegaly were found early in the course of the disease, morbidity would be lessened and cures more likely. OBJECTIVES, DESIGN, PATIENTS, MEASUREMENTS: Our objective was to develop a computer program that would separate 24 facial photographs of patients with acromegaly from those of 25 normal subjects. The key to doing this was a previously developed database consisting of three-dimensional representations of the heads of 200 normal persons (SIGGRAPH '99 Conference Proceedings, 1999). We transformed our 49 two-dimensional photographs into three-dimensional constructs and then, using the computer program, attempted to separate them into those with and without the features of acromegaly. We compared the accuracy of the computer with that of 10 generalist physicians. A second objective was to examine, by subjective analysis, the features of acromegaly in the normal subjects of our photographic database. RESULTS: The accuracy of the computer model was 86%; the average of the 10 physicians was 26%. The worst individual physician scored 16%, the best 90%. The faces of the 200 normal subjects, the original faces in the database, could be divided into four computer-averaged groups ranging from those with fewer to those with more features of acromegaly. CONCLUSIONS: The present computer model can sort photographs of patients with acromegaly from photographs of normal subjects and is much more accurate than sorting by practicing generalists. Even normal subjects have some of the features of acromegaly. Screening with this approach can be improved by automation of the procedure, software development, and the identification of target populations in which the prevalence of acromegaly may be higher than in the general population.


Subject(s)
Acromegaly/diagnosis; Diagnosis, Computer-Assisted/standards; Early Diagnosis; Face; Humans; Maxillofacial Development; Photography; Physicians; Sensitivity and Specificity; Software
5.
J Cogn Neurosci ; 22(7): 1570-82, 2010 Jul.
Article in English | MEDLINE | ID: mdl-19642884

ABSTRACT

We examined the neural response patterns for facial identity independent of viewpoint and for viewpoint independent of identity. Neural activation patterns for identity and viewpoint were collected in an fMRI experiment. Faces appeared in identity-constant blocks with variable viewpoint, and in viewpoint-constant blocks with variable identity. Pattern-based classifiers were used to discriminate neural response patterns for all possible pairs of identities and viewpoints. To increase the likelihood of detecting distinct neural activation patterns for identity, we tested maximally dissimilar "face"-"antiface" pairs as well as normal face pairs. Neural response patterns for four of six identity pairs, including the "face"-"antiface" pairs, were discriminated at levels above chance. A behavioral experiment showed agreement between perceptual and neural discrimination, indicating that the classifier tapped a high-level visual identity code. Neural activity patterns across a broad span of ventral temporal (VT) cortex, including the fusiform gyrus and lateral occipital areas (LOC), were required for identity discrimination. For viewpoint, five of six viewpoint pairs were discriminated neurally. Viewpoint discrimination was most accurate with a broad span of VT cortex, but the neural and perceptual discrimination patterns differed. Less accurate discrimination of viewpoint, more consistent with human perception, was found in the right posterior superior temporal sulcus, suggesting redundant viewpoint codes optimized for different functions. This study provides the first evidence that neural activation patterns for identity and viewpoint can be dissociated from each other.
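A minimal sketch of a pairwise pattern classifier over voxel response patterns, assuming a linear SVM with cross-validation; the study's own classifier and validation scheme are not specified here, and the data below are synthetic stand-ins.

```python
# Hedged sketch: cross-validated pairwise discrimination of two fMRI conditions
# (e.g. two identities) from voxel response patterns with a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def pairwise_discrimination(patterns_a, patterns_b, folds=5):
    """patterns_a/b: (n_blocks, n_voxels) response patterns for two conditions.
       Returns mean cross-validated classification accuracy."""
    X = np.vstack([patterns_a, patterns_b])
    y = np.r_[np.zeros(len(patterns_a)), np.ones(len(patterns_b))]
    clf = LinearSVC(C=1.0, max_iter=10000)
    return cross_val_score(clf, X, y, cv=folds).mean()

rng = np.random.default_rng(2)
a = rng.standard_normal((20, 300)) + 0.5     # condition A patterns
b = rng.standard_normal((20, 300)) - 0.5     # condition B patterns
print(pairwise_discrimination(a, b))
```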


Subject(s)
Brain Mapping/psychology; Discrimination, Psychological/physiology; Occipital Lobe/physiology; Pattern Recognition, Visual/physiology; Temporal Lobe/physiology; Adult; Face; Female; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Photic Stimulation; Young Adult
6.
Neuroimage ; 47(4): 1809-18, 2009 Oct 01.
Article in English | MEDLINE | ID: mdl-19497375

ABSTRACT

The human brain recognizes faces by means of two main diagnostic sources of information: three-dimensional (3D) shape and two-dimensional (2D) surface reflectance. Here we used event-related potentials (ERPs) in a face adaptation paradigm to examine the time course of processing for these two types of information. With a 3D morphable model, we generated pairs of faces that were either identical, varied in 3D shape only, in 2D surface reflectance only, or in both. Sixteen human observers discriminated individual faces in these four types of pairs, in which a first (adapting) face was followed shortly by a second (test) face. Behaviorally, observers were equally accurate and fast when discriminating individual faces based on either 3D shape or 2D surface reflectance alone, but were faster when both sources of information were present. As early as the face-sensitive N170 component (approximately 160 ms following the test face), amplitude was larger for changes in 3D shape than for repetition of the same face, especially over the right occipito-temporal electrodes. However, changes in 2D reflectance between the adapter and target face did not increase the N170 amplitude. At about 250 ms, both 3D shape and 2D reflectance contributed equally, and the largest difference in amplitude relative to repetition of the same face was found when changes in 3D shape and 2D reflectance were combined, in line with observers' behavior. These observations indicate that evidence for recognizing individual faces accumulates faster in right-hemisphere human visual cortex from diagnostic 3D shape information than from 2D surface reflectance information.


Subject(s)
Electroencephalography/methods; Evoked Potentials, Visual/physiology; Face; Mental Recall/physiology; Pattern Recognition, Visual/physiology; Task Performance and Analysis; Visual Cortex/physiology; Adult; Female; Humans; Imaging, Three-Dimensional; Male
7.
Psychol Sci ; 20(3): 318-25, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19207696

ABSTRACT

Representations of individual faces evolve with experience to support progressively more robust recognition. Knowledge of three-dimensional face structure is required to predict an image of a face as illumination and viewpoint change. Robust recognition across such transformations can be achieved with representations based on multiple two-dimensional views, three-dimensional structure, or both. We used face-identity adaptation in a familiarization paradigm to address a long-standing controversy concerning the role of two-dimensional versus three-dimensional information in face representations. We reasoned that if three-dimensional information is coded in the representations of familiar faces, then learning a new face using images generated by one three-dimensional transformation should enhance the robustness of the representation to another type of three-dimensional transformation. Familiarization with multiple views of faces enhanced the transfer of face-identity adaptation effects across changes in illumination by compensating for a generalization cost at a novel test viewpoint. This finding demonstrates a role for three-dimensional information in representations of familiar faces.


Subject(s)
Disclosure; Facial Expression; Social Identification; Humans; Recognition, Psychology
8.
J Vis ; 8(3): 20.1-15, 2008 Mar 24.
Article in English | MEDLINE | ID: mdl-18484826

ABSTRACT

Humans typically have a remarkable memory for faces. Nonetheless, in some cases they can be fooled. Experiments described in this paper provide new evidence for an effect in which observers falsely "recognize" a face that they have never seen before. The face is a chimera (prototype) built from parts extracted from previously viewed faces. It is known that faces of this kind can be confused with truly familiar faces, a result referred to as the prototype effect. However, recent studies have failed to find evidence for a full effect, one in which the prototype is regarded not only as familiar, but as more familiar than faces which have been seen before. This study sought to reinvestigate the effect. In a pair of experiments, evidence is reported for the full effect based on both an old/new discrimination task and a familiarity ranking task. The results are shown to be consistent with a recognition model in which faces are represented as combinations of reusable, abstract features. In a final experiment, novel predictions of the model are verified by comparing the size of the prototype effect for upright and upside-down faces. Despite the fundamentally piecewise nature of the model, an explanation is provided as to how it can also account for the sensitivity of observers to configural and holistic cues. This discussion is backed up with the use of an unsupervised network model. Overall, the paper describes how an abstract feature-based model can reconcile a range of results in the face recognition literature and, in turn, lessen currently perceived differences between the representation of faces and other objects.


Subject(s)
Computer Simulation; Facial Expression; Form Perception/physiology; Models, Theoretical; Pattern Recognition, Visual/physiology; Humans; Photic Stimulation; Psychological Tests
9.
Vision Res ; 47(4): 525-31, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17207832

ABSTRACT

Recent studies show that face adaptation effects partially transfer across three-dimensional viewpoint change. Here we investigated whether the degree of adaptation transfer is mediated by experience with a face. We manipulated face familiarity and measured identity aftereffects both within- and across-viewpoint. Familiarity enhanced the overall strength of identity adaptation as well as the degree to which adaptation transferred across-viewpoint change. These findings support the idea that transfer effects in adaptation vary as a function of experience with particular faces, and suggest the use of adaptation as a tool for tracking face representations as they develop.


Subject(s)
Face; Pattern Recognition, Visual; Recognition, Psychology; Adaptation, Physiological; Adult; Figural Aftereffect; Humans; Male; Photic Stimulation/methods; Psychophysics; Transfer, Psychology
10.
Psychol Sci ; 17(6): 493-500, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16771799

ABSTRACT

Sensory adaptation and visual aftereffects have long given insight into the neural codes underlying basic dimensions of visual perception. Recently discovered perceptual adaptation effects for complex shapes like faces can offer similar insight into high-level visual representations. In the experiments reported here, we demonstrated first that face adaptation transfers across a substantial change in viewpoint and that this transfer occurs via processes unlikely to be specific to faces. Next, we probed the visual codes underlying face recognition using face morphs that varied selectively in reflectance or shape. Adaptation to these morphs affected the perception of "opposite" faces both from the same viewpoint and from a different viewpoint. These results are consistent with high-level face representations that pool local shape and reflectance patterns into configurations that specify facial appearance over a range of three-dimensional viewpoints. These findings have implications for computational models of face recognition and for competing neural theories of face and object recognition.


Subject(s)
Face; Form Perception; Visual Perception; Discrimination, Psychological; Facial Expression; Humans; Judgment; Recognition, Psychology
11.
Neuron ; 50(1): 159-72, 2006 Apr 06.
Article in English | MEDLINE | ID: mdl-16600863

ABSTRACT

Understanding the neural mechanisms underlying object recognition is one of the fundamental challenges of visual neuroscience. While neurophysiology experiments have provided evidence for a "simple-to-complex" processing model based on a hierarchy of increasingly complex image features, behavioral and fMRI studies of face processing have been interpreted as incompatible with this account. We present a neurophysiologically plausible, feature-based model that quantitatively accounts for face discrimination characteristics, including face inversion and "configural" effects. The model predicts that face discrimination is based on a sparse representation of units selective for face shapes, without the need to postulate additional, "face-specific" mechanisms. We derive and test predictions that quantitatively link model FFA face neuron tuning, neural adaptation measured in an fMRI rapid adaptation paradigm, and face discrimination performance. The experimental data are in excellent agreement with the model prediction that discrimination performance should asymptote as faces become dissimilar enough to activate different neuronal populations.
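A toy numeric sketch, with assumed Gaussian tuning parameters in a one-dimensional "face space", of why the population difference signal (and hence the predicted discrimination performance) saturates once two faces are dissimilar enough to drive disjoint sets of units, as the model predicts.

```python
# Toy sketch with assumed parameters: Gaussian-tuned units in a 1-D face space.
# The discrimination signal (norm of the population response difference) grows
# with dissimilarity and then asymptotes when the two faces activate disjoint units.
import numpy as np

centers = np.linspace(-5, 5, 101)            # preferred faces of the model units
sigma = 1.0                                   # tuning width (assumed)

def population(face):
    return np.exp(-(face - centers) ** 2 / (2 * sigma ** 2))

for d in [0.5, 1.0, 2.0, 4.0, 8.0]:           # dissimilarity between the two faces
    signal = np.linalg.norm(population(-d / 2) - population(d / 2))
    print(f"dissimilarity {d:>4}: discrimination signal {signal:.3f}")
```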


Subject(s)
Cerebral Cortex; Discrimination, Psychological/physiology; Face; Magnetic Resonance Imaging; Models, Neurological; Pattern Recognition, Visual/physiology; Adolescent; Adult; Brain Mapping; Cerebral Cortex/blood supply; Cerebral Cortex/physiology; Female; Field Dependence-Independence; Generalization, Psychological/physiology; Humans; Male; Models, Psychological; Oxygen/blood; Photic Stimulation/methods; Predictive Value of Tests; Psychophysics/methods; Recognition, Psychology
12.
Med Image Comput Comput Assist Interv ; 9(Pt 2): 495-503, 2006.
Article in English | MEDLINE | ID: mdl-17354809

ABSTRACT

Acromegaly is a rare disorder which affects about 50 of every million people. The disease typically causes swelling of the hands, feet, and face, and eventually permanent changes to areas such as the jaw, brow ridge, and cheek bones. The disease is often missed by physicians and progresses further than it would if it were identified and treated earlier. We consider a semi-automated approach to detecting acromegaly, using a novel combination of support vector machines (SVMs) and a morphable model. Our training set consists of 24 frontal photographs of acromegalic patients and 25 of disease-free subjects. We modelled each subject's face in an analysis-by-synthesis loop using the three-dimensional morphable face model of Blanz and Vetter. The model parameters capture many features of the 3D shape of the subject's head from just a single photograph and are used directly for classification. We report encouraging results for a classifier built from this training set of real human subjects.
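A hedged sketch of the classification step, assuming a linear SVM over per-subject morphable-model coefficients with leave-one-out cross-validation; the coefficient vectors below are random stand-ins for the shape parameters the paper extracts with the Blanz-Vetter model.

```python
# Hedged sketch: linear SVM over morphable-model coefficients, leave-one-out CV.
# The coefficients here are synthetic; in the paper they come from model fitting.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(3)
coeffs = np.vstack([rng.standard_normal((24, 60)) + 0.4,    # acromegalic subjects
                    rng.standard_normal((25, 60)) - 0.4])   # disease-free subjects
labels = np.r_[np.ones(24), np.zeros(25)]

clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, coeffs, labels, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.2f}")
```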


Subject(s)
Acromegaly/pathology; Cephalometry/methods; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Photography/methods; Algorithms; Artificial Intelligence; Computer Simulation; Humans; Image Enhancement/methods; Mass Screening/methods; Models, Biological; Reproducibility of Results; Sensitivity and Specificity
13.
Eur J Oral Sci ; 113(4): 333-40, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16048526

ABSTRACT

A mathematical representation of tooth morphology may help to improve and automate restorative computer-aided design processes, virtual dental education, and parametric morphology. However, to date, no quantitative formulation has been identified for the description of dental features. The aim of this study was to establish and to validate a mathematical process for describing the morphology of first lower molars. Stone replicas of 170 caries-free first lower molars from young patients were measured three-dimensionally with a resolution of about 100,000 points. First, the average tooth was computed, which captures the common features of the molar's surface quantitatively. For this, the crucial step was to establish a dense point-to-point correspondence between all teeth. The algorithm did not involve any prior knowledge about teeth. In a second step, principal component analysis was carried out. Repeated for 3 different reference teeth, the procedure yielded average teeth that were nearly independent of the reference (less than ±40 µm). Additionally, the results indicate that only a few principal components determine a high percentage of the three-dimensional shape variability of first lower molars (e.g. the first five principal components describe 52% of the total variance, the first 10 principal components 72%, and the first 20 principal components 83%). With the novel approach presented in this paper, surfaces of teeth can be described efficiently in terms of only a few parameters. This mathematical representation is called the 'biogeneric tooth'.
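A minimal sketch of the two computational steps described above (average tooth, then principal component analysis), assuming the tooth surfaces have already been brought into dense point-to-point correspondence; the correspondence algorithm itself is not reproduced, and the data are random stand-ins.

```python
# Minimal sketch: average tooth plus PCA over corresponding surface points.
import numpy as np

def tooth_pca(teeth):
    """teeth: (n_teeth, n_points, 3) surface points in dense correspondence.
       Returns the average tooth and the cumulative explained variance per PC."""
    X = teeth.reshape(len(teeth), -1)
    mean_tooth = X.mean(axis=0)
    _, s, _ = np.linalg.svd(X - mean_tooth, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    return mean_tooth.reshape(-1, 3), explained

rng = np.random.default_rng(4)
teeth = rng.standard_normal((170, 500, 3))       # stand-in for 170 scanned molars
avg, cum = tooth_pca(teeth)
print("average tooth:", avg.shape)
print("cumulative variance after 5, 10, 20 PCs:", cum[[4, 9, 19]].round(2))
```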


Subject(s)
Computer-Aided Design; Dental Prosthesis Design/methods; Models, Biological; Models, Dental; Molar/anatomy & histology; Algorithms; Humans; Image Processing, Computer-Assisted; Imaging, Three-Dimensional; Principal Component Analysis
14.
IEEE Trans Vis Comput Graph ; 11(3): 296-305, 2005.
Article in English | MEDLINE | ID: mdl-15868829

ABSTRACT

In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and estimate their parameters in a least-squares fit to the image data. For each of the surface points, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered under novel orientations and lighting conditions.
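A hedged sketch of the per-region least-squares fitting step, using a simple Lambertian-plus-Phong reflectance model as one common analytical BRDF choice; the geometry terms and observations below are synthetic stand-ins for the registered multi-view image data.

```python
# Hedged sketch: fit Lambertian + Phong parameters for one material region by
# least squares; observations are synthetic, not real multi-view image data.
import numpy as np
from scipy.optimize import least_squares

def phong(params, n_dot_l, r_dot_v):
    kd, ks, shininess = params
    return kd * n_dot_l + ks * np.clip(r_dot_v, 0, None) ** shininess

rng = np.random.default_rng(5)
n_dot_l = rng.uniform(0.1, 1.0, 200)          # cos(angle normal, light) per sample
r_dot_v = rng.uniform(0.0, 1.0, 200)          # cos(angle reflection, view)
true = (0.6, 0.3, 20.0)
observed = phong(true, n_dot_l, r_dot_v) + 0.01 * rng.standard_normal(200)

fit = least_squares(lambda p: phong(p, n_dot_l, r_dot_v) - observed,
                    x0=[0.5, 0.5, 10.0], bounds=([0, 0, 1], [1, 1, 200]))
print(fit.x)   # estimated (kd, ks, shininess), close to the synthetic ground truth
```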


Subject(s)
Algorithms; Artificial Intelligence; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Models, Biological; Pattern Recognition, Automated/methods; Computer Graphics; Computer Simulation; Humans; Image Enhancement/methods; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Photometry/methods; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted; Subtraction Technique; User-Computer Interface