Results 1 - 10 of 10
1.
Sensors (Basel) ; 23(2)2023 Jan 14.
Article in English | MEDLINE | ID: mdl-36679775

ABSTRACT

Most well-established eye-tracking research paradigms adopt remote systems, which typically feature regular flat screens of limited width. Limitations of current eye-tracking methods over a wide area include calibration, the significant loss of data due to head movements, and the reduction of data quality over the course of an experimental session. Here, we introduce a novel method of tracking gaze and head movements that combines a wide field of view with an offline calibration procedure to enhance the accuracy of measurements. A 4-camera Smart Eye Pro system was adapted for infant research to detect gaze movements across 126° of the horizontal meridian. To accurately track this visual area, an online system calibration was combined with a new offline gaze calibration procedure. Results revealed that the proposed system successfully tracked infants' head and gaze beyond the average screen size. The implementation of an offline calibration procedure improved the validity and spatial accuracy of measures by correcting a systematic top-right error (1.38° mean horizontal error and 1.46° mean vertical error). This approach could be critical for deriving accurate physiological measures from the eye and represents a substantial methodological advance for tracking looking behaviour across both central and peripheral regions. The offline calibration is particularly useful for work with developing populations, such as infants, and with people who may have difficulty following instructions.
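The offline correction described above amounts to estimating a constant angular bias from calibration trials and subtracting it from all gaze samples. A minimal sketch of that idea in Python, not the authors' implementation; the array layout and function name are assumptions:

```python
import numpy as np

def offline_bias_correction(gaze_xy, calib_gaze_xy, calib_target_xy):
    """Subtract a constant gaze bias estimated offline from calibration trials.

    gaze_xy         : (N, 2) raw gaze angles in degrees (azimuth, elevation)
    calib_gaze_xy   : (M, 2) gaze recorded while the infant fixated known targets
    calib_target_xy : (M, 2) true angular positions of those targets
    (The array layout is an assumption for illustration.)
    """
    # Mean signed error per axis, e.g. a systematic top-right shift
    bias = np.nanmean(calib_gaze_xy - calib_target_xy, axis=0)
    return gaze_xy - bias
```

A bias on the order of the errors reported above (about 1.4° horizontally and 1.5° vertically) would be removed post hoc in this way.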


Subject(s)
Eye Movements , Visual Fields , Humans , Infant , Fixation, Ocular , Calibration , Head Movements/physiology
2.
J Exp Child Psychol ; 226: 105554, 2023 02.
Article in English | MEDLINE | ID: mdl-36208491

ABSTRACT

From 10 months of age, human infants begin to understand the function of the eyes in others' looking behavior, to the point where they preferentially orient toward an object if the social partner has open rather than closed eyes. Thus far, gaze following has been investigated in controlled laboratory paradigms. The current study investigated this early ability using a remote live testing procedure, testing infants in their everyday environment while manipulating whether the experimenter could or could not see some target objects. Looking behavior was assessed in 32 infants aged 11 and 12 months, with the experimenter's eye status (open vs. closed eyes) varied in a between-participants design. Results showed that infants followed the gaze of a virtual social partner and that they preferentially followed open rather than closed eyes. These data generalize past laboratory findings to a noisier home environment and demonstrate that infants can process the gaze of a virtual partner interacting with them in a live setup.


Subject(s)
Attention , Infant Behavior , Infant , Humans , Fixation, Ocular
3.
Dev Psychobiol ; 64(4): e22274, 2022 05.
Article in English | MEDLINE | ID: mdl-35452547

ABSTRACT

Most fundamental aspects of information processing in infancy have been investigated primarily using simplified images presented centrally on computer displays. This approach lacks ecological validity because, in everyday viewing, most visual information arrives across the visual field, over a range of eccentricities. Few studies, however, have examined the extent and characteristics of infant peripheral vision after 7 months of age. The present work investigates the limits of infant (9-month-old) and adult visual fields using a detection task. Gabor patches were presented at one of six eccentricities per hemifield, from 35° up to 60° in the left and right mid-peripheral visual fields. Detection rates at different eccentricities were measured from video recordings (infant sample) or key-press responses (adult sample). Infant performance declined below chance level beyond 50°, whereas adults performed at ceiling across all eccentricities. The performance of 9-month-olds was unequal even within 50°, suggesting regions of differential sensitivity to low-level visual information in the infant periphery. These findings are key to understanding the limits of the infant visual field and, in turn, will inform the design of future infant studies.
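As a rough illustration of the analysis described (a detection rate per eccentricity compared against chance), here is a sketch; the argument names and the 0.5 chance level are assumptions, not details taken from the study:

```python
import numpy as np
from scipy.stats import binomtest

def detection_by_eccentricity(eccentricity, detected, chance=0.5):
    """Tabulate detection rate per eccentricity and test it against chance.

    eccentricity : per-trial eccentricity in degrees
    detected     : per-trial boolean, True if the target was detected
    chance       : assumed guessing rate (0.5 here; the real value depends
                   on how trials were coded)
    """
    eccentricity = np.asarray(eccentricity, dtype=float)
    detected = np.asarray(detected, dtype=bool)
    results = {}
    for ecc in np.unique(eccentricity):
        mask = eccentricity == ecc
        hits, n = int(detected[mask].sum()), int(mask.sum())
        results[float(ecc)] = (hits / n, binomtest(hits, n, chance).pvalue)
    return results
```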


Subject(s)
Visual Fields , Visual Perception , Adult , Cognition , Humans , Infant , Visual Perception/physiology
4.
Brain Sci ; 12(4)2022 Apr 13.
Article in English | MEDLINE | ID: mdl-35448024

ABSTRACT

Human infants are highly sensitive to social information in their visual world. In laboratory settings, researchers have mainly studied the development of social information processing using faces presented on standard computer displays, in paradigms exploring face-to-face, direct eye contact social interactions. This is a simplification of a richer visual environment in which social information derives from the wider visual field and detection involves navigating the world with eye, head and body movements. The present study measured 9-month-old infants' sensitivity to face-like configurations across mid-peripheral visual areas using a detection task. Upright and inverted face-like stimuli appeared at one of three eccentricities (50°, 55° or 60°) in the left and right hemifields. Detection rates at different eccentricities were measured from video recordings. Results indicated that infant performance was heterogeneous and dropped beyond 55°, with a marginal advantage for targets appearing in the left hemifield. Infants' orienting behaviour was not influenced by the orientation of the target stimulus. These findings are key to understanding how face stimuli are perceived outside foveal regions and inform the design of infant paradigms involving stimulus presentation across a wider field of view, in more naturalistic visual environments.

5.
J Vis ; 19(11): 2, 2019 09 03.
Article in English | MEDLINE | ID: mdl-31480073

ABSTRACT

Research has shown that participants can extract the average facial expression from a set of faces presented at fixation. In this study, we investigated whether this performance is modulated by eccentricity, given that neural resources are limited outside the foveal region. We also examined whether there would be compulsory averaging in the parafovea, as previously reported for the orientation of Gabor patches by Parkes, Lund, Angelucci, Solomon, and Morgan (2001). Participants were presented with expressive faces (alone or in sets of nine, at fixation or at 3° to the left or right) and were asked to identify the expression of the central target face or to estimate the average expression of the set. Our results revealed that, although participants were able to extract average facial expressions in both central and parafoveal conditions, their performance was superior in the parafovea, suggesting facilitated averaging outside the fovea by peripheral mechanisms. Furthermore, regardless of whether the task was to judge the expression of the central target or the set average, participants tended to identify central targets' expressions in the fovea but were compelled to average in the parafovea, a finding consistent with compulsory averaging. The data also supported averaging over substitution models of crowding. We conclude that the ability to extract average expressions from sets of faces and to identify single targets' facial expressions is influenced by eccentricity.


Subject(s)
Facial Expression , Facial Recognition/physiology , Adult , Analysis of Variance , Emotions/physiology , Female , Fixation, Ocular/physiology , Fovea Centralis/physiology , Humans , Male , Models, Biological , Orientation/physiology , Young Adult
6.
J Vis ; 19(1): 9, 2019 01 02.
Article in English | MEDLINE | ID: mdl-30650432

ABSTRACT

We have been developing a computational visual difference predictor model that can predict how human observers rate the perceived magnitude of suprathreshold differences between pairs of full-color naturalistic scenes (To, Lovell, Troscianko, & Tolhurst, 2010). The model is based closely on V1 neurophysiology and has recently been updated to implement more realistically the sequential application of nonlinear inhibitions (contrast normalization followed by surround suppression; To, Chirimuuta, & Tolhurst, 2017). It was originally built on a reliable luminance model (Watson & Solomon, 1997), which we have extended to the red/green and blue/yellow opponent planes, assuming that the three planes (luminance, red/green, and blue/yellow) can be modeled similarly to each other with narrow-band oriented filters. This paper examines whether this may be a false assumption by decomposing our original full-color stimulus images into monochromatic and isoluminant variants, which observers rate separately and which we model separately. The ratings for the original full-color scenes correlate better with the new ratings for the monochromatic variants than for the isoluminant ones, suggesting that luminance cues carry more weight in observers' ratings of full-color images. The ratings for the original full-color stimuli can be predicted from the new monochromatic and isoluminant rating data by combining them via Minkowski summation with exponent m = 2.71, consistent with other studies of feature summation. The model performed well at predicting ratings for monochromatic stimuli, but was weaker for isoluminant stimuli, indicating that simply mirroring the monochromatic model is not sufficient for the color planes. We discuss several alternative strategies to improve the color modeling.
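The Minkowski combination mentioned above has a simple closed form. A sketch of how the two component ratings could be pooled; the exponent m = 2.71 is quoted from the abstract, while the function and argument names are ours:

```python
import numpy as np

def minkowski_combine(rating_mono, rating_isolum, m=2.71):
    """Predict the rating for a full-color image from the ratings of its
    monochromatic and isoluminant variants:
        R_pred = (R_mono**m + R_isolum**m) ** (1/m)
    m = 1 gives linear summation; large m approaches a winner-take-all rule.
    """
    r1 = np.asarray(rating_mono, dtype=float)
    r2 = np.asarray(rating_isolum, dtype=float)
    return (r1**m + r2**m) ** (1.0 / m)
```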


Subject(s)
Color Perception/physiology , Contrast Sensitivity/physiology , Vision, Ocular/physiology , Cues , Humans , Orientation, Spatial/physiology , Sensory Thresholds/physiology
7.
J Vis ; 17(12): 23, 2017 10 01.
Article in English | MEDLINE | ID: mdl-29090318

ABSTRACT

We consider the role of nonlinear inhibition in physiologically realistic multineuronal models of V1 in predicting the dipper functions obtained from contrast discrimination experiments with sinusoidal gratings of different geometries. The dip in dipper functions has been attributed to an expansive transducer function, which itself is attributed to two nonlinear inhibitory mechanisms: contrast normalization and surround suppression. We ran five contrast discrimination experiments, with targets and masks of different sizes and configurations: small Gabor target/small mask, small target/large mask, large target/large mask, small target/in-phase annular mask, and small target/out-of-phase annular mask. Our V1 modeling shows that the results for the small target/small mask, small target/large mask, and large target/large mask configurations are easily explained only if the model includes surround suppression. This is compatible with the finding that the in-phase annular mask generated little threshold elevation whereas the out-of-phase mask was more effective. Surrounding mask gratings cannot, however, be equated with surround suppression at the receptive-field level. We also examine whether normalization and surround suppression occur simultaneously (parallel model) or sequentially (a better reflection of the neurophysiology). The Akaike criterion difference showed that the sequential model was better than the parallel one, but the difference was small. The large target/large mask dipper experiment was not well fit by our models, and we suggest that this may reflect selective attention to its uniquely larger test stimulus. The best-fit model replicates some behaviors of single V1 neurons, such as the decrease in receptive-field size with increasing contrast.
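The sequential arrangement of the two inhibitory mechanisms can be caricatured as divisive contrast normalization followed by a second divisive surround term. The sketch below is a generic toy transducer of that form, not the fitted model from the paper, and all parameter values are placeholders:

```python
import numpy as np

def sequential_transducer(c_center, c_surround,
                          p=2.4, q=2.0, z=0.01, w_surround=0.3):
    """Toy contrast transducer with two nonlinear inhibitory stages applied
    in sequence (placeholder parameters, for illustration only)."""
    c_center = np.asarray(c_center, dtype=float)
    c_surround = np.asarray(c_surround, dtype=float)
    # Stage 1: expansive response divided by a local normalization pool
    normalized = c_center**p / (z + c_center**q)
    # Stage 2: divisive suppression driven by surround contrast
    return normalized / (1.0 + w_surround * c_surround**q)
```

With p slightly greater than q the transducer is expansive at low contrasts, which is the standard explanation for the facilitatory dip in the dipper function.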


Subject(s)
Contrast Sensitivity/physiology , Discrimination, Psychological/physiology , Models, Neurological , Neurons/physiology , Retina/physiology , Visual Cortex/physiology , Humans , Photic Stimulation/methods
8.
J Vis ; 15(1): 15.1.19, 2015 Jan 16.
Article in English | MEDLINE | ID: mdl-25595273

ABSTRACT

We investigate whether a computational model of V1 can predict how observers rate perceptual differences between paired movie clips of natural scenes. Observers viewed 198 pairs of movie clips, rating how different the two clips appeared to them on a magnitude scale. Sixty-six of the movie pairs were naturalistic, and the remainder were low-pass or high-pass spatially filtered versions of those originals. We examined three ways of comparing a movie pair. The Spatial Model compared corresponding frames of the two movies in each pair and combined those differences using Minkowski summation. The Temporal Model compared successive frames within each movie, summed those differences for each movie, and then compared the overall differences between the paired movies. The Ordered-Temporal Model combined elements from both models and yielded the single strongest predictions of observers' ratings. We modeled naturalistic sustained and transient impulse functions, and also compared frames directly with no temporal filtering. Overall, modeling naturalistic temporal filtering improved the models' performance; in particular, the predictions of the ratings for low-pass spatially filtered movies were much improved by employing a transient impulse function. The correlation between model predictions and observers' ratings rose from 0.507 without temporal filtering to 0.759 (p = 0.01%) when realistic impulses were included. The sustained impulse function and the Spatial Model carried more weight in the ratings for normal and high-pass movies, whereas the transient impulse function with the Ordered-Temporal Model was most important for spatially low-pass movies. This is consistent with models in which high-spatial-frequency channels with sustained responses primarily code spatial detail in movies, while low-spatial-frequency channels with transient responses code dynamic events.
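A schematic of the frame-comparison logic described above, kept deliberately abstract: the per-frame difference metric is passed in as a callable (in the full model it would be the V1 difference predictor), and the Minkowski exponent is a placeholder rather than the fitted value:

```python
import numpy as np

def spatial_model(movie_a, movie_b, frame_diff, m=3.0):
    """Compare corresponding frames of the two movies in a pair and pool
    the per-frame differences by Minkowski summation (placeholder exponent)."""
    diffs = np.array([frame_diff(a, b) for a, b in zip(movie_a, movie_b)])
    return np.sum(diffs**m) ** (1.0 / m)

def temporal_model(movie_a, movie_b, frame_diff, m=3.0):
    """Pool successive-frame differences within each movie, then compare
    the two pooled temporal signatures."""
    def within(movie):
        diffs = np.array([frame_diff(f0, f1)
                          for f0, f1 in zip(movie[:-1], movie[1:])])
        return np.sum(diffs**m) ** (1.0 / m)
    return abs(within(movie_a) - within(movie_b))
```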


Subject(s)
Models, Neurological , Space Perception/physiology , Visual Perception/physiology , Attention/physiology , Eye Movements/physiology , Humans , Psychophysics
9.
J Vis ; 10(4): 12.1-22, 2010 Apr 27.
Article in English | MEDLINE | ID: mdl-20465332

ABSTRACT

Simple everyday tasks, such as visual search, require a visual system that is sensitive to differences. Here we report how observers perceive changes in natural image stimuli, and what happens when objects change color, position, or identity, i.e., when the external scene changes in a naturalistic manner. We investigated whether a V1-based difference-prediction model can predict the magnitude ratings given by observers to suprathreshold differences in numerous pairs of natural images. The model incorporated contrast normalization, surround suppression, and elongated receptive fields. Observers' ratings were better predicted when the model included phase invariance, and even more so when the stimuli were inverted and negated to lessen their semantic impact. Some feature changes were better predicted than others: the model systematically underpredicted observers' perception of the magnitude of blur, but overpredicted their ability to report changes in textures.
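Phase invariance of the kind mentioned above is commonly implemented by pooling the outputs of a quadrature (even/odd) filter pair into a local energy. A generic sketch of that standard construction, not the authors' code:

```python
import numpy as np

def local_energy(even_response, odd_response):
    """Phase-invariant ('complex-cell-like') response from a quadrature pair
    of filter outputs, computed elementwise: sqrt(even**2 + odd**2)."""
    even = np.asarray(even_response, dtype=float)
    odd = np.asarray(odd_response, dtype=float)
    return np.sqrt(even**2 + odd**2)
```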


Subject(s)
Color Perception/physiology , Form Perception/physiology , Models, Neurological , Sensory Thresholds/physiology , Contrast Sensitivity/physiology , Humans , Motion Perception/physiology , Nonlinear Dynamics , Photic Stimulation/methods , Predictive Value of Tests , Psychophysics , Visual Fields/physiology
10.
Seeing Perceiving ; 23(4): 349-72, 2010.
Article in English | MEDLINE | ID: mdl-21466148

ABSTRACT

We are studying how people perceive naturalistic suprathreshold changes in the colour, size, shape or location of items in images of natural scenes, using magnitude estimation ratings to characterise the sizes of the perceived changes in coloured photographs. We have implemented a computational model that tries to explain observers' ratings of these naturalistic differences between image pairs. We model the action-potential firing rates of millions of neurons, with linear and non-linear summation behaviour closely modelled on real V1 neurons. The numerical parameters of the model's sigmoidal transducer function are set by optimising the same model against contrast discrimination experiments (contrast 'dippers') on monochrome photographs of natural scenes. The model, optimised on a stimulus-intensity domain in an experiment reminiscent of the Weber-Fechner relation, then produces tolerable predictions of the ratings for most kinds of naturalistic image change. Importantly, rating rises roughly linearly with the model's numerical output, which represents the difference in neuronal firing rates in response to the two images under comparison; this implies that rating is proportional to the neuronal response.
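The final claim above, that rating rises roughly linearly with the model's output, amounts to a straight-line fit through the origin. A minimal sketch of how that proportionality could be checked; the variable names are assumptions:

```python
import numpy as np

def rating_vs_model(model_outputs, ratings):
    """Zero-intercept least-squares slope of rating against model output,
    plus the linear correlation; a good fit with a high correlation supports
    'rating proportional to neuronal response'."""
    x = np.asarray(model_outputs, dtype=float)
    y = np.asarray(ratings, dtype=float)
    slope = np.dot(x, y) / np.dot(x, x)   # least-squares fit forced through zero
    r = np.corrcoef(x, y)[0, 1]
    return slope, r
```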


Subject(s)
Contrast Sensitivity/physiology , Discrimination, Psychological/physiology , Memory/physiology , Models, Theoretical , Neurons/physiology , Humans , Photic Stimulation