Results 1 - 6 of 6
1.
Behav Res Methods ; 55(5): 2485-2500, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36002623

ABSTRACT

The ability to rapidly recognize words and link them to referents is central to children's early language development. This ability, often called word recognition in the developmental literature, is typically studied in the looking-while-listening paradigm, which measures infants' fixation on a target object (vs. a distractor) after hearing a target label. We present a large-scale, open database of infant and toddler eye-tracking data from looking-while-listening tasks. The goal of this effort is to address theoretical and methodological challenges in measuring vocabulary development. We first present how we created the database, its features and structure, and associated tools for processing and accessing infant eye-tracking datasets. Using these tools, we then work through two illustrative examples to show how researchers can use Peekbank to interrogate theoretical and methodological questions about children's developing word recognition ability.


Subject(s)
Eye-Tracking Technology , Language Development , Infant , Humans , Auditory Perception , Vocabulary
2.
Front Psychol ; 11: 1457, 2020.
Article in English | MEDLINE | ID: mdl-32793025

ABSTRACT

The temporal structure of behavior contains a rich source of information about its dynamic organization, origins, and development. Today, advances in sensing and data storage allow researchers to collect multiple dimensions of behavioral data at a fine temporal scale both in and out of the laboratory, leading to the curation of massive multimodal corpora of behavior. However, along with these new opportunities come new challenges. Theories are often underspecified as to the exact nature of these unfolding interactions, and psychologists have limited ready-to-use methods and training for quantifying structures and patterns in behavioral time series. In this paper, we introduce four techniques to interpret and analyze high-density multimodal behavior data, namely, to: (1) visualize the raw time series, (2) describe the overall distributional structure of temporal events (burstiness calculation), (3) characterize the non-linear dynamics over multiple timescales with Chromatic and Anisotropic Cross-Recurrence Quantification Analysis (CRQA), and (4) quantify the directional relations among a set of interdependent multimodal behavioral variables with Granger Causality. Each technique is introduced in a module with conceptual background, sample data drawn from empirical studies, and ready-to-use Matlab scripts. The code modules showcase each technique's application with detailed documentation to allow more advanced users to adapt them to their own datasets. Additionally, to make our modules more accessible to beginner programmers, we provide a "Programming Basics" module that introduces common functions for working with behavioral time-series data in Matlab. Together, the materials provide a practical introduction to a range of analyses that psychologists can use to discover temporal structure in high-density behavioral data.
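The modules described in this abstract ship as Matlab scripts, which are not reproduced here. Purely as an illustration of step (2), the burstiness parameter is commonly defined as B = (σ − μ)/(σ + μ) over inter-event intervals, ranging from −1 (perfectly periodic) through 0 (Poisson-like) to 1 (highly bursty); a minimal Python sketch of that standard definition:

```python
import statistics

def burstiness(intervals):
    """Burstiness parameter B = (sigma - mu) / (sigma + mu) computed
    over inter-event intervals: B -> -1 for a perfectly periodic train,
    B near 0 for a Poisson process, and B -> 1 for highly bursty data."""
    mu = statistics.mean(intervals)
    sigma = statistics.pstdev(intervals)
    return (sigma - mu) / (sigma + mu)

# A perfectly regular event train has zero variance, so B = -1.
print(burstiness([1.0, 1.0, 1.0, 1.0]))  # → -1.0
```

Intervals clustered around a few short gaps punctuated by long pauses (e.g., three 0.1 s gaps followed by a 10 s pause) yield B > 0, flagging bursty structure.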

3.
J Exp Child Psychol ; 179: 324-336, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30579246

ABSTRACT

Sustained visual attention is a well-studied cognitive capacity that is relevant to many developmental outcomes. The development of visual attention is often construed as an increased capacity to exert top-down internal control. We demonstrate that sustained visual attention, measured in terms of momentary eye gaze, emerges from and is tightly tied to sensory-motor coordination. Specifically, we examined whether and how changes in manual behavior alter toddlers' eye gaze during toy play. We manipulated manual behavior by giving one group of children heavy toys that were hard to pick up and giving another group of children perceptually identical toys that were lighter and easy to pick up and hold. We found a tight temporal coupling of visual attention with the duration of manual activities on the objects, a relation that cannot be explained by interest alone. Toddlers in the heavy-object condition looked at objects as much as toddlers in the light-object condition but did so through many brief glances, whereas looks to the same objects were longer and sustained in the light-object condition. We explain the results based on the mechanism of hand-eye coordination and discuss its implications for the development of visual attention.


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Visual Perception/physiology , Weight Perception/physiology , Child, Preschool , Female , Humans , Infant , Male , Play and Playthings
4.
J Vis Exp ; (141), 2018 Nov 14.
Article in English | MEDLINE | ID: mdl-30507907

ABSTRACT

Young children's visual environments are dynamic, changing moment by moment as children physically and visually explore spaces and objects and interact with people around them. Head-mounted eye tracking offers a unique opportunity to capture children's dynamic egocentric views and how they allocate visual attention within those views. This protocol provides guiding principles and practical recommendations for researchers using head-mounted eye trackers in both laboratory and more naturalistic settings. Head-mounted eye tracking complements other experimental methods by enhancing opportunities for data collection in more ecologically valid contexts through increased portability and freedom of head and body movements compared to screen-based eye tracking. This protocol can also be integrated with other technologies, such as motion tracking and heart-rate monitoring, to provide a higher-density multimodal dataset for examining natural behavior, learning, and development than was previously possible. This paper illustrates the types of data generated from head-mounted eye tracking in a study designed to investigate visual attention in one natural context for toddlers: free-flowing toy play with a parent. Successful use of this protocol will allow researchers to collect data that can be used to answer questions not only about visual attention, but also about a broad range of other perceptual, cognitive, and social skills and their development.


Subject(s)
Attention , Data Collection/methods , Eye Movements , Video Recording , Child Behavior , Child, Preschool , Head , Humans , Infant , Movement
5.
Article in English | MEDLINE | ID: mdl-28966875

ABSTRACT

We focus on a fundamental looking behavior in human-robot interactions: gazing at each other's face. Eye contact and mutual gaze between two social partners are critical in smooth human-human interactions. Therefore, investigating at what moments and in what ways a robot should look at a human user's face in response to the human's gaze behavior is an important topic. Toward this goal, we developed a gaze-contingent human-robot interaction system, which relied on momentary gaze behaviors from a human user to control an interacting robot in real time. Using this system, we conducted an experiment in which human participants interacted with the robot in a joint attention task. In the experiment, we systematically manipulated the robot's gaze toward the human partner's face in real time and then analyzed the human's gaze behavior as a response to the robot's gaze behavior. We found that more face looks from the robot led to more look-backs (to the robot's face) from human participants and consequently created more mutual gaze and eye contact between the two. Moreover, participants demonstrated more coordinated and synchronized multimodal behaviors between speech and gaze when more eye contact was successfully established and maintained.

6.
Infancy ; 17(1): 33-60, 2012 Jan.
Article in English | MEDLINE | ID: mdl-32693505

ABSTRACT

Infant eye movements are an important behavioral resource to understand early human development and learning. But the complexity and amount of gaze data recorded from state-of-the-art eye-tracking systems also pose a challenge: how does one make sense of such dense data? Toward this goal, this article describes an interactive approach based on integrating top-down domain knowledge with bottom-up information visualization and visual data mining. The key idea behind this method is to leverage the computational power of the human visual system. Thus, we propose an approach in which scientists iteratively examine and identify underlying patterns through data visualization and link those discovered patterns with top-down knowledge/hypotheses. Combining bottom-up data visualization with top-down human theoretical knowledge through visual data mining is an effective and efficient way to make discoveries from gaze data. We first provide an overview of the underlying principles of this new approach of human-in-the-loop knowledge discovery and then show several examples illustrating how this interactive exploratory approach can lead to new findings.
