1.
Behav Res Methods ; 56(3): 1793-1816, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37450220

ABSTRACT

In this study, we present TURead, an eye movement dataset of silent and oral sentence reading in Turkish, an agglutinative language with a shallow orthography that is understudied in reading research. TURead provides empirical data to investigate the relationship between morphology and oculomotor control. We employ a target-word approach in which target words are manipulated by word length and by the addition of two commonly used suffixes in Turkish. The dataset contains well-established eye movement variables; prelexical characteristics such as vowel harmony and bigram-trigram frequencies; word features such as word length, predictability, and frequency; eye-voice span measures; Cloze test scores of root-word and suffix predictabilities; and the scores obtained from two working memory tests. Our findings on fixation parameters and word characteristics are in line with the patterns reported in the relevant literature.
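The bigram frequencies mentioned among the prelexical characteristics can be illustrated with a minimal sketch. The mini-corpus and the word forms below are hypothetical, and mean bigram frequency is only one of several ways such measures are computed; this is not the TURead pipeline itself.

```python
from collections import Counter

def bigrams(word):
    """All adjacent two-letter sequences of a word."""
    return [word[i:i + 2] for i in range(len(word) - 1)]

# Hypothetical mini-corpus of Turkish word forms.
corpus = ["evlerde", "evde", "kitaplarda", "okullarda"]
bigram_counts = Counter(bg for w in corpus for bg in bigrams(w))

def mean_bigram_frequency(word):
    """Mean corpus frequency of a word's bigrams (a common prelexical measure)."""
    bgs = bigrams(word)
    return sum(bigram_counts[bg] for bg in bgs) / len(bgs)
```

In a real norming study the counts would come from a large frequency corpus rather than a handful of word forms.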


Subject(s)
Eye Movements , Fixation, Ocular , Humans , Language , Memory, Short-Term , Reading
2.
J Vis Exp ; (198)2023 08 04.
Article in English | MEDLINE | ID: mdl-37677038

ABSTRACT

Perception of others' actions is crucial for survival, interaction, and communication. Despite decades of cognitive neuroscience research dedicated to understanding the perception of actions, we are still far away from developing a neurally inspired computer vision system that approaches human action perception. A major challenge is that actions in the real world consist of temporally unfolding events in space that happen "here and now" and are actable. In contrast, visual perception and cognitive neuroscience research to date have largely studied action perception through 2D displays (e.g., images or videos) that lack the presence of actors in space and time, hence these displays are limited in affording actability. Despite the growing body of knowledge in the field, these challenges must be overcome for a better understanding of the fundamental mechanisms of the perception of others' actions in the real world. The aim of this study is to introduce a novel setup to conduct naturalistic laboratory experiments with live actors in scenarios that approximate real-world settings. The core element of the setup used in this study is a transparent organic light-emitting diode (OLED) screen through which participants can watch the live actions of a physically present actor while the timing of their presentation is precisely controlled. In this work, this setup was tested in a behavioral experiment. We believe that the setup will help researchers reveal fundamental and previously inaccessible cognitive and neural mechanisms of action perception and will be a foundation for future studies investigating social perception and cognition in naturalistic settings.


Subject(s)
Cognitive Neuroscience , Psychology, Experimental , Humans , Cognition , Communication , Laboratories
3.
Cogn Sci ; 46(12): e13222, 2022 12.
Article in English | MEDLINE | ID: mdl-36515385

ABSTRACT

Cognitive science was established as an interdisciplinary domain of research in the 1970s. Since then, the domain has flourished, despite disputes concerning its interdisciplinarity. Multiple methods exist for the assessment of interdisciplinary research. The present study proposes a methodology for quantifying interdisciplinary aspects of research in cognitive science. We propose models for text similarity analysis that provide helpful information about the relationship between publications and their specific research fields, showing potential as a robust measure of interdisciplinarity. We designed and developed models utilizing the Doc2Vec method for analyzing cognitive science and related fields. Our findings reveal that cognitive science collaborates closely with most constituent disciplines. For instance, we found a balanced engagement between several constituent fields, including psychology, philosophy, linguistics, and computer science, that contribute significantly to cognitive science. On the other hand, anthropology and neuroscience have made limited contributions. In our analysis, we find that the scholarly domain of cognitive science has exhibited overt interdisciplinarity for the past several decades.
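The text-similarity idea behind this kind of analysis can be sketched in a few lines. The snippet below uses plain bag-of-words cosine similarity as a stand-in for the learned Doc2Vec embeddings the study actually uses, and the field descriptions are invented for illustration.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two texts."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical field descriptions standing in for collections of abstracts.
fields = {
    "psychology": "behavior cognition memory attention experiment",
    "linguistics": "language syntax semantics cognition grammar",
    "geology": "rock mineral sediment earth strata",
}

sim_psy_lin = cosine_similarity(fields["psychology"], fields["linguistics"])
sim_psy_geo = cosine_similarity(fields["psychology"], fields["geology"])
```

A Doc2Vec model would replace the raw word counts with dense learned vectors, but the comparison step, a cosine between document vectors, is the same.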


Subject(s)
Interdisciplinary Studies , Neurosciences , Humans , Cognitive Science , Philosophy
4.
PLoS One ; 17(10): e0274480, 2022.
Article in English | MEDLINE | ID: mdl-36206273

ABSTRACT

We introduce a database (IDEST) of 250 short stories rated for valence, arousal, and comprehensibility in two languages. The texts, with a narrative structure telling a story in the first person and controlled for length, were originally written in six different languages (Finnish, French, German, Portuguese, Spanish, and Turkish), and rated for arousal, valence, and comprehensibility in the original language. The stories were translated into English, and the same ratings for the English translations were collected via an internet survey tool (N = 573). In addition to the rating data, we also report readability indexes for the original and English texts. The texts have been categorized into different story types based on their emotional arc. The texts score high on comprehensibility and represent a wide range of emotional valence and arousal levels. The comparative analysis of the ratings of the original texts and English translations showed that valence ratings were very similar across languages, whereas correlations between the two language versions for arousal and comprehensibility were modest. Comprehensibility ratings correlated with only some of the readability indexes. The database is published at osf.io/9tga3 and is freely available for academic research.


Subject(s)
Emotions , Language , Arousal , Humans , Translating , Translations
5.
Behav Res Methods ; 54(6): 2843-2863, 2022 12.
Article in English | MEDLINE | ID: mdl-35112286

ABSTRACT

Scientific studies of language behavior need to grapple with a large diversity of languages in the world and, for reading, a further variability in writing systems. Yet, the ability to form meaningful theories of reading is contingent on the availability of cross-linguistic behavioral data. This paper offers new insights into aspects of reading behavior that are shared and those that vary systematically across languages through an investigation of eye-tracking data from 13 languages recorded during text reading. We begin with reporting a bibliometric analysis of eye-tracking studies showing that the current empirical base is insufficient for cross-linguistic comparisons. We respond to this empirical lacuna by presenting the Multilingual Eye-Movement Corpus (MECO), the product of an international multi-lab collaboration. We examine which behavioral indices differentiate between reading in written languages, and which measures are stable across languages. One of the findings is that readers of different languages vary considerably in their skipping rate (i.e., the likelihood of not fixating on a word even once) and that this variability is explained by cross-linguistic differences in word length distributions. In contrast, if readers do not skip a word, they tend to spend a similar average time viewing it. We outline the implications of these findings for theories of reading. We also describe prospective uses of the publicly available MECO data, and its further development plans.
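The skipping rate discussed above, the likelihood of not fixating on a word even once, is straightforward to compute from per-word fixation counts. The data below are hypothetical, not drawn from MECO.

```python
def skipping_rate(fixation_counts):
    """Fraction of words that received zero fixations during first-pass reading."""
    return sum(1 for c in fixation_counts if c == 0) / len(fixation_counts)

# Hypothetical per-word fixation counts for one sentence read by one participant;
# short function words are the ones most often skipped.
words = ["the", "researchers", "analyzed", "a", "multilingual", "corpus"]
fixations = [0, 2, 1, 0, 2, 1]

rate = skipping_rate(fixations)
```

In a cross-linguistic comparison like MECO's, this rate would be aggregated per language and then related to that language's word length distribution.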


Subject(s)
Reading , Humans
6.
J Eye Mov Res ; 14(1)2021 May 19.
Article in English | MEDLINE | ID: mdl-34122746

ABSTRACT

We report the results of an empirical study on gaze aversion during dyadic human-to-human conversation in an interview setting. To address various methodological challenges in assessing gaze-to-face contact, we followed an approach where the experiment was conducted twice, each time with a different set of interviewees. In one of them the interviewer's gaze was tracked with an eye tracker, and in the other the interviewee's gaze was tracked. The gaze sequences obtained in both experiments were analyzed and modeled as Discrete-Time Markov Chains. The results show that the interviewer made more frequent and longer gaze contacts compared to the interviewee. Also, the interviewer made mostly diagonal gaze aversions, whereas the interviewee made sideways aversions (left or right). We discuss the relevance of this research for Human-Robot Interaction, and discuss some future research problems.
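Modeling a gaze sequence as a Discrete-Time Markov Chain, as the study describes, amounts to estimating transition probabilities between gaze states from the observed sequence. A minimal sketch, with an invented two-state sequence (the actual study distinguishes several aversion directions):

```python
from collections import defaultdict

def estimate_transition_matrix(states):
    """Estimate DTMC transition probabilities from one observed state sequence."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(states, states[1:]):
        counts[cur][nxt] += 1
    return {
        s: {t: n / sum(nexts.values()) for t, n in nexts.items()}
        for s, nexts in counts.items()
    }

# Hypothetical gaze-state sequence: "C" = gaze contact, "A" = aversion.
sequence = ["C", "C", "A", "C", "C", "C", "A", "A", "C"]
P = estimate_transition_matrix(sequence)
# P["C"]["C"] is the estimated probability of remaining in gaze contact
# at the next time step, given contact now.
```

Each row of the resulting matrix sums to one, so the self-transition probabilities directly capture how "sticky" contact and aversion states are.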

7.
Comput Intell Neurosci ; 2021: 8842420, 2021.
Article in English | MEDLINE | ID: mdl-34054941

ABSTRACT

To mitigate dictionary attacks or similar undesirable automated attacks to information systems, developers mostly prefer using CAPTCHA challenges as Human Interactive Proofs (HIPs) to distinguish between human users and scripts. Appropriate use of CAPTCHA requires a setup that balances between robustness and usability during the design of a challenge. The previous research reveals that most usability studies have used accuracy and response time as measurement criteria for quantitative analysis. The present study aims at applying optical neuroimaging techniques for the analysis of CAPTCHA design. The functional Near-Infrared Spectroscopy technique was used to explore the hemodynamic responses in the prefrontal cortex elicited by CAPTCHA stimulus of varying types. The findings suggest that regions in the left and right dorsolateral and right dorsomedial prefrontal cortex respond to the degrees of line occlusion, rotation, and wave distortions present in a CAPTCHA. The systematic addition of the visual effects introduced nonlinear effects on the behavioral and prefrontal oxygenation measures, indicative of the emergence of Gestalt effects that might have influenced the perception of the overall CAPTCHA figure.


Subject(s)
Prefrontal Cortex , Spectroscopy, Near-Infrared , Humans , Neuroimaging
8.
Front Neurorobot ; 15: 598895, 2021.
Article in English | MEDLINE | ID: mdl-33746729

ABSTRACT

Gaze and language are major pillars in multimodal communication. Gaze is a non-verbal mechanism that conveys crucial social signals in face-to-face conversation. However, compared to language, gaze has been less studied as a communication modality. The purpose of the present study is 2-fold: (i) to investigate gaze direction (i.e., aversion and face gaze) and its relation to speech in a face-to-face interaction; and (ii) to propose a computational model for multimodal communication, which predicts gaze direction using high-level speech features. Twenty-eight pairs of participants took part in data collection. The experimental setting was a mock job interview. The eye movements were recorded for both participants. The speech data were annotated by the ISO 24617-2 Standard for Dialogue Act Annotation, as well as manual tags based on previous social gaze studies. A comparative analysis was conducted by Convolutional Neural Network (CNN) models that employed specific architectures, namely, VGGNet and ResNet. The results showed that the frequency and the duration of gaze differ significantly depending on the role of the participant. Moreover, the ResNet models achieved higher than 70% accuracy in predicting gaze direction.

9.
Q J Exp Psychol (Hove) ; 74(2): 377-397, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32976053

ABSTRACT

Reading requires the assembly of cognitive processes across a wide spectrum from low-level visual perception to high-level discourse comprehension. One approach to unravelling the dynamics associated with these processes is to determine how eye movements are influenced by the characteristics of the text, in particular which features of the words within the perceptual span maximise the information intake due to foveal, spillover, parafoveal, and predictive processing. One way to test the generalisability of current proposals of such distributed processing is to examine them across different languages. For Turkish, an agglutinative language with a shallow orthography-phonology mapping, we replicate the well-known canonical main effects of frequency and predictability of the fixated word as well as effects of incoming saccade amplitude and fixation location within the word on single-fixation durations with data from 35 adults reading 120 nine-word sentences. Evidence for previously reported effects of the characteristics of neighbouring words and interactions was mixed. There was no evidence for the expected Turkish-specific morphological effect of the number of inflectional suffixes on single-fixation durations. To control for word-selection bias associated with single-fixation durations, we also tested effects on word skipping, single-fixation, and multiple-fixation cases with a baseline-category logit model, assuming an increase of difficulty for an increase in the number of fixations. With this model, significant effects of word characteristics and the number of inflectional suffixes of the foveal word on probabilities of the number of fixations were observed, while the effects of the characteristics of neighbouring words and interactions were mixed.
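A baseline-category logit model of the kind used here assigns each outcome category a linear predictor relative to a fixed baseline and converts the predictors to probabilities. A minimal sketch with invented coefficients (the real model has many predictors and fitted coefficients, not these values):

```python
from math import exp

def baseline_category_probs(x, coefs):
    """Baseline-category logit probabilities for one predictor value x.

    `coefs` maps each non-baseline category to (intercept, slope); the
    baseline category's linear predictor is fixed at 0.
    """
    scores = {"single_fixation": 0.0}  # baseline category
    scores.update({k: b0 + b1 * x for k, (b0, b1) in coefs.items()})
    denom = sum(exp(s) for s in scores.values())
    return {k: exp(s) / denom for k, s in scores.items()}

# Hypothetical coefficients: x is a standardized word-difficulty predictor
# (e.g., number of inflectional suffixes); single fixation is the baseline.
coefs = {"skip": (0.5, -1.2), "multiple_fixation": (-1.0, 0.9)}
probs = baseline_category_probs(x=1.0, coefs=coefs)
```

With slopes of opposite sign, increasing difficulty shifts probability mass from skipping toward multiple fixations, which mirrors the model's assumption that more fixations index greater difficulty.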


Subject(s)
Eye Movements/physiology , Fixation, Ocular , Language , Reading , Adult , Humans , Saccades , Turkey
10.
Front Hum Neurosci ; 13: 375, 2019.
Article in English | MEDLINE | ID: mdl-31708760

ABSTRACT

Recent advances in neuroimaging technologies have rendered multimodal analysis of operators' cognitive processes in complex task settings and environments increasingly more practical. In this exploratory study, we utilized optical brain imaging and mobile eye tracking technologies to investigate the behavioral and neurophysiological differences among expert and novice operators while they operated a human-machine interface in normal and adverse conditions. In congruence with related work, we observed that experts tended to have lower prefrontal oxygenation and exhibit gaze patterns that are better aligned with the optimal task sequence with shorter fixation durations as compared to novices. These trends reached statistical significance only in the adverse condition where the operators were prompted with an unexpected error message. Comparisons between hemodynamic and gaze measures before and after the error message indicated that experts' neurophysiological response to the error involved a systematic increase in bilateral dorsolateral prefrontal cortex (dlPFC) activity accompanied with an increase in fixation durations, which suggests a shift in their attentional state, possibly from routine process execution to problem detection and resolution. The novices' response was not as strong as that of experts, including a slight increase only in the left dlPFC with a decreasing trend in fixation durations, which is indicative of visual search behavior for possible cues to make sense of the unanticipated situation. A linear discriminant analysis model capitalizing on the covariance structure among hemodynamic and eye movement measures could distinguish experts from novices with 91% accuracy. 
Despite the small sample size, the performance of the linear discriminant analysis combining eye fixation and dorsolateral oxygenation measures before and after an unexpected event suggests that multimodal approaches may be fruitful for distinguishing novice and expert performance in similar neuroergonomic applications in the field.
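The linear discriminant analysis used above finds a weight vector that best separates the two groups relative to their pooled within-class scatter. A self-contained two-feature Fisher discriminant sketch follows; the feature values are invented stand-ins for the study's oxygenation and fixation measures.

```python
def fisher_lda_weights(class0, class1):
    """Two-class Fisher linear discriminant for 2-feature samples.

    Returns weights w and threshold c such that a sample x is assigned
    to class 1 when w[0]*x[0] + w[1]*x[1] > c.
    """
    def mean(rows):
        n = len(rows)
        return [sum(r[i] for r in rows) / n for i in range(2)]

    def scatter(rows, mu):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for r in rows:
            d = [r[0] - mu[0], r[1] - mu[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    sw = [[s0[i][j] + s1[i][j] for j in range(2)] for i in range(2)]
    # Invert the 2x2 pooled within-class scatter matrix analytically.
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [m1[0] - m0[0], m1[1] - m0[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    c = sum(w[i] * (m0[i] + m1[i]) / 2 for i in range(2))
    return w, c

# Hypothetical (oxygenation change, fixation duration in ms) features:
# novices = class 0, experts = class 1.
novices = [(0.1, 320.0), (0.2, 300.0), (0.15, 310.0)]
experts = [(0.6, 380.0), (0.7, 400.0), (0.65, 390.0)]
w, c = fisher_lda_weights(novices, experts)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] > c else 0
```

The study's model combines more measures (bilateral dlPFC oxygenation plus fixation durations before and after the error event), but the decision rule is of this linear form.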

11.
Article in English | MEDLINE | ID: mdl-29914994

ABSTRACT

Based on research in physical anthropology, we argue that brightness marks the abstract category of gender, with light colours marking the female gender and dark colours marking the male gender. In a set of three experiments, we examine this hypothesis, first in a speeded gender classification experiment with male and female names presented in black and white. As expected, male names in black and female names in white are classified faster than the reverse gender-colour combinations. The second experiment relies on a gender classification task involving the disambiguation of very briefly appearing non-descript stimuli in the form of black and white 'blobs'. The former are classified predominantly as male and the latter as female. Finally, the processes driving light and dark object choices for males and females are examined by tracking the number of fixations and their duration in an eye-tracking experiment. The results reveal that when choosing for a male target, participants look longer and make more fixations on dark objects, and the same for light objects when choosing for a female target. The implications of these findings, which repeatedly reveal the same data patterns across experiments with Dutch, Portuguese and Turkish samples for the abstract category of gender, are discussed. The discussion attempts to enlarge the subject beyond mainstream models of embodied grounding. This article is part of the theme issue 'Varieties of abstract concepts: development, use and representation in the brain'.


Subject(s)
Color , Concept Formation/physiology , Gender Identity , Reaction Time/physiology , Adult , Female , Humans , Male , Portugal , Turkey , Young Adult
12.
J Eye Mov Res ; 11(6)2018 Nov 12.
Article in English | MEDLINE | ID: mdl-33828712

ABSTRACT

The analysis of dynamic scenes has been a challenging domain in eye tracking research. This study presents a framework, named MAGiC, for analyzing gaze contact and gaze aversion in face-to-face communication. MAGiC provides an environment that is able to detect and track the conversation partner's face automatically, overlay gaze data on top of the face video, and incorporate speech by means of speech-act annotation. Specifically, MAGiC integrates eye tracking data for gaze, audio data for speech segmentation, and video data for face tracking. MAGiC is an open source framework and its usage is demonstrated via publicly available video content and wiki pages. We explored the capabilities of MAGiC through a pilot study and showed that it facilitates the analysis of dynamic gaze data by reducing the annotation effort and the time spent for manual analysis of video data.

13.
Cogn Process ; 16 Suppl 1: 115-9, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26224279

ABSTRACT

Haptic-audio interfaces allow haptic exploration of statistical line graphs accompanied by sound or speech, thus providing access to exploration by visually impaired people. Verbally assisted haptic graph exploration can be seen as a task-oriented collaborative activity between two partners, a haptic explorer and an observing assistant, each with individual preferences for the use of reference frames. The experimental findings reveal that haptic explorers' spatial reference frames are mostly induced by hand movements, leading to an action perspective instead of the conventional left-to-right spatiotemporal perspective. Moreover, the communicational goal may result in a switch in perspective.


Subject(s)
Attention/physiology , Communication , Comprehension/physiology , Space Perception/physiology , Touch Perception/physiology , Touch , Adult , Female , Goals , Humans , Male , Verbal Behavior , Young Adult