Results 1 - 20 of 1,523
1.
Addiction ; 2024 Jun 23.
Article in English | MEDLINE | ID: mdl-38923723

ABSTRACT

BACKGROUND AND AIMS: E-cigarette marketing exposure on social media influences perceptions; however, limited knowledge exists regarding which marketing features attract the most visual attention. This study examined visual attention to features of social media marketing for disposable e-cigarettes and related product perceptions. DESIGN, SETTING AND PARTICIPANTS: Participants viewed 32 disposable marketing post images from social media (Instagram) using computer-based eye-tracking technology to assess standardized attention metrics of marketing features. They then completed a survey assessing positive product perceptions. The study took place in New Jersey, USA, June-September 2022, comprising young adults (aged 18-29) who do not use tobacco (n = 72) or who smoke cigarettes (n = 42). MEASUREMENTS: We examined associations between 14 marketing features (e.g. product package, personal item, fruit/candy descriptor, social media account) and standardized attention metrics of dwell time (fixation duration) and entry time (time to first fixation). Then, we assessed attention metrics for each feature in relation to positive product perceptions (appeal and positive use expectancy). FINDINGS: Among all participants, dwell time was longest for the product descriptor [marginal means (MM) = 1.77; 95% confidence interval (CI) = 1.69, 1.86], social media account (MM = 1.76; 95% CI = 1.67, 1.85) and fruit/candy descriptor features (MM = 1.56; 95% CI = 1.41, 1.70); entry time was shortest for the social media account (MM = 0.35; 95% CI = 0.26, 0.46), personal item (MM = 0.36; 95% CI = 0.17, 0.56) and human model features (MM = 0.40; 95% CI = 0.08, 0.72). The two use status groups had comparable dwell and entry times, except for the product descriptor feature. Longer dwell time for the product package feature increased positive product perceptions in both use status groups [regression coefficient (β) = 0.44 and 2.61]. Longer dwell time for the fruit/candy descriptor (β = 1.80) and price promotion features (β = 4.04) increased positive product perceptions among those who smoke. CONCLUSIONS: US young adults appear to be particularly visually engaged by disposable e-cigarette marketing that uses social media account features (account profile pictures, information about the products marketed and relevant hashtags) and features enhancing the products' personal relatability. Disposable product packages, fruit/candy descriptors and price promotions may increase the influence of social media marketing among various use status groups.
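The two attention metrics used above, dwell time (total fixation duration on a feature) and entry time (time to first fixation on it), can be computed directly from a fixation log. Below is a minimal illustrative sketch; the fixation records and feature names are hypothetical, and the study's standardized (model-based marginal mean) values are not reproduced here.

```python
from collections import defaultdict

def attention_metrics(fixations):
    """Per-AOI dwell time (summed fixation duration) and entry time
    (onset of the first fixation), from (onset_s, duration_s, aoi)
    fixation records. AOIs here stand in for marketing features."""
    dwell = defaultdict(float)
    entry = {}
    for onset, duration, aoi in fixations:
        dwell[aoi] += duration
        if aoi not in entry or onset < entry[aoi]:
            entry[aoi] = onset
    return dict(dwell), entry

# Hypothetical fixation sequence on one marketing post image
fixations = [
    (0.35, 0.9, "social_media_account"),
    (1.40, 0.6, "product_package"),
    (2.10, 0.8, "fruit_candy_descriptor"),
    (3.00, 0.9, "social_media_account"),
]
dwell, entry = attention_metrics(fixations)
print(dwell["social_media_account"])  # 1.8 (two fixations of 0.9 s)
print(entry["social_media_account"])  # 0.35
```

A shorter entry time means the feature captured attention earlier; a longer dwell time means it held attention longer.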

2.
J Neuroeng Rehabil ; 21(1): 106, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909239

ABSTRACT

BACKGROUND: Falls are common in a range of clinical cohorts, where routine risk assessment often comprises subjective visual observation only. Typically, observational assessment involves evaluation of an individual's gait during scripted walking protocols within a lab to identify deficits that potentially increase fall risk, but subtle deficits may not be (readily) observable. Therefore, objective approaches (e.g., inertial measurement units, IMUs) are useful for quantifying high-resolution gait characteristics, enabling more informed fall risk assessment by capturing subtle deficits. However, IMU-based gait instrumentation alone is limited, failing to consider participant behaviour and details within the environment (e.g., obstacles). Video-based eye-tracking glasses may provide additional insight into fall risk, clarifying how people traverse environments based on head and eye movements. Recording head and eye movements can provide insights into how the allocation of visual attention to environmental stimuli influences successful navigation around obstacles. Yet, manual review of video data to evaluate head and eye movements is time-consuming and subjective. An automated approach is needed, but none currently exists. This paper proposes a deep learning-based object detection algorithm (VARFA) to instrument vision and video data during walks, complementing instrumented gait. METHOD: The approach automatically labels video data captured in a gait lab to assess visual attention and details of the environment. The proposed algorithm uses a YOLOv8 model trained on a novel lab-based dataset. RESULTS: VARFA achieved excellent evaluation metrics (0.93 mAP50), identifying and localizing static objects (e.g., obstacles in the walking path) with an average accuracy of 93%. Similarly, a U-Net-based track/path segmentation model achieved good metrics (IoU 0.82), suggesting that the predicted tracks (i.e., walking paths) align closely with the actual track, with an overlap of 82%. Notably, both models achieved these metrics while processing at real-time speeds, demonstrating efficiency and effectiveness for pragmatic applications. CONCLUSION: The instrumented approach improves the efficiency and accuracy of fall risk assessment by evaluating the visual allocation of attention (i.e., information about when and where a person is attending) during navigation, improving the breadth of instrumentation in this area. Instrumenting vision with VARFA could better inform fall risk assessment by providing behaviour and context data to complement instrumented (e.g., IMU) gait data. That may have notable (e.g., personalized) rehabilitation implications across a wide range of clinical cohorts where poor gait and increased fall risk are common.
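The reported IoU of 0.82 for the segmentation model is the standard intersection-over-union overlap between predicted and ground-truth masks. A minimal sketch of that metric, using hypothetical flat binary masks:

```python
def iou(pred, truth):
    """Intersection-over-Union for two binary masks given as flat
    lists of 0/1 values of equal length. An empty union (both masks
    all-zero) is treated as a perfect match."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

# Hypothetical 6-pixel masks: 2 overlapping pixels, 4 in the union
pred  = [0, 1, 1, 1, 0, 0]
truth = [0, 0, 1, 1, 1, 0]
print(iou(pred, truth))  # 0.5
```

In practice the masks would be 2D arrays from the segmentation network, but the ratio is computed the same way.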


Subjects
Accidental Falls, Deep Learning, Walking, Accidental Falls/prevention & control, Humans, Risk Assessment/methods, Walking/physiology, Male, Female, Adult, Eye-Tracking Technology, Eye Movements/physiology, Gait/physiology, Video Recording, Young Adult
3.
Bioengineering (Basel) ; 11(6), 2024 May 27.
Article in English | MEDLINE | ID: mdl-38927781

ABSTRACT

Automatically segmenting polyps from colonoscopy videos is crucial for developing computer-assisted diagnostic systems for colorectal cancer. Existing automatic polyp segmentation methods often struggle to fulfill the real-time demands of clinical applications due to their substantial parameter count and computational load, especially those based on Transformer architectures. To tackle these challenges, a novel lightweight long-range context fusion network, named LightCF-Net, is proposed in this paper. This network attempts to model long-range spatial dependencies while maintaining real-time performance, to better distinguish polyps from background noise and thus improve segmentation accuracy. A novel Fusion Attention Encoder (FAEncoder) is designed in the proposed network, which integrates Large Kernel Attention (LKA) and channel attention mechanisms to extract deep representational features of polyps and unearth long-range dependencies. Furthermore, a newly designed Visual Attention Mamba module (VAM) is added to the skip connections, modeling long-range context dependencies in the encoder-extracted features and reducing background noise interference through the attention mechanism. Finally, a Pyramid Split Attention module (PSA) is used in the bottleneck layer to extract richer multi-scale contextual features. The proposed method was thoroughly evaluated on four renowned polyp segmentation datasets: Kvasir-SEG, CVC-ClinicDB, BKAI-IGH, and ETIS. Experimental findings demonstrate that the proposed method delivers higher segmentation accuracy in less time, consistently outperforming the most advanced lightweight polyp segmentation networks.

4.
Cortex ; 177: 84-99, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38848652

ABSTRACT

The visual system operates rhythmically, through timely coordinated perceptual and attentional processes, involving coexisting patterns in the alpha range (7-13 Hz, at ∼10 Hz) and the theta range (3-6 Hz), respectively. Here we aimed to disambiguate whether variations in task requirements, in terms of attentional demand and side of target presentation, might influence the occurrence of either perceptual or attentional components in behavioral visual performance, also uncovering possible differences in the sampling mechanisms of the two cerebral hemispheres. To this aim, visuospatial performance was densely sampled in two versions of a visual detection task where the side of target presentation was either fixed (Task 1), with participants monitoring one single hemifield, or randomly varying across trials, with participants monitoring both hemifields simultaneously (Task 2). Performance was analyzed through spectral decomposition, to reveal behavioral oscillatory patterns. For Task 1, when attentional resources were focused on one hemifield only, the results revealed an oscillatory pattern fluctuating at ∼10 Hz and ∼6-9 Hz, for stimuli presented to the left and the right hemifield, respectively, possibly representing a perceptual sampling mechanism with different efficiency within the left and the right hemispheres. For Task 2, when attentional resources were simultaneously deployed to the two hemifields, a ∼5 Hz rhythm emerged both for stimuli presented to the left and the right, reflecting an attentional sampling process equally supported by the two hemispheres. Overall, the results suggest that distinct perceptual and attentional sampling mechanisms operate at different oscillatory frequencies, and that their prevalence and hemispheric lateralization depend on task requirements.
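Spectral decomposition of behavioral performance, as used above, typically removes the mean from the densely sampled accuracy time course and then examines its amplitude at candidate frequencies. A minimal sketch with a synthetic 10 Hz accuracy signal (sampling rate, duration, and amplitudes are hypothetical, not the study's actual pipeline):

```python
import math

def amplitude_spectrum(signal, fs, freqs):
    """Amplitude of a mean-removed behavioral time course at each
    frequency in `freqs` (Hz), via a direct discrete Fourier
    transform. `fs` is the sampling rate of the binned data."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]  # remove the DC component
    amps = {}
    for f in freqs:
        re = sum(x[k] * math.cos(2 * math.pi * f * k / fs) for k in range(n))
        im = sum(x[k] * math.sin(2 * math.pi * f * k / fs) for k in range(n))
        amps[f] = 2 * math.sqrt(re ** 2 + im ** 2) / n
    return amps

# Synthetic accuracy oscillating at 10 Hz around 0.7, sampled at 60 Hz for 2 s
fs = 60
sig = [0.7 + 0.1 * math.sin(2 * math.pi * 10 * k / fs) for k in range(120)]
amps = amplitude_spectrum(sig, fs, [5, 10])
print(amps[10])  # close to the injected amplitude, 0.1
```

A behavioral oscillation then shows up as a peak in `amps` at the sampling rhythm's frequency (e.g., ∼10 Hz or ∼5 Hz).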

5.
Perception ; : 3010066241252390, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38826086

ABSTRACT

The way that attention affects the processing of visual information is one of the most intriguing fields in the study of visual perception. One way to examine this interaction is by studying the way perceptual aftereffects are modulated by attention. In the present study, we have manipulated attention during adaptation to translational motion generated by coherently moving random dots, in order to investigate the effect of the distraction of attention on the strength of the peripheral dynamic motion aftereffect (MAE). A foveal rapid serial visual presentation task (RSVP) of varying difficulty was introduced during the adaptation period while the adaptation and test stimuli were presented peripherally. Furthermore, to examine the interaction between the physical characteristics of the stimulus and attention, we have manipulated the motion coherence level of the adaptation stimuli. Our results suggested that the removal of attention through an irrelevant task modulated the MAE's magnitude moderately and that such an effect depends on the stimulus strength. We also showed that the MAE still persists with subthreshold and unattended stimuli, suggesting that perhaps attention is not required for the complete development of the MAE.

6.
Front Psychol ; 15: 1376664, 2024.
Article in English | MEDLINE | ID: mdl-38831943

ABSTRACT

We investigated the role of alpha in the suppression of attention capture by salient but to-be-suppressed (negative and nonpredictive) color cues, expecting a potential boosting effect of alpha-rhythmic entrainment on feature-specific cue suppression. We did so by presenting a rhythmically flickering visual bar of 10 Hz before the cue, either on the cue's side or opposite the cue, while an arrhythmically flickering visual bar was presented on the respective other side. We hypothesized that rhythmic entrainment at the cue location could enhance suppression of the cue. Testing 27 participants ranging from 18 to 39 years of age, we found both behavioral and electrophysiological evidence of suppression: search times for a target at a negatively cued location were delayed relative to a target away from the cued location (inverse validity effects). In addition, an event-related potential indicative of suppression (the distractor positivity, Pd) was observed following rhythmic but not arrhythmic stimulation, indicating that suppression was boosted by the stimulation. This was also echoed in higher spectral power and intertrial phase coherence of the EEG at rhythmically versus arrhythmically stimulated electrode sites, albeit only at the second harmonic (20 Hz), not at the stimulation frequency. In addition, inverse validity effects were not modulated by rhythmic entrainment congruent with the cue side. Hence, we propose that rhythmic visual stimulation in the alpha range could support suppression, though behavioral evidence remains elusive, in contrast to electrophysiological findings.
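Intertrial phase coherence (ITPC), one of the EEG measures reported above, is the length of the mean resultant vector of per-trial phase angles at a given frequency. A minimal sketch with hypothetical phase values:

```python
import cmath
import math

def itpc(phases):
    """Intertrial phase coherence: magnitude of the average unit
    phase vector across trials at one frequency. 1 = perfectly
    phase-locked across trials, near 0 = random phase."""
    n = len(phases)
    return abs(sum(cmath.exp(1j * p) for p in phases) / n)

locked = [0.10, 0.12, 0.08, 0.11]        # near-identical phases across trials
uniform = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # evenly spread phases
print(itpc(locked))   # close to 1
print(itpc(uniform))  # close to 0
```

In a real analysis the phases would come from a time-frequency decomposition (e.g., wavelets) of each trial's EEG at the stimulation frequency or its harmonics.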

7.
Brain Behav ; 14(6): e3567, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38841742

ABSTRACT

BACKGROUND: Visual attention-related processes that underlie visual search behavior are impaired in both the early stages of Alzheimer's disease (AD) and amnestic mild cognitive impairment (aMCI), which is considered a risk factor for AD. Although traditional computer-based array tasks have been used to investigate visual search, information on the visual search patterns of patients with AD and aMCI in real-world environments is limited. AIM: The objective of this study was to evaluate the differences in visual search behaviors among individuals with AD, aMCI, and healthy controls (HCs) in real-world scenes. MATERIALS AND METHODS: A total of 92 participants were enrolled, including 28 with AD, 32 with aMCI, and 32 HCs. During the visual search task, participants were instructed to look at a single target object amid distractors, and their eye movements were recorded. RESULTS: The results indicate that patients with AD made more fixations on distractors and fewer fixations on the target compared to the aMCI and HC groups. Additionally, AD patients had longer fixation durations on distractors and spent less time looking at the target than both patients with aMCI and HCs. DISCUSSION: These findings suggest that visual search behavior is impaired in patients with AD and can distinguish them from individuals with aMCI and healthy individuals. For future studies, it is important to longitudinally monitor visual search behavior in the progression from aMCI to AD. CONCLUSION: Our study holds significance in elucidating the interplay between impairments in attention, visual processes, and other underlying cognitive processes, which contribute to the functional decline observed in individuals with AD and aMCI.


Subjects
Alzheimer Disease, Attention, Cognitive Dysfunction, Visual Perception, Humans, Alzheimer Disease/physiopathology, Cognitive Dysfunction/physiopathology, Female, Male, Aged, Attention/physiology, Visual Perception/physiology, Amnesia/physiopathology, Eye Movements/physiology, Aged, 80 and over, Middle Aged
8.
J Neurophysiol ; 132(1): 162-176, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38836298

ABSTRACT

The pupillary light response was long considered a brainstem reflex, outside of cognitive influence. However, newer findings indicate that pupil dilation (and eye movements) can reflect content held "in mind" with working memory (WM). These findings may reshape understanding of ocular and WM mechanisms, but it is unclear whether the signals are artifactual or functional to WM. Here, we ask whether peripheral and oculomotor WM signals are sensitive to the task-relevance or "attentional state" of WM content. During eye-tracking, human participants saw both dark and bright WM stimuli, then were retroactively cued to the item that would most likely be tested. Critically, we manipulated the attentional priority among items by varying the cue reliability across blocks. We confirmed previous findings that remembering darker items is associated with larger pupils (vs. brighter), and that gaze is biased toward cued item locations. Moreover, we discovered that pupil and eye movement responses were influenced differently by WM item relevance. Feature-specific pupillary effects emerged only for highly prioritized WM items but were eliminated when cues were less reliable, and pupil effects also increased with self-reported visual imagery strength. Conversely, gaze position consistently veered toward the cued item location, regardless of cue reliability. However, biased microsaccades occurred at a higher frequency when cues were more reliable, though only during a limited post-cue time window. Therefore, peripheral sensorimotor processing is sensitive to the task-relevance or functional state of internal WM content, but pupillary and eye movement WM signals show distinct profiles. 
These results highlight a potential role for early visual processing in maintaining multiple WM content dimensions. NEW & NOTEWORTHY: Here, we found that working memory (WM)-driven ocular inflections, feature-specific pupillary and saccadic biases, were muted for memory items that were less behaviorally relevant. This work illustrates that functionally informative goal signals may extend as early as the sensorimotor periphery, that pupil size may be under more fine-grained control than originally thought, and that ocular signals carry multiple dimensions of cognitively relevant information.


Subjects
Attention, Cues, Eye Movements, Imagination, Memory, Short-Term, Pupil, Humans, Memory, Short-Term/physiology, Female, Male, Adult, Pupil/physiology, Young Adult, Attention/physiology, Imagination/physiology, Eye Movements/physiology, Eye-Tracking Technology, Visual Perception/physiology
9.
Front Psychol ; 15: 1376552, 2024.
Article in English | MEDLINE | ID: mdl-38873529

ABSTRACT

Caregiver-infant interactions shape infants' early visual experience; however, there is limited work from low- and middle-income countries (LMICs) in characterizing the visual cognitive dynamics of these interactions. Here, we present an innovative dyadic visual cognition pipeline using machine learning methods which captures, processes, and analyses the visual dynamics of caregiver-infant interactions across cultures. We undertook two studies to examine its application in both low- (rural India) and high- (urban UK) resource settings. Study 1 developed and validated the pipeline to process caregiver-infant interaction data captured using head-mounted cameras and eye-trackers. We used face detection and object recognition networks and validated these tools using 12 caregiver-infant dyads (4 dyads from a 6-month-old UK cohort, 4 dyads from a 6-month-old India cohort, and 4 dyads from a 9-month-old India cohort). Results showed robust and accurate face and toy detection, as well as a high percent agreement between processed and manually coded dyadic interactions. Study 2 applied the pipeline to a larger data set (25 6-month-olds from the UK, 31 6-month-olds from India, and 37 9-month-olds from India) with the aim of comparing the visual dynamics of caregiver-infant interaction across the two cultural settings. Results show remarkable correspondence between key measures of visual exploration across cultures, including longer mean look durations during infant-led joint attention episodes. In addition, we found several differences across cultures. Most notably, infants in the UK had a higher proportion of infant-led joint attention episodes, consistent with a child-centered view of parenting common in Western middle-class families. In summary, the pipeline we report provides an objective assessment tool to quantify the visual dynamics of caregiver-infant interaction across high- and low-resource settings.
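The percent agreement reported between processed and manually coded interactions is, at its simplest, the proportion of matching frame-level labels. A minimal sketch with hypothetical per-frame codes (the label names are illustrative only, not the pipeline's actual categories):

```python
def percent_agreement(auto_labels, manual_labels):
    """Frame-by-frame percent agreement between automatically
    generated and manually coded looking-target labels."""
    assert len(auto_labels) == len(manual_labels), "streams must align"
    matches = sum(a == m for a, m in zip(auto_labels, manual_labels))
    return 100.0 * matches / len(auto_labels)

# Hypothetical per-frame codes for a short interaction clip
auto   = ["face", "toy", "toy", "other", "face"]
manual = ["face", "toy", "toy", "face",  "face"]
print(percent_agreement(auto, manual))  # 80.0
```

Chance-corrected statistics (e.g., Cohen's kappa) are often reported alongside raw agreement, since frequent categories inflate the raw percentage.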

10.
Article in English | MEDLINE | ID: mdl-38839713

ABSTRACT

Attention must be carefully controlled to avoid distraction by salient stimuli. The signal suppression hypothesis proposes that salient stimuli can be proactively suppressed to prevent distraction. Although this hypothesis has garnered much support, most previous studies have used one class of salient distractors: color singletons. It therefore remains unclear whether other kinds of salient distractors can also be suppressed. The current study directly compared suppression of a variety of salient stimuli using an attentional capture task that was adapted for eye tracking. The working hypothesis was that static salient stimuli (e.g., color singletons) would be easier to suppress than dynamic salient stimuli (e.g., motion singletons). The results showed that participants could ignore a wide variety of salient distractors. Importantly, suppression was weaker and slower to develop for dynamic salient stimuli than static salient stimuli. A final experiment revealed that adding a static salient feature to a dynamic motion distractor greatly improved suppression. Altogether, the results suggest that an underlying inhibitory process is applied to all kinds of salient distractors, but that suppression is more readily applied to static features than dynamic features.

11.
J Gen Psychol ; : 1-22, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38733318

ABSTRACT

A considerable amount of research has revealed that there exists an evolutionary mismatch between ancestral environments and conditions following the rise of agriculture regarding the contact between humans and animal reservoirs of infectious diseases. Based on this evolutionary mismatch framework, we examined whether visual attention exhibits adaptive attunement toward animal targets' pathogenicity. Consistent with our predictions, faces bearing heuristic infection cues held attention to a greater extent than did animal vectors of zoonotic infectious diseases. Moreover, the results indicated that attention showed a specialized vigilance toward processing facial cues connoting the presence of infectious diseases, whereas it was allocated comparably between animal disease vectors and disease-irrelevant animals. On the other hand, the pathogen salience manipulation employed to amplify the participants' contextual-level anti-pathogen motives did not moderate the selective allocation of attentional resources. The fact that visual attention seems poorly equipped to detect and encode animals' zoonotic transmission risk supports the idea that our evolved disease avoidance mechanisms might have limited effectiveness in combating global outbreaks originating from zoonotic emerging infectious diseases.

12.
Behav Sci (Basel) ; 14(5), 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38785866

ABSTRACT

As a booming branch of online retailing, live-streaming e-commerce can present abundant information dimensions and diverse forms of expression. Live-streaming e-commerce has enabled online retailers to interact with customers face-to-face, resulting in widespread instances of emotional and impulse buying behavior. Prior research in live-streaming e-commerce has suggested that live streamers' characteristics, especially the live streamer's face, can affect customers' purchase intentions. The present research used questionnaire surveys and an eye tracking experiment to investigate the impact of live streamer's facial attractiveness on consumer purchase intention for search-based and experience-based products. The questionnaire survey analyzed 309 valid questionnaires and revealed that attractive faces are the key influencing factor driving consumers' impulse purchase intentions. Moreover, consumers' emotional experience plays a partial mediating role in the process of live streamers' faces influencing purchase intention. The eye tracking experiment further explored the mechanism of a live streamer's facial attractiveness on consumers' purchase intentions of search-based products and experience-based products from the perspective of visual attention by analyzing 64 valid sets of data. The results showed that attractive faces attract more consumers' attention and, therefore, increase their purchase intention. Furthermore, there is a significant interaction between product type, the live streamer's facial attractiveness, and consumers' purchase intentions. In the case of unattractive live streamers, consumers are more likely to buy search-based products than experience-based products, while the purchase intention does not vary between search-based products and experience-based products in the case of attractive live streamers. 
The present study provides evidence for a 'beauty premium' in live-streaming e-commerce and sheds light on the design of the match between live streamers and different types of products.

13.
Article in English | MEDLINE | ID: mdl-38811488

ABSTRACT

Visual search can be guided by biasing one's attention towards features associated with a target. Prior work has shown that high-fidelity, picture-based cues are more beneficial to search than text-based cues. However, picture cues typically provide both detailed form information and color information that is absent from text-based cues. Given that visual resolution deteriorates with eccentricity, it is not clear that high-fidelity form information would benefit guidance to peripheral objects: much of the picture benefit could be due to color information alone. To address this, we conducted a search task with eye-tracking that had four types of cues that comprised a 2 (text/pictorial cue) × 2 (no color/color) design. We hypothesized that color information would be important for efficient search guidance while high-fidelity form information would be important for efficient verification times. In Experiment 1, cues were a colored picture of the target, a gray-scaled picture of the target, a text-based cue that included color (e.g., "blue shoe"), or a text-based cue without color (e.g., "shoe"). Experiment 2 was a replication of Experiment 1, except that the color word in the text-based cue was presented in the precise color that was the dominant color in the target. Our results show that high-fidelity form information is important for efficient verification times (with color playing less of a role) and that color is important for efficient guidance, but form information also benefits guidance. These results suggest that different features of the cue independently contribute to different aspects of the search process.

14.
Vision Res ; 221: 108424, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38744033

ABSTRACT

Visual attention is typically shifted toward the targets of upcoming saccadic eye movements. This observation is commonly interpreted in terms of an obligatory coupling between attentional selection and oculomotor programming. Here, we investigated whether this coupling is facilitated by a habitual expectation of spatial congruence between visual and motor targets. To this end, we conducted a dual-task (i.e., concurrent saccade task and visual discrimination task) experiment in which male and female participants were trained to either anticipate spatial congruence or incongruence between a saccade target and an attention probe stimulus. To assess training-induced effects of expectation on premotor attention allocation, participants subsequently completed a test phase in which the attention probe position was randomized. Results revealed that discrimination performance was systematically biased toward the expected attention probe position, irrespective of whether this position matched the saccade target or not. Overall, our findings demonstrate that visual attention can be substantially decoupled from ongoing oculomotor programming and suggest an important role of habitual expectations in the attention-action coupling.


Subjects
Attention, Saccades, Visual Perception, Humans, Saccades/physiology, Attention/physiology, Male, Female, Young Adult, Adult, Visual Perception/physiology, Photic Stimulation/methods, Reaction Time/physiology, Analysis of Variance
15.
Elife ; 12, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38747572

ABSTRACT

Working memory enables us to bridge past sensory information to upcoming future behaviour. Accordingly, by its very nature, working memory is concerned with two components: the past and the future. Yet, in conventional laboratory tasks, these two components are often conflated, such as when sensory information in working memory is encoded and tested at the same location. We developed a task in which we dissociated the past (encoded location) and future (to-be-tested location) attributes of visual contents in working memory. This enabled us to independently track the utilisation of past and future memory attributes through gaze, as observed during mnemonic selection. Our results reveal the joint consideration of past and future locations. This was prevalent even at the single-trial level of individual saccades that were jointly biased to the past and future. This uncovers the rich nature of working memory representations, whereby both past and future memory attributes are retained and can be accessed together when memory contents become relevant for behaviour.


Subjects
Memory, Short-Term, Visual Perception, Memory, Short-Term/physiology, Humans, Male, Visual Perception/physiology, Female, Adult, Young Adult, Saccades/physiology
16.
Psychon Bull Rev ; 2024 May 28.
Article in English | MEDLINE | ID: mdl-38806789

ABSTRACT

When processing visual scenes, we tend to prioritize information in the foreground, often at the expense of background information. The foreground bias has been supported by data demonstrating that there are more fixations to foreground, and faster and more accurate detection of targets embedded in foreground. However, it is also known that semantic consistency is associated with more efficient search. Here, we examined whether semantic context interacts with foreground prioritization, either amplifying or mitigating the effect of target semantic consistency. For each scene, targets were placed in the foreground or background and were either semantically consistent or inconsistent with the context of immediately surrounding depth region. Results indicated faster response times (RTs) for foreground and semantically consistent targets, replicating established effects. More importantly, we found the magnitude of the semantic consistency effect was significantly smaller in the foreground than background region. To examine the robustness of this effect, in Experiment 2, we strengthened the reliability of semantics by increasing the proportion of targets consistent with the scene region to 80%. We found the overall results pattern to replicate the incongruous effect of semantic consistency across depth observed in Experiment 1. This suggests foreground bias modulates the effects of semantics so that performance is less impacted in near space.

17.
Front Psychol ; 15: 1257324, 2024.
Article in English | MEDLINE | ID: mdl-38562240

ABSTRACT

Attention-sensitive signalling is the pragmatic skill of signallers who adjust the modality of their communicative signals to their recipient's attention state. This study provides the first comprehensive evidence for its onset and development in 7- to 20-month-old human infants, and underlines its significance for language acquisition and evolutionary history. Mother-infant dyads (N = 30) were studied in naturalistic settings, sampled according to three developmental periods (in months): [7-10], [11-14], and [15-20]. Infants' signals were classified by dominant perceptible sensory modality, and proportions were compared according to the mother's visual attention, infant-directed speech and tactile contact. Maternal visual attention and infant-directed speech were influential on the onset and steepness of infants' communicative adjustments. The ability to inhibit silent-visual signals towards visually inattentive mothers (unimodal adjustment) predated the ability to deploy audible-or-contact signals in this case (cross-modal adjustment). Maternal scaffolding of infants' early pragmatic skills through infant-directed speech operates on the facilitation of infants' unimodal adjustment, the preference for oral over gestural signals, and the audio-visual combinations of signals. Additionally, breakdowns in maternal visual attention are associated with increased use of the audible-oral modality/channel. The evolutionary role of the sharing of attentional resources between parents and infants in the emergence of modern language is discussed.

18.
MethodsX ; 12: 102662, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38577409

ABSTRACT

This article provides a step-by-step guideline for measuring and analyzing visual attention in 3D virtual reality (VR) environments based on eye-tracking data. We propose a solution to the challenges of obtaining relevant eye-tracking information in a dynamic 3D virtual environment and calculating interpretable indicators of learning and social behavior. With a method called "gaze-ray casting," we simulated 3D-gaze movements to obtain information about the gazed objects. This information was used to create graphical models of visual attention, establishing attention networks. These networks represented participants' gaze transitions between different entities in the VR environment over time. Measures of centrality, distribution, and interconnectedness of the networks were calculated to describe the network structure. The measures, derived from graph theory, allowed for statistical inference testing and the interpretation of participants' visual attention in 3D VR environments. Our method provides useful insights when analyzing students' learning in a VR classroom, as reported in a corresponding evaluation article with N = 274 participants.
•Guidelines on implementing gaze-ray casting in VR using the Unreal Engine and the HTC VIVE Pro Eye.
•Creating gaze-based attention networks and analyzing their network structure.
•Implementation tutorials and the Open Source software code are provided via OSF: https://osf.io/pxjrc/?view_only=1b6da45eb93e4f9eb7a138697b941198.
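As a loose illustration of the attention-network idea described in this abstract (not the article's actual implementation; the object labels and the gaze-sample format are hypothetical), a directed, weighted network can be built from the chronological sequence of objects hit by gaze-ray casting, and a simple graph-theoretic measure computed from it:

```python
from collections import Counter

def build_attention_network(gazed_objects):
    """Count directed gaze transitions between distinct objects.

    gazed_objects: chronological list of object labels returned by
    gaze-ray casting (hypothetical format). Returns a Counter mapping
    (source, target) pairs to transition counts, i.e. a weighted
    edge list of the attention network.
    """
    edges = Counter()
    for src, dst in zip(gazed_objects, gazed_objects[1:]):
        if src != dst:  # ignore continued fixation on the same object
            edges[(src, dst)] += 1
    return edges

def degree_centrality(edges):
    """Degree centrality over distinct edges, normalised by (n - 1):
    a basic measure of how interconnected each entity is."""
    nodes = {node for edge in edges for node in edge}
    deg = Counter()
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    n = len(nodes)
    return {v: deg[v] / (n - 1) for v in nodes} if n > 1 else {}

# Hypothetical gaze samples from a VR classroom scene:
samples = ["teacher", "teacher", "board", "peer", "board", "teacher"]
net = build_attention_network(samples)
centrality = degree_centrality(net)
```

In practice, a graph library such as networkx offers richer centrality and distribution measures; the sketch only shows the transition-counting step that turns raw gaze samples into a network.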

19.
Cognition ; 247: 105788, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38579638

ABSTRACT

In real-world vision, people prioritise the most informative scene regions via eye movements. According to the cognitive guidance theory of visual attention, viewers allocate visual attention to those parts of the scene that are expected to be the most informative. The expected information of a scene region is coded in the semantic distribution of that scene. Meaning maps have been proposed to capture the spatial distribution of local scene semantics in order to test cognitive guidance theories of attention. Notwithstanding the success of meaning maps, the reason for their success has been contested, leading to at least two possible explanations. On the one hand, meaning maps might measure scene semantics. On the other hand, meaning maps might measure scene features, overlapping with, but distinct from, scene semantics. This study aims to disentangle these two sources of information by considering both conceptual information and non-semantic scene entropy simultaneously. We found that both semantic and non-semantic information are captured by meaning maps, but scene entropy accounted for more unique variance in the success of meaning maps than conceptual information did. Additionally, some explained variance was unaccounted for by either source of information. Thus, although meaning maps may index some aspect of semantic information, their success seems to be better explained by non-semantic information. We conclude that meaning maps may not yet be a good tool for testing cognitive guidance theories of attention in general, since they capture non-semantic aspects of local semantic density and only a small portion of conceptual information. Rather, we suggest that researchers should define precisely which aspect of cognitive guidance theories they wish to test and then use the tool that best captures the semantic information in question. As it stands, the semantic information contained in meaning maps seems too ambiguous to draw strong conclusions about how and when semantic information guides visual attention.
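The "non-semantic scene entropy" contrasted with conceptual information in this abstract can be illustrated with a minimal sketch (plain Python; the patch format, a flat list of 8-bit grayscale values, is an assumption, not the study's pipeline): Shannon entropy of the local intensity histogram measures information content without any reference to meaning.

```python
import math
from collections import Counter

def patch_entropy(pixels):
    """Shannon entropy (bits) of the intensity histogram of one
    local image patch, given as a list of 8-bit grayscale values.
    High entropy = many distinct intensities = feature-rich region,
    regardless of what the patch depicts."""
    counts = Counter(pixels)
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A uniform patch carries no information; a varied one carries more.
flat_patch = [128] * 16
varied_patch = [0, 32, 64, 96, 128, 160, 192, 224] * 2
```

A uniform patch scores 0 bits while a patch with many distinct intensities scores higher, which is the sense in which entropy can overlap with, yet remain distinct from, local semantic density.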

20.
Vision (Basel) ; 8(2)2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38651444

ABSTRACT

The effects of social (eye gaze, pointing gestures) and symbolic (arrows) cues on observers' attention are often studied by presenting such cues in isolation and at fixation. Here, we extend this work by embedding cues in natural scenes. Participants were presented with a single cue (Experiment 1) or a combination of cues (Experiment 2) embedded in natural scenes and were asked to 'simply look at the images' while their eye movements were recorded to assess the effects of the cues on (overt) attention. Single gaze and pointing cues were fixated for longer than arrows, but at the cost of shorter dwell times on the cued object. When presented together, gaze and pointing cues were fixated faster and for longer than simultaneously presented arrows. Attention to the cued object depended on the combination of cues and on whether both cues were directed towards or away from the target object. Together, the findings confirm earlier observations that people attract attention more strongly than arrows, but that arrows direct attention more strongly.
