1.
Front Psychol ; 14: 1132570, 2023.
Article in English | MEDLINE | ID: mdl-37829077

ABSTRACT

A fundamental objective in the auditory sciences is to understand how people generalize auditory category knowledge to new situations. How we generalize to novel scenarios speaks to the nature of the acquired category representations and to the mechanisms by which generalization handles perceptual variability and novelty. The dual learning system (DLS) framework proposes that auditory category learning involves an explicit, hypothesis-testing learning system, which is optimal for learning rule-based (RB) categories, and an implicit, procedural-based learning system, which is optimal for learning categories that require pre-decisional information integration (II) across acoustic dimensions. Although the DLS framework describes distinct mechanisms for the two types of category learning, the nature of the acquired representations and how we transfer them to new contexts remain unclear. Here, we conducted three experiments to examine differences between II and RB category representations by testing which acoustic and perceptual novelties and variabilities affect learners' generalization success. Learners successfully categorized different sets of untrained sounds after only eight blocks of training for both II and RB categories. Category structure and novel context differentially modulated generalization success. II learners showed significantly decreased generalization performance when categorizing new items drawn from an untrained perceptual area and in a context with more dispersed samples. In contrast, RB learners' generalization was resistant to changes in perceptual region but sensitive to changes in sound dispersity. Representational similarity modeling revealed that II and RB learners accomplished generalization in the more dispersed sampling context differently: II learners increased their reliance on perceptual similarity and decision distance to compensate for the decreased transfer of category representations, whereas RB learners defaulted to a more computationally costly strategy, computing the distance to the decision bound to guide generalization decisions. These results suggest that distinct representations emerge after learning the two types of category structures and that learners use different computations and flexible mechanisms to resolve generalization challenges when facing novel perceptual variability in new contexts. These findings provide new evidence for dissociated representations of auditory categories and reveal novel generalization mechanisms that resolve variability to maintain perceptual constancy.
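The representational similarity modeling mentioned above follows a general logic that can be sketched in code: build a model representational dissimilarity matrix (RDM) from pairwise stimulus distances, then correlate it with an observed RDM. This is a minimal illustration only; the stimulus features, noise level, and distance metric below are assumptions for demonstration, not the study's actual data or model.

```python
import numpy as np

# Minimal representational-similarity sketch with made-up stimuli.
rng = np.random.default_rng(0)

n_stimuli = 8
# Hypothetical 2-D acoustic feature coordinates for each stimulus
features = rng.uniform(0, 1, size=(n_stimuli, 2))

# Model RDM: pairwise Euclidean distances between stimulus features
model_rdm = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)

# "Observed" RDM: here just the model RDM plus noise, standing in for
# behavioral or neural dissimilarities measured in an experiment
observed_rdm = model_rdm + rng.normal(0, 0.05, size=model_rdm.shape)
observed_rdm = (observed_rdm + observed_rdm.T) / 2  # keep it symmetric

# Compare RDMs over the off-diagonal upper triangle with Pearson correlation;
# a high r means the model's similarity structure fits the observed data
iu = np.triu_indices(n_stimuli, k=1)
r = np.corrcoef(model_rdm[iu], observed_rdm[iu])[0, 1]
print(f"model-data RDM correlation: {r:.2f}")
```

In practice, several candidate model RDMs (e.g., perceptual similarity vs. decision-bound distance) would be compared against the observed RDM to ask which representation best explains learners' generalization behavior.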

2.
Biol Psychol ; 175: 108430, 2022 11.
Article in English | MEDLINE | ID: mdl-36181967

ABSTRACT

The face context effect refers to the influence of emotional context on facial expression perception. Numerous empirical studies have explored the mechanisms of the face context effect, but no consistent conclusions have been drawn. Hence, we investigated the cognitive mechanisms of the face context effect using recordings of event-related potentials. In Experiment 1, we adopted the context-target paradigm to explore the mechanisms of the effect of contexts with different emotional valences on neutral face perception (the emotional valence-based face context effect). In Experiment 2, we explored the mechanisms of the effect of contexts with different emotional types on neutral face perception (the specific emotion-based face context effect). The results of Experiment 1 indicated that participants were biased toward contextual valence when recognizing the emotional valence of neutral faces, and that neutral target faces under emotional contexts with different valences induced significant differences in the P1, N170, and LPP amplitudes. The results of Experiment 2 indicated that participants were biased toward specific contextual emotions when recognizing the emotion type of neutral faces, and that neutral target faces under contexts of different emotional types induced significant differences in the LPP amplitude, but not in the P1 or N170 amplitudes. We conclude that the emotional valence-based face context effect occurs in the early processing stage, whereas the specific emotion-based face context effect occurs in the late processing stage.
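The component comparisons above rest on a standard ERP measure: the mean amplitude within a component's time window. The sketch below illustrates that computation on a simulated waveform; the sampling rate, time windows, and waveform shape are assumptions for demonstration, not the study's parameters or data.

```python
import numpy as np

# Simulated single-channel ERP: a P1-like positivity near 100 ms, an
# N170-like negativity near 170 ms, and a sustained LPP-like positivity
# emerging around 400 ms. All parameters are illustrative assumptions.
fs = 500  # sampling rate in Hz (assumption)
t = np.arange(-0.2, 1.0, 1 / fs)  # epoch from -200 ms to 1000 ms

erp = (
    3.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))   # P1-like peak
    - 4.0 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))  # N170-like peak
    + 2.0 / (1 + np.exp(-(t - 0.45) / 0.05))              # LPP-like shift
)

def mean_amplitude(signal, times, start, end):
    """Average amplitude (e.g., in microvolts) within a time window."""
    mask = (times >= start) & (times < end)
    return signal[mask].mean()

# Hypothetical component windows in seconds
p1 = mean_amplitude(erp, t, 0.08, 0.12)
n170 = mean_amplitude(erp, t, 0.15, 0.19)
lpp = mean_amplitude(erp, t, 0.40, 0.80)
print(f"P1: {p1:.2f}, N170: {n170:.2f}, LPP: {lpp:.2f}")
```

In an actual ERP study, these per-condition mean amplitudes would be averaged across trials and participants and then submitted to statistical tests of the condition differences reported above.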


Subject(s)
Facial Recognition , Humans , Electroencephalography , Evoked Potentials , Emotions , Facial Expression , Cognition