Results 1 - 20 of 31
1.
Nat Neurosci ; 26(4): 664-672, 2023 04.
Article in English | MEDLINE | ID: mdl-36928634

ABSTRACT

Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similar to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.
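The model-comparison logic described above — scoring each representation model by how well the dissimilarity structure it predicts matches perceived sound dissimilarity — can be sketched as a minimal representational similarity analysis. The sketch below is illustrative only: the feature matrices, sizes, and noise level are invented, not the authors' data or code.

```python
# Minimal representational-dissimilarity sketch (illustrative, not the
# authors' code): a model is scored by the rank correlation between the
# pairwise dissimilarities it predicts and behavioural dissimilarity.
import numpy as np
from itertools import combinations

def rdm(features):
    """Condensed representational dissimilarity matrix: pairwise Euclidean
    distances between rows (one row = one sound's model representation)."""
    return np.array([np.linalg.norm(a - b)
                     for a, b in combinations(features, 2)])

def ranks(v):
    """Simple ranks (no tie handling; fine for continuous distances)."""
    r = np.empty(len(v))
    r[v.argsort()] = np.arange(len(v))
    return r

def model_fit(model_features, perceived):
    """Spearman correlation between a model RDM and perceived dissimilarity."""
    return np.corrcoef(ranks(rdm(model_features)), ranks(perceived))[0, 1]

rng = np.random.default_rng(0)
true_features = rng.normal(size=(20, 50))    # 20 sounds x 50 model features
perceived = rdm(true_features) + rng.normal(scale=0.1, size=190)
other_model = rng.normal(size=(20, 50))      # an unrelated competitor model
print(model_fit(true_features, perceived))   # high: model matches behaviour
print(model_fit(other_model, perceived))     # near zero: model does not
```

In the paper's framework the same statistic would be computed for acoustic, semantic and deep-network feature sets and the models compared against each other; the simplification here is deliberate.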


Subjects
Auditory Cortex , Semantics , Humans , Acoustic Stimulation/methods , Auditory Cortex/physiology , Acoustics , Magnetic Resonance Imaging , Auditory Perception/physiology , Brain Mapping/methods
2.
Cereb Cortex ; 33(7): 3621-3635, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36045002

ABSTRACT

Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.


Assuntos
Mãos , Córtex Somatossensorial , Animais , Córtex Somatossensorial/diagnóstico por imagem , Córtex Somatossensorial/fisiologia , Tato/fisiologia , Neurônios/fisiologia , Imageamento por Ressonância Magnética , Mapeamento Encefálico
3.
Front Psychol ; 13: 964209, 2022.
Article in English | MEDLINE | ID: mdl-36312201

ABSTRACT

Taxonomies and ontologies for the characterization of everyday sounds have been developed in several research fields, including auditory cognition, soundscape research, artificial hearing, sound design, and medicine. Here, we surveyed 36 such knowledge organization systems, which we identified through a systematic literature search. To evaluate the semantic domains covered by these systems within a homogeneous framework, we introduced a comprehensive set of verbal sound descriptors (sound source properties; attributes of sensation; sound signal descriptors; onomatopoeias; music genres), which we used to manually label the surveyed descriptor classes. We reveal that most taxonomies and ontologies were developed to characterize higher-level semantic relations between sound sources in terms of the sound-generating objects and actions involved (what/how), or in terms of the environmental context (where). This indicates the current lack of a comprehensive ontology of everyday sounds that simultaneously covers all of these semantic aspects. Such an ontology may have a wide range of applications and purposes, ranging from extending our scientific knowledge of auditory processes in the real world, to developing artificial hearing systems.

4.
Front Neurosci ; 16: 921489, 2022.
Article in English | MEDLINE | ID: mdl-36148146

ABSTRACT

We used functional Magnetic Resonance Imaging (fMRI) to explore synchronized neural responses between observers of an audiovisual presentation of a string quartet performance during free viewing. Audio presentation was accompanied by visual presentation of the string quartet as stick figures observed from a static viewpoint. Brain data from 18 musical novices were obtained during audiovisual presentation of a 116 s performance of the allegro of String Quartet No. 14 in D minor by Schubert, played by the 'Quartetto di Cremona.' These data were analyzed using intersubject correlation (ISC). Results showed extensive ISC in auditory and visual areas as well as parietal cortex, frontal cortex and subcortical areas including the medial geniculate and basal ganglia (putamen). The ISC observed from this single fixed viewpoint of multiple musicians is greater than previously reported for unstructured group activity, but broadly consistent with related research that used ISC to explore listening to music or watching solo dance. A feature analysis examining the relationship between brain activity and physical features of the auditory and visual signals revealed that a large proportion of the correlated activity was related to auditory and visual processing, particularly in the superior temporal gyrus (STG) as well as midbrain areas. Motor areas were also involved, potentially as a result of watching motion in the stick-figure display of the musicians in the string quartet. These results reveal the involvement of areas such as the putamen in processing complex musical performance and highlight the potential of using brief naturalistic stimuli to localize distinct brain areas and elucidate potential mechanisms underlying multisensory integration.

5.
Neuroimage ; 258: 119347, 2022 09.
Article in English | MEDLINE | ID: mdl-35660460

ABSTRACT

The reproducibility crisis in neuroimaging, and in particular the case of underpowered studies, has raised doubts about our ability to reproduce, replicate and generalize findings. In response, we have seen the emergence of suggested guidelines and principles for neuroscientists, known as Good Scientific Practice, for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recordings, it also represents a striking obstacle to reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, to perform group-level inferences on non-negative measures of information, encompassing metrics from information theory, machine learning, and measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models at retrieving the ground truth, together with test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population, covering scales from the local level of brain regions, through inter-areal functional connectivity, to measures summarizing network properties. We also present an open-source Python toolbox called Frites that implements the proposed statistical pipeline using information-theoretic metrics such as single-trial functional connectivity estimations for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention, as its robustness and flexibility could be a starting point toward the standardization of statistical approaches.
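As a rough illustration of the core idea — group-level permutation inference on a non-negative effect measure — consider the following sketch. It is not the Frites implementation: the effect measure is a plain absolute mean difference standing in for an information-theoretic metric, and the fixed-effect group statistic is simply the mean across subjects.

```python
# Sketch of group-level permutation inference on a non-negative measure.
# Hypothetical stand-in for an information metric; not the Frites code.
import numpy as np

def effect(x, labels):
    """Non-negative, MI-like effect size: |mean(A) - mean(B)|."""
    return abs(x[labels == 0].mean() - x[labels == 1].mean())

def group_permutation_test(data, labels, n_perm=1000, seed=0):
    """data: per-subject trial vectors; labels: matching condition labels.
    Fixed-effect model: the group statistic is the mean effect across
    subjects. The null distribution is built by shuffling condition labels
    within each subject."""
    rng = np.random.default_rng(seed)
    observed = np.mean([effect(x, l) for x, l in zip(data, labels)])
    null = np.array([
        np.mean([effect(x, rng.permutation(l)) for x, l in zip(data, labels)])
        for _ in range(n_perm)
    ])
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p

rng = np.random.default_rng(1)
labels = [np.repeat([0, 1], 30) for _ in range(10)]        # 10 subjects, 60 trials
data = [rng.normal(l.astype(float), 1.0) for l in labels]  # true effect of 1
observed, p = group_permutation_test(data, labels)
print(observed, p)   # the simulated effect survives the permutation test
```

In the actual framework the same permutation scheme extends to random-effect models and to cluster-based corrections for multiple comparisons.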


Assuntos
Mapeamento Encefálico , Encéfalo , Encéfalo/fisiologia , Mapeamento Encefálico/métodos , Cognição , Humanos , Neuroimagem/métodos , Reprodutibilidade dos Testes
6.
Curr Biol ; 31(21): 4839-4844.e4, 2021 11 08.
Article in English | MEDLINE | ID: mdl-34506729

ABSTRACT

How the evolution of speech has transformed the human auditory cortex compared to other primates remains largely unknown. While primary auditory cortex is organized largely similarly in humans and macaques,1 the picture is much less clear at higher levels of the anterior auditory pathway,2 particularly regarding the processing of conspecific vocalizations (CVs). A "voice region" similar to the human voice-selective areas3,4 has been identified in the macaque right anterior temporal lobe with functional MRI;5 however, its anatomical localization, seemingly inconsistent with that of the human temporal voice areas (TVAs), has suggested a "repositioning of the voice area" in recent human evolution.6 Here we report a functional homology in the cerebral processing of vocalizations by macaques and humans, using comparative fMRI and a condition-rich auditory stimulation paradigm. We find that the anterior temporal lobe of both species possesses cortical voice areas that are bilateral and not only prefer conspecific vocalizations but also implement a representational geometry categorizing them apart from all other sounds in a species-specific but homologous manner. These results reveal a more similar functional organization of higher-level auditory cortex in macaques and humans than currently known.


Assuntos
Córtex Auditivo , Estimulação Acústica , Animais , Córtex Auditivo/fisiologia , Percepção Auditiva/fisiologia , Mapeamento Encefálico , Humanos , Macaca , Imageamento por Ressonância Magnética , Primatas , Vocalização Animal/fisiologia
7.
Nat Hum Behav ; 5(9): 1203-1213, 2021 09.
Article in English | MEDLINE | ID: mdl-33707658

ABSTRACT

Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic-temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions.


Assuntos
Nível de Alerta/fisiologia , Emoções/fisiologia , Percepção da Fala/fisiologia , Estimulação Acústica/métodos , Ira , Humanos , Voz/fisiologia
8.
Cognition ; 200: 104249, 2020 07.
Article in English | MEDLINE | ID: mdl-32413547

ABSTRACT

Affective vocalisations such as screams and laughs can convey strong emotional content without verbal information. Previous research using morphed vocalisations (e.g. 25% fear/75% anger) has revealed categorical perception of emotion in voices, showing sudden shifts at emotion category boundaries. However, it is currently unknown how further modulation of vocalisations beyond the veridical emotion (e.g. 125% fear) affects perception. Caricatured facial expressions produce emotions that are perceived as more intense and distinctive, with faster recognition relative to the original and anti-caricatured (e.g. 75% fear) emotions, but a similar effect using vocal caricatures has not been previously examined. Furthermore, caricatures can play a key role in assessing how distinctiveness is identified, in particular by evaluating accounts of emotion perception with reference to prototypes (distance from the central stimulus) and exemplars (density of the stimulus space). Stimuli consisted of four emotions (anger, disgust, fear, and pleasure) morphed at 25% intervals between a neutral expression and each emotion from 25% to 125%, and between each pair of emotions. Emotion perception was assessed using emotion intensity ratings, valence and arousal ratings, speeded categorisation and paired similarity ratings. We report two key findings: 1) across tasks, there was a strongly linear effect of caricaturing, with caricatured emotions (125%) perceived as higher in emotion intensity and arousal, and recognised faster compared to the original emotion (100%) and anti-caricatures (25%-75%); 2) our results reveal evidence for a unique contribution of a prototype-based account in emotion recognition. We show for the first time that vocal caricature effects are comparable to those found previously with facial caricatures. 
The set of caricatured vocalisations presented here opens a promising line of research for investigating vocal affect perception and emotion-processing deficits in clinical populations.
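The morphing percentages in this abstract follow simple linear interpolation and extrapolation along the neutral-to-emotion trajectory in a stimulus feature space. The sketch below illustrates only the arithmetic; the feature values are invented placeholders, not the study's stimulus parameters.

```python
# Morphing arithmetic: level < 1 gives anti-caricatures, level = 1 the
# original emotion, level > 1 caricatures. Feature values are invented.
import numpy as np

def morph(neutral, emotion, level):
    """Linear morph: neutral + level * (emotion - neutral)."""
    return neutral + level * (emotion - neutral)

neutral = np.array([200.0, 60.0])   # e.g. mean f0 (Hz), intensity (dB)
fear = np.array([320.0, 72.0])      # hypothetical 100% fear expression
for level in (0.25, 0.75, 1.0, 1.25):
    print(f"{int(level * 100)}% fear:", morph(neutral, fear, level))
```

The 125% caricature exaggerates every feature beyond the original emotion, which is the manipulation the intensity and recognition-speed results above track.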


Assuntos
Percepção Social , Voz , Ira , Emoções , Expressão Facial , Humanos
9.
Elife ; 6, 2017 06 07.
Article in English | MEDLINE | ID: mdl-28590903

ABSTRACT

Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker's face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.


Assuntos
Percepção Auditiva , Lobo Frontal/fisiologia , Percepção da Fala , Lobo Temporal/fisiologia , Percepção Visual , Adolescente , Adulto , Mapeamento Encefálico , Feminino , Humanos , Imageamento por Ressonância Magnética , Masculino , Adulto Jovem
10.
IEEE Trans Haptics ; 10(1): 113-122, 2017.
Article in English | MEDLINE | ID: mdl-27390182

ABSTRACT

An experiment was conducted to study the effects of force produced by active touch on vibrotactile perceptual thresholds. The task consisted of pressing the fingertip against a flat rigid surface that provided either sinusoidal or broadband vibration. Three force levels were considered, ranging from light touch to hard press. Finger contact areas were measured during the experiment, showing positive correlation with the respective applied forces. Significant effects on thresholds were found for vibration type and force level. Moreover, possibly due to the concurrent effect of large (unconstrained) finger contact areas, active pressing forces, and long-duration stimuli, the measured perceptual thresholds are considerably lower than those previously reported in the literature.


Assuntos
Dedos/fisiologia , Limiar Sensorial/fisiologia , Tato/fisiologia , Humanos , Fenômenos Mecânicos , Vibração
11.
Hum Brain Mapp ; 38(3): 1541-1573, 2017 03.
Article in English | MEDLINE | ID: mdl-27860095

ABSTRACT

We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc.
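The estimator combines the two ingredients the abstract names: a copula (rank) transform that gives each variable standard-normal marginals while preserving its dependence structure, and the closed-form entropy of Gaussian variables. A bivariate sketch follows, deliberately simplified relative to the published toolbox (no bias correction, continuous variables only):

```python
# Bivariate Gaussian-copula mutual information sketch (simplified; the
# published toolbox is multivariate and bias-corrected).
import numpy as np
from statistics import NormalDist

def copnorm(x):
    """Rank-transform to (0, 1), then map through the inverse Gaussian CDF:
    standard-normal marginals, original copula (dependence) preserved."""
    r = x.argsort().argsort()
    u = (r + 1) / (len(x) + 1)
    return np.array([NormalDist().inv_cdf(v) for v in u])

def gcmi(x, y):
    """Mutual information in bits via the closed-form Gaussian expression
    I = -0.5 * log2(1 - rho^2) applied to the copula-normalised data."""
    rho = np.corrcoef(copnorm(x), copnorm(y))[0, 1]
    return -0.5 * np.log2(1.0 - rho ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + rng.normal(size=2000)   # dependent pair: MI well above zero
z = rng.normal(size=2000)       # independent pair: MI near zero
print(gcmi(x, y), gcmi(x, z))
```

Because the transform uses only ranks, the estimate is invariant to monotonic rescaling of the inputs, which is part of what makes the approach robust in practice.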


Assuntos
Mapeamento Encefálico , Encéfalo/diagnóstico por imagem , Encéfalo/fisiologia , Teoria da Informação , Neuroimagem/métodos , Distribuição Normal , Simulação por Computador , Eletroencefalografia , Entropia , Humanos , Sensibilidade e Especificidade
12.
Exp Brain Res ; 234(4): 1145-58, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26790425

ABSTRACT

Skilled interactions with sounding objects, such as drumming, rely on resolving the uncertainty in the acoustical and tactual feedback signals generated by vibrating objects. Uncertainty may arise from mis-estimation of the objects' geometry-independent mechanical properties, such as surface stiffness. How multisensory information feeds back into the fine-tuning of sound-generating actions remains unexplored. Participants (percussionists, non-percussion musicians, or non-musicians) held a stylus and learned to control their wrist velocity while repeatedly striking a virtual sounding object whose surface stiffness was under computer control. Sensory feedback was manipulated by perturbing the surface stiffness specified by audition and haptics in a congruent or incongruent manner. The compensatory changes in striking velocity were measured as the motor effects of the sensory perturbations, and sensory dominance was quantified by the asymmetry of congruency effects across audition and haptics. A pronounced dominance of haptics over audition suggested a superior utility of somatosensation developed through long-term experience with object exploration. Large interindividual differences in the motor effects of haptic perturbation potentially arose from a differential reliance on the type of tactual prediction error for which participants tend to compensate: vibrotactile force versus object deformation. Musical experience did not have much of an effect beyond a slightly greater reliance on object deformation in mallet percussionists. The bias toward haptics in the presence of crossmodal perturbations was greater when participants appeared to rely on object deformation feedback, suggesting a weaker association between haptically sensed object deformation and the acoustical structure of concomitant sound during everyday experience of actions upon objects.


Assuntos
Estimulação Acústica/métodos , Percepção Auditiva/fisiologia , Movimento/fisiologia , Punho/fisiologia , Adolescente , Adulto , Feminino , Humanos , Masculino , Estimulação Física/métodos , Adulto Jovem
13.
J Acoust Soc Am ; 138(1): 457-66, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26233044

ABSTRACT

Dynamic information in acoustical signals produced by bouncing objects is often used by listeners to predict the objects' future behavior (e.g., hitting a ball). This study examined factors that affect the accuracy of motor responses to sounds of real-world dynamic events. In experiment 1, listeners heard 2-5 bounces from a tennis ball, ping-pong ball, basketball, or wiffle ball, and tapped to indicate the time of the next bounce in the series. Across ball types and numbers of bounces, listeners were extremely accurate in predicting the correct bounce time (CT), with a mean prediction error of only 2.58% of the CT. Predictions based on a physical model of bouncing events indicated that listeners relied primarily on temporal cues when estimating the timing of the next bounce, and to a lesser extent on loudness and spectral cues. In experiment 2, the timing of each bounce pattern was altered to correspond to the bounce timing pattern of another ball, producing stimuli with contradictory acoustic cues. Nevertheless, listeners remained highly accurate in their estimates of bounce timing. This suggests that listeners base their estimates of bouncing-object timing on the acoustic cues that provide the most veridical information about dynamic aspects of object behavior.


Assuntos
Antecipação Psicológica/fisiologia , Reconhecimento Fisiológico de Modelo/fisiologia , Som , Percepção do Tempo/fisiologia , Estimulação Acústica , Acústica , Adolescente , Sinais (Psicologia) , Feminino , Humanos , Masculino , Modelos Psicológicos , Movimento (Física) , Localização de Som , Equipamentos Esportivos , Fatores de Tempo , Adulto Jovem
14.
Proc Natl Acad Sci U S A ; 111(38): 13795-8, 2014 Sep 23.
Article in English | MEDLINE | ID: mdl-25201950

ABSTRACT

The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.


Assuntos
Compreensão/fisiologia , Idioma , Inteligibilidade da Fala/fisiologia , Percepção da Fala/fisiologia , Adulto , Feminino , Humanos , Masculino
15.
Front Neurosci ; 8: 228, 2014.
Article in English | MEDLINE | ID: mdl-25126055

ABSTRACT

There is evidence not only for behavioral differences in voice perception between female and male listeners, but also recent suggestions of differences in neural correlates between genders. The fMRI functional voice localizer (comprising a univariate analysis contrasting stimulation with vocal vs. non-vocal sounds) is known to give robust estimates of the temporal voice areas (TVAs). However, there is growing interest in employing multivariate analysis approaches to fMRI data (e.g., multivariate pattern analysis; MVPA). The aim of the current study was to localize voice-related areas in both female and male listeners and to investigate whether brain maps may differ depending on the gender of the listener. After a univariate analysis, a random effects analysis was performed on female (n = 149) and male (n = 123) listeners and contrasts between them were computed. In addition, MVPA with a whole-brain searchlight approach was implemented, and classification maps were entered into a second-level permutation-based random effects model using statistical non-parametric mapping (SnPM; Nichols and Holmes, 2002). Gender differences were found only in the MVPA. Identified regions were located in the middle part of the middle temporal gyrus (bilateral) and the middle superior temporal gyrus (right hemisphere). Our results suggest differences in classifier performance between genders in response to the voice localizer, with higher classification accuracy from local BOLD signal patterns in several temporal-lobe regions in female listeners.

16.
Cortex ; 58: 170-85, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25038309

ABSTRACT

Identifying sound sources is fundamental to developing a stable representation of the environment in the face of variable auditory information. The cortical processes underlying this ability have received little attention. In two fMRI experiments, we investigated passive adaptation to (Exp. 1) and explicit discrimination of (Exp. 2) source identities for different categories of auditory objects (voices, musical instruments, environmental sounds). All cortical effects of source identity were independent of high-level category information, and were accounted for by sound-to-sound differences in low-level structure (e.g., loudness). A conjunction analysis revealed that the left posterior middle frontal gyrus (pMFG) adapted to identity repetitions during both passive listening and active discrimination tasks. These results indicate that the comparison of sound source identities in a stream of auditory stimulation recruits the pMFG in a domain-general way, i.e., independent of the sound category, based on information contained in the low-level acoustical structure. pMFG recruitment during both passive listening and explicit identity comparison tasks also suggests its automatic engagement in sound source identity processing.


Assuntos
Atenção/fisiologia , Córtex Auditivo/fisiologia , Percepção Auditiva/fisiologia , Lobo Frontal/fisiologia , Localização de Som/fisiologia , Estimulação Acústica , Adulto , Mapeamento Encefálico/métodos , Feminino , Neuroimagem Funcional , Humanos , Processamento de Imagem Assistida por Computador , Imageamento por Ressonância Magnética , Masculino , Adulto Jovem
17.
PLoS One ; 9(12): e115587, 2014.
Article in English | MEDLINE | ID: mdl-25551392

ABSTRACT

Several studies have investigated the encoding and perception of emotional expressivity in music performance. A relevant question concerns how the ability to communicate emotions in music performance is acquired. In accordance with recent theories on the embodiment of emotion, we suggest here that both the expression and recognition of emotion in music might at least in part rely on knowledge about the sounds of expressive body movements. We test this hypothesis by drawing parallels between the musical expression of emotions and the expression of emotions in sounds associated with a non-musical motor activity: walking. In a combined production-perception design, two experiments were conducted, and expressive acoustical features were compared across modalities. An initial performance experiment tested for similar feature use in walking sounds and music performance, and revealed that strong similarities exist. Features related to sound intensity, tempo and tempo regularity were identified as being used similarly in both domains. Participants in a subsequent perception experiment were able to recognize both non-emotional and emotional properties of the sound-generating walkers. An analysis of the acoustical correlates of behavioral data revealed that variations in sound intensity, tempo, and tempo regularity were likely used to recognize expressed emotions. Taken together, these results lend support to the motor-origin hypothesis for the musical expression of emotions.


Assuntos
Percepção Auditiva , Emoções , Atividade Motora , Música , Caminhada/psicologia , Adulto , Feminino , Humanos , Masculino , Pessoa de Meia-Idade
18.
Top Cogn Sci ; 5(2): 354-66, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23495123

ABSTRACT

We assessed the automaticity of spatial-numerical and spatial-musical associations by testing their intentionality and load sensitivity in a dual-task paradigm. In separate sessions, 16 healthy adults performed magnitude and pitch comparisons on sung numbers with variable pitch. Stimuli and response alternatives were identical, but the relevant stimulus attribute (pitch or number) differed between tasks. Concomitant tasks required retention of either color or location information. Results show that spatial associations of both magnitude and pitch are load sensitive and that the spatial association for pitch is more powerful than that for magnitude. These findings argue against the automaticity of spatial mappings in either stimulus dimension.


Assuntos
Cognição/fisiologia , Conceitos Matemáticos , Percepção da Altura Sonora/fisiologia , Canto/fisiologia , Percepção Espacial/fisiologia , Adulto , Feminino , Humanos , Masculino , Tempo de Reação/fisiologia , Adulto Jovem
19.
Cereb Cortex ; 23(9): 2025-37, 2013 Sep.
Article in English | MEDLINE | ID: mdl-22802575

ABSTRACT

The human brain is thought to process auditory objects along a hierarchical temporal "what" stream that progressively abstracts object information from the low-level structure (e.g., loudness) as processing proceeds along the middle-to-anterior direction. Empirical demonstrations of abstract object encoding, independent of low-level structure, have relied on speech stimuli, and non-speech studies of object-category encoding (e.g., human vocalizations) often lack a systematic assessment of low-level information (e.g., vocalizations are highly harmonic). It is currently unknown whether abstract encoding constitutes a general functional principle that operates for auditory objects other than speech. We combined multivariate analyses of functional imaging data with an accurate analysis of the low-level acoustical information to examine the abstract encoding of non-speech categories. We observed abstract encoding of the living and human-action sound categories in the fine-grained spatial distribution of activity in the middle-to-posterior temporal cortex (e.g., planum temporale). Abstract encoding of auditory objects appears to extend to non-speech biological sounds and to operate in regions other than the anterior temporal lobe. Neural processes for the abstract encoding of auditory objects might have facilitated the emergence of speech categories in our ancestors.


Assuntos
Percepção Auditiva/fisiologia , Córtex Cerebral/fisiologia , Estimulação Acústica , Adulto , Feminino , Humanos , Imageamento por Ressonância Magnética , Masculino , Lobo Temporal/fisiologia , Adulto Jovem
20.
J Acoust Soc Am ; 132(6): 4002-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23231129

ABSTRACT

The overall goal of the research presented here is to better understand how players evaluate violins within the wider context of finding relationships between measurable vibrational properties of instruments and their perceived qualities. In this study, the reliability of skilled musicians to evaluate the qualities of a violin was examined. In a first experiment, violinists were allowed to freely play a set of different violins and were then asked to rank the instruments by preference. Results showed that players were self-consistent, but a large amount of inter-individual variability was present. A second experiment was then conducted to investigate the origin of inter-individual differences in the preference for violins and to measure the extent to which different attributes of the instrument influence preference. Again, results showed large inter-individual variations in the preference for violins, as well as in assessing various characteristics of the instruments. Despite the significant lack of agreement in preference and the variability in how different criteria are evaluated between individuals, violin players tend to agree on the relevance of sound "richness" and, to a lesser extent, "dynamic range" for determining preference.


Assuntos
Percepção Auditiva , Julgamento , Música , Estimulação Acústica , Acústica , Adulto , Idoso , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Variações Dependentes do Observador , Reprodutibilidade dos Testes , Som , Vibração , Adulto Jovem