Results 1 - 5 of 5
1.
Front Psychol ; 14: 1148610, 2023.
Article in English | MEDLINE | ID: mdl-37205072

ABSTRACT

Listening causes great difficulty for EFL learners, yet little is known about how EFL learners' metacognitive awareness contributes to their listening performance and to their mastery of listening subskills. In the present study, the Metacognitive Awareness Listening Questionnaire (MALQ) and an in-house listening test were used to collect data from 567 Chinese EFL college students. The G-DINA package in R was adopted to identify students' mastery patterns of listening subskills. Correlations between test takers' MALQ results and their listening scores, and between MALQ results and their subskill mastery probabilities, were analyzed to investigate how metacognitive awareness relates to language proficiency and listening subskills. The results show that learners' metacognitive awareness has a significant positive relationship with their listening performance at both the overall and subskill levels. The findings provide additional evidence for using the MALQ as an instrument to interpret learners' metacognitive awareness of listening strategies. It is therefore recommended that theorists and language teachers incorporate metacognitive awareness of strategies into listening instruction.
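The analysis pipeline summarized above (cognitive-diagnostic modelling followed by correlation analysis) can be illustrated roughly as follows. The study itself fitted the model with the G-DINA package in R; the Python sketch below only mimics the final correlation step on simulated data, and the subskill count, score scales, and variable names are assumptions.

```python
# Hypothetical sketch of the correlation step described above (the study itself
# fit the CDM with the G-DINA package in R; all data here are simulated).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students, n_subskills = 567, 5          # sample size from the abstract; subskill count is assumed

malq = rng.normal(4.0, 0.8, n_students)                 # mean MALQ score per student (scale assumed)
listening = 0.4 * malq + rng.normal(0, 1, n_students)   # total listening score (simulated link)
mastery = rng.uniform(0, 1, (n_students, n_subskills))  # posterior mastery probability per subskill

r, p = pearsonr(malq, listening)
print(f"MALQ vs. overall listening score: r={r:.2f}, p={p:.3g}")
for k in range(n_subskills):
    r_k, p_k = pearsonr(malq, mastery[:, k])
    print(f"MALQ vs. mastery probability of subskill {k + 1}: r={r_k:.2f}, p={p_k:.3g}")
```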

2.
Neuroimage ; 251: 118979, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35143977

ABSTRACT

Human language is generally combinatorial: Words are combined into sentences to flexibly convey meaning. How the brain represents sentences, however, remains debated. Recently, it has been shown that delta-band cortical activity correlates with the sentential structure of speech. It is unclear, however, whether delta-band cortical tracking of sentences truly reflects mental representations of sentences or is caused by neural encoding of semantic properties of individual words. The current study investigates whether delta-band neural tracking of speech can be explained by semantic properties of individual words. Cortical activity is recorded using electroencephalography (EEG) while participants listen to sentences presented at 1 Hz and to word lists. The semantic properties of individual words, simulated using a word2vec model, predict a stronger 1 Hz response to word lists than to sentences. However, when listeners perform a word-monitoring task that does not require sentential processing, the 1 Hz response to word lists is much weaker than the 1 Hz response to sentences, contradicting the prediction of the lexical-semantics model. When listeners are explicitly asked to parse word lists into multi-word chunks, in contrast, cortical activity can reliably track the multi-word chunks. Taken together, these results suggest that delta-band neural responses to speech cannot be fully explained by the semantic properties of single words and are potentially related to the neural representation of multi-word chunks.
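A minimal sketch of the frequency-tagging readout implied here: estimate the amplitude of the 1 Hz EEG response and compare it with neighbouring frequency bins. The sampling rate, epoch length, and the synthetic trial-averaged signal below are assumptions, not the study's actual parameters.

```python
# Toy frequency-tagging analysis: amplitude of the 1 Hz response vs. neighbouring bins.
import numpy as np

fs, dur = 100, 10                      # Hz, seconds (10 s epoch -> 0.1 Hz resolution), assumed
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(1)

# Simulate trial-averaged EEG containing a weak 1 Hz component plus noise.
eeg = 0.5 * np.sin(2 * np.pi * 1.0 * t) + rng.normal(0, 1, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

target = spectrum[np.argmin(np.abs(freqs - 1.0))]
neighbours = spectrum[(freqs > 0.5) & (freqs < 1.5) & (np.abs(freqs - 1.0) > 1e-9)]
print(f"1 Hz amplitude: {target:.3f}; mean neighbouring amplitude: {neighbours.mean():.3f}")
```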


Subjects
Language, Semantics, Auditory Perception/physiology, Electroencephalography, Humans, Speech/physiology
3.
Sheng Li Xue Bao ; 71(6): 935-945, 2019 Dec 25.
Article in Chinese | MEDLINE | ID: mdl-31879748

ABSTRACT

Speech comprehension is a central cognitive function of the human brain. A fundamental question in cognitive neuroscience is how neural activity encodes the acoustic properties of a continuous speech stream while simultaneously resolving multiple levels of linguistic structure. This paper reviews recently developed research paradigms that employ electroencephalography (EEG) or magnetoencephalography (MEG) to capture neural tracking of acoustic features or linguistic structures of continuous speech. The review focuses on two questions in speech processing: (1) the encoding of continuously changing acoustic properties of speech; (2) the representation of hierarchical linguistic units, including syllables, words, phrases and sentences. Studies have found that low-frequency cortical activity tracks the speech envelope, and that cortical activity on different time scales tracks multiple levels of linguistic units, constituting a representation of hierarchically organized linguistic structure. These studies provide new insights into the processing of continuous speech in the human brain.
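The envelope-tracking paradigm described in this review can be sketched as follows: extract the broadband speech envelope with the Hilbert transform, low-pass it to the delta/theta range, and correlate it with cortical activity. All signals, rates, and filter settings below are illustrative assumptions.

```python
# Sketch of envelope tracking: Hilbert envelope, low-pass filtering, correlation with (simulated) EEG.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs = 100                                   # common analysis rate (Hz), assumed
t = np.arange(0, 60, 1 / fs)               # one minute of "speech"
rng = np.random.default_rng(2)

speech = rng.normal(0, 1, t.size) * (1 + np.sin(2 * np.pi * 3 * t))  # toy amplitude-modulated carrier
envelope = np.abs(hilbert(speech))

b, a = butter(4, 8 / (fs / 2), btype="low")                 # keep the delta/theta range (< 8 Hz)
env_lf = filtfilt(b, a, envelope)

eeg = 0.3 * env_lf + rng.normal(0, 1, t.size)               # simulated cortical signal tracking the envelope
r = np.corrcoef(env_lf, eeg)[0, 1]
print(f"envelope-EEG correlation: r = {r:.2f}")
```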


Subjects
Electroencephalography, Magnetoencephalography, Speech, Acoustic Stimulation, Humans, Speech/physiology, Speech Perception
4.
Neuroimage ; 192: 66-75, 2019 05 15.
Article in English | MEDLINE | ID: mdl-30822469

ABSTRACT

Recognizing speech in noisy environments is a challenging task that involves both auditory and language mechanisms. Previous studies have demonstrated that the human auditory cortex can reliably track the temporal envelope of speech in noisy environments, which provides a plausible neural basis for noise-robust speech recognition. The current study aimed to tease apart auditory and language contributions to noise-robust envelope tracking by comparing the neural responses of 2 groups of listeners, i.e., native listeners and foreign listeners who did not understand the testing language. In the experiment, speech signals were mixed with spectrally matched stationary noise at 4 intensity levels, and listeners' neural responses were recorded using electroencephalography (EEG). When the noise intensity increased, the neural response gain increased in both groups of listeners, demonstrating auditory gain control. Language comprehension generally reduced the response gain and envelope-tracking precision, and modulated the spatial and temporal profile of envelope-tracking activity. Based on the spatio-temporal dynamics of envelope-tracking activity, a linear classifier could jointly decode the 2 listener groups and 4 levels of noise intensity. Altogether, the results showed that without feedback from language processing, auditory mechanisms such as gain control can lead to a noise-robust speech representation. High-level language processing modulated the spatio-temporal profile of the neural representation of the speech envelope, rather than generally enhancing the envelope representation.
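The decoding analysis mentioned above can be sketched roughly: a linear classifier assigns each observation to one of the 2 listener groups x 4 noise levels (8 classes) based on envelope-tracking features. The feature construction, trial counts, and classifier choice below are assumptions applied to simulated data, not the study's actual pipeline.

```python
# Rough sketch of joint decoding of listener group and noise level (8 classes) with a linear classifier.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_class, n_features = 40, 32             # assumed observation count and spatio-temporal feature count
labels = np.repeat(np.arange(8), n_per_class)            # group x noise-level combinations
class_means = rng.normal(0, 1, (8, n_features))
features = class_means[labels] + rng.normal(0, 2, (labels.size, n_features))

scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=5)
print(f"8-class decoding accuracy: {scores.mean():.2f} (chance = 0.125)")
```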


Subjects
Brain/physiology, Language, Noise, Speech Perception/physiology, Adolescent, Adult, Comprehension/physiology, Female, Humans, Male, Young Adult
5.
J Neurosci ; 38(5): 1178-1188, 2018 01 31.
Article in English | MEDLINE | ID: mdl-29255005

ABSTRACT

How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention.

SIGNIFICANCE STATEMENT: Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention.
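The concurrent tracking of syllable and word rhythms can be illustrated with a toy frequency-tagging computation: read out the spectral amplitude at the syllable rate and at the word rate under an "attended" and an "unattended" condition. The rates (4 Hz syllables, 2 Hz disyllabic words), the attention effect, and all signals below are assumptions for illustration only.

```python
# Toy model: syllable-rate response is present in both conditions; the word-rate
# response is attenuated when attention is withdrawn.
import numpy as np

fs, dur = 200, 10                      # sampling rate and epoch length, assumed
t = np.arange(fs * dur) / fs
rng = np.random.default_rng(4)

def response(word_gain):
    # 4 Hz syllable component is always present; the 2 Hz word component is
    # scaled by attention in this toy model.
    return (np.sin(2 * np.pi * 4 * t)
            + word_gain * np.sin(2 * np.pi * 2 * t)
            + rng.normal(0, 1, t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, gain in [("attended", 1.0), ("unattended", 0.1)]:
    amp = np.abs(np.fft.rfft(response(gain))) / t.size
    syl = amp[np.argmin(np.abs(freqs - 4))]
    word = amp[np.argmin(np.abs(freqs - 2))]
    print(f"{name}: syllable-rate amplitude {syl:.2f}, word-rate amplitude {word:.2f}")
```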


Subjects
Attention/physiology, Knowledge, Language Development, Learning/physiology, Acoustic Stimulation, Adolescent, Adult, Disyllabic Word List Tests, Electroencephalography, Auditory Evoked Potentials, Female, Humans, Language, Male, Phonetics, Photic Stimulation, Psychomotor Performance, Speech, Young Adult