Results 1 - 3 of 3
1.
Sci Data ; 8(1): 187, 2021 07 20.
Article in English | MEDLINE | ID: mdl-34285240

ABSTRACT

Real-time magnetic resonance imaging (RT-MRI) of human speech production is enabling significant advances in speech science, linguistics, bio-inspired speech technology development, and clinical applications. Access to RT-MRI is, however, limited, and comprehensive datasets with broad access are needed to catalyze research across numerous domains. Imaging the rapidly moving articulators and dynamic airway shaping during speech demands high spatio-temporal resolution and robust reconstruction methods. Further, while reconstructed images have been published, to date there is no open dataset providing raw multi-coil RT-MRI data from an optimized speech production experimental setup. Such datasets could enable new and improved methods for dynamic image reconstruction, artifact correction, feature extraction, and direct extraction of linguistically relevant biomarkers. The present dataset offers a unique corpus of 2D sagittal-view RT-MRI videos with synchronized audio for 75 participants performing linguistically motivated speech tasks, alongside the corresponding public-domain raw RT-MRI data. The dataset also includes 3D volumetric vocal tract MRI during sustained speech sounds and high-resolution static anatomical T2-weighted upper airway MRI for each participant.


Subjects
Larynx/physiology , Magnetic Resonance Imaging/methods , Speech , Adolescent , Adult , Computer Systems , Female , Humans , Male , Middle Aged , Time Factors , Video Recording , Young Adult
2.
J Phon ; 77, 2019 Nov.
Article in English | MEDLINE | ID: mdl-32863471

ABSTRACT

In producing linguistic prominence, certain linguistic elements are highlighted relative to others in a given domain; focus is an instance of prominence in which speakers highlight new or important information. This study investigates prominence modulation at the sub-syllable level using a corrective focus task, examining acoustic duration and pitch with particular attention to the gestural composition of Korean tense and lax consonants. The results indicate that focus effects are manifested with systematic variations depending on the gestural structures (i.e., consonants) active during the domain of a focus gesture, and that the patterns of focus modulation do not differ as a function of elicited focus positions within the syllable. The findings generally support the premise that the scope of the focus gesture is not (much) smaller than the interval of a (CVC) syllable. Lastly, there is also some support for an interaction among prosodic gestures (focus gestures and pitch-accentual gestures) at the phrase level. Overall, the current findings support the hypothesis that focus, implemented as a prosodic prominence gesture, modulates the temporal characteristics of gestures, as well as possibly other prosodic gestures that are co-active in its domain.

3.
J Acoust Soc Am ; 144(4): EL290, 2018 10.
Article in English | MEDLINE | ID: mdl-30404513

ABSTRACT

Real-time magnetic resonance imaging (MRI) data of speech production have expanded the understanding of vocal tract actions. This letter presents an Automatic Centroid Tracking tool, ACT, which obtains both spatial and temporal information characterizing multi-directional articulatory movement. ACT auto-segments an articulatory object, composed of connected pixels in a real-time MRI video, by finding its intensity centroids over time, and returns kinematic profiles including direction and magnitude information for the object. This letter discusses the utility of ACT, which outperforms other similar object-tracking techniques, by demonstrating its successful online tracking of vertical larynx movement. ACT can be deployed generally for dynamic image processing and analysis.
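The centroid-tracking idea the abstract describes can be sketched as follows. This is a minimal illustration, not the ACT implementation: the simple intensity-threshold segmentation and the function names `track_centroid` and `kinematic_profile` are assumptions, since the abstract does not specify how the connected-pixel object is segmented.

```python
import numpy as np

def track_centroid(frames, threshold=0.5):
    """For each image frame, segment bright pixels by a simple relative
    intensity threshold (an assumed stand-in for ACT's auto-segmentation)
    and return the intensity-weighted centroid (row, col) per frame."""
    centroids = []
    for frame in frames:
        mask = frame > threshold * frame.max()
        ys, xs = np.nonzero(mask)
        weights = frame[ys, xs]
        cy = np.sum(ys * weights) / weights.sum()
        cx = np.sum(xs * weights) / weights.sum()
        centroids.append((cy, cx))
    return np.array(centroids)

def kinematic_profile(centroids):
    """Frame-to-frame displacement magnitude and direction (radians),
    i.e. the magnitude and direction information of the tracked object."""
    deltas = np.diff(centroids, axis=0)
    magnitude = np.linalg.norm(deltas, axis=1)
    direction = np.arctan2(deltas[:, 0], deltas[:, 1])
    return magnitude, direction
```

Tracking the centroid rather than an object boundary makes the profile robust to small segmentation errors, which is one plausible reason a centroid-based tracker suits noisy real-time MRI video.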


Subjects
Larynx/diagnostic imaging , Magnetic Resonance Imaging/methods , Software , Speech , Voice , Humans