Results 1 - 20 of 99
1.
JASA Express Lett ; 4(6)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38847582

ABSTRACT

The automatic classification of phonation types in the singing voice is essential for tasks such as identifying singing style. This study proposes using wavelet scattering network (WSN)-based features for the classification of phonation types in singing voice. The WSN, which closely resembles auditory physiological models, generates acoustic features that characterize pitch, formant, and timbre information well. Hence, the WSN-based features can effectively capture the discriminative information across phonation types in singing voice. The experimental results show that the proposed WSN-based features improved phonation classification accuracy by at least 9% compared to state-of-the-art features.
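
As an illustration only: the abstract does not give implementation details, but a wavelet scattering front end can be sketched with the kymatio library, with time-averaged coefficients fed to a downstream classifier (the SVM here is an assumption, not stated in the paper).

```python
# Hedged sketch: wavelet scattering features for phonation-type
# classification; J and Q are placeholder settings, not the paper's.
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.svm import SVC

def scattering_features(signal, J=6, Q=8):
    """Time-averaged first- and second-order scattering coefficients."""
    scattering = Scattering1D(J=J, shape=signal.shape[-1], Q=Q)
    coeffs = scattering(signal)        # shape: (n_coefficients, n_frames)
    return coeffs.mean(axis=-1)        # one fixed-length vector per excerpt

# Hypothetical usage: X is a list of equal-length voice excerpts (numpy
# arrays), y the phonation labels (e.g., breathy / neutral / pressed).
# clf = SVC(kernel="rbf").fit(np.stack([scattering_features(x) for x in X]), y)
```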

2.
Article in English | MEDLINE | ID: mdl-38669173

ABSTRACT

Many acoustic features and machine learning models have been studied to build automatic detection systems to distinguish dysarthric speech from healthy speech. These systems can help to improve the reliability of diagnosis. However, speech recorded for diagnosis in real-life clinical conditions can differ from the training data of the detection system in terms of, for example, recording conditions, speaker identity, and language. These mismatches may lead to a reduction in detection performance in practical applications. In this study, we investigate the use of the wav2vec2 model as a feature extractor together with a support vector machine (SVM) classifier to build automatic detection systems for dysarthric speech. The performance of the wav2vec2 features is evaluated in two cross-database scenarios, language-dependent and language-independent, to study their generalizability to unseen speakers, recording conditions, and languages before and after fine-tuning the wav2vec2 model. The results revealed that the fine-tuned wav2vec2 features showed better generalization in both scenarios and gave an absolute accuracy improvement of 1.46% - 8.65% compared to the non-fine-tuned wav2vec2 features.
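
A minimal sketch of the feature-extraction idea, assuming the Hugging Face transformers implementation of wav2vec2 and mean pooling over frames; the checkpoint name and pooling choice are assumptions, and the fine-tuning step described in the abstract is not shown.

```python
# Hedged sketch: utterance-level wav2vec2 embeddings fed to an SVM.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.svm import SVC

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def utterance_embedding(waveform_16k):
    """Mean-pool the final hidden states into one vector per utterance."""
    inputs = extractor(waveform_16k, sampling_rate=16000, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, n_frames, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()

# clf = SVC(kernel="rbf").fit([utterance_embedding(w) for w in train_waves],
#                             train_labels)          # 0 = healthy, 1 = dysarthric
```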

3.
J Voice ; 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36424242

ABSTRACT

Neurogenic voice disorders (NVDs) are caused by damage or malfunction of the central or peripheral nervous system that controls vocal fold movement. In this paper, we investigate the potential of Fisher vector (FV) encoding in the automatic detection of people with NVDs. FVs are used to convert features from the frame level (local descriptors) to the utterance level (global descriptors). At the frame level, we extract two popular cepstral representations, namely mel-frequency cepstral coefficients (MFCCs) and perceptual linear prediction cepstral coefficients (PLPCCs), from acoustic voice signals. In addition, the MFCC features are also extracted from every frame of the glottal source signal computed using a glottal inverse filtering (GIF) technique. The global descriptors derived from the local descriptors are used to train a support vector machine (SVM) classifier. Experiments are conducted using voice signals from 80 healthy speakers and 80 patients with NVDs (40 with spasmodic dysphonia (SD) and 40 with recurrent laryngeal nerve palsy (RLNP)) taken from the Saarbruecken voice disorder (SVD) database. The overall results indicate that the use of FV encoding leads to better identification of people with NVDs compared to the de facto temporal encoding. Furthermore, the SVM trained using the combination of FVs derived from the cepstral and glottal features provides the overall best detection performance.
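
A simplified Fisher-vector sketch (gradients with respect to the GMM means only), assuming librosa MFCCs as the local descriptors and a diagonal-covariance GMM from scikit-learn; the glottal-source features obtained via glottal inverse filtering and the paper's exact encoding and normalization details are omitted.

```python
# Hedged sketch: frame-level MFCCs encoded into an utterance-level
# Fisher vector, then usable with an SVM classifier.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def frame_mfccs(y, sr, n_mfcc=13):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T   # (frames, D)

def fisher_vector(frames, gmm):
    """Encode frame-level descriptors into one global descriptor."""
    post = gmm.predict_proba(frames)                     # (N, K) posteriors
    diff = frames[:, None, :] - gmm.means_[None, :, :]   # (N, K, D)
    grad = (post[:, :, None] * diff / gmm.covariances_[None, :, :]).mean(axis=0)
    grad /= np.sqrt(gmm.weights_)[:, None]
    fv = grad.ravel()
    fv = np.sign(fv) * np.sqrt(np.abs(fv))               # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)             # L2 normalization

# gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(all_frames)
# X = [fisher_vector(frame_mfccs(y, sr), gmm) for y, sr in recordings]
```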

4.
Sensors (Basel) ; 22(13)2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35808423

ABSTRACT

Understanding the perception of emotions or affective states in humans is important for developing emotion-aware systems that work in realistic scenarios. In this paper, the perception of emotions in naturalistic human interaction (audio-visual data) is studied using perceptual evaluation. For this purpose, a naturalistic audio-visual emotion database collected from TV broadcasts such as soap operas and movies, called the IIIT-H Audio-Visual Emotion (IIIT-H AVE) database, is used. The database consists of audio-alone, video-alone, and audio-visual data in English. Using data of all three modes, perceptual tests are conducted for four basic emotions (angry, happy, neutral, and sad) based on category labeling and for two dimensions, namely arousal (active or passive) and valence (positive or negative), based on dimensional labeling. The results indicated that the participants' perception of emotions differed markedly between the audio-alone, video-alone, and audio-visual data. This finding emphasizes the importance of emotion-specific features compared to commonly used features in the development of emotion-aware systems.


Subjects
Arousal , Emotions , Humans
5.
J Voice ; 2022 Apr 27.
Article in English | MEDLINE | ID: mdl-35490081

ABSTRACT

Automatic voice pathology detection is a research topic that has gained increasing interest in recent years. Although methods based on deep learning are becoming popular, classical pipeline systems based on a two-stage architecture consisting of a feature extraction stage and a classifier stage are still widely used. In these classical detection systems, frame-wise computation of mel-frequency cepstral coefficients (MFCCs) is the most popular feature extraction method. However, no systematic study has been conducted to investigate the effect of the MFCC frame length on automatic voice pathology detection. In this work, we studied the effect of the MFCC frame length in voice pathology detection using three disorders (hyperkinetic dysphonia, hypokinetic dysphonia, and reflux laryngitis) from the Saarbrücken Voice Disorders (SVD) database. The detection performance was compared between speaker-dependent and speaker-independent scenarios as well as between speaking-task-dependent and speaking-task-independent scenarios. The support vector machine (SVM), the most widely used classifier in the study area, was used as the classifier. The results show that the detection accuracy depended on the MFCC frame length in all the scenarios studied. The best detection accuracy was obtained with an MFCC frame length of 500 ms and a shift of 5 ms.
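
A rough sketch of the frame-length comparison, assuming librosa MFCCs and a scikit-learn SVM; pooling the MFCCs into utterance-level statistics is an assumption made here for brevity, not the paper's exact pipeline.

```python
# Hedged sketch: MFCCs with a long analysis window (e.g., 500 ms) and a
# 5 ms shift, pooled into one feature vector per recording.
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_stats(y, sr, frame_ms=500, shift_ms=5, n_mfcc=13):
    win = int(sr * frame_ms / 1000)
    hop = int(sr * shift_ms / 1000)
    n_fft = int(2 ** np.ceil(np.log2(win)))          # FFT size >= window
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, n_fft=n_fft,
                                win_length=win, hop_length=hop)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# clf = SVC(kernel="rbf").fit(
#     [mfcc_stats(y, sr, frame_ms=500) for y, sr in recordings], labels)
```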

6.
J Voice ; 35(5): 807.e1-807.e23, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32305174

ABSTRACT

Twang-like vocal qualities have been related to a megaphone-like shape of the vocal tract (epilaryngeal tube and pharyngeal narrowing, and a wider mouth opening), low-frequency spectral changes, and tighter and/or increased vocal fold adduction. Previous studies have focused mainly on loud and high-pitched singing or comfortable low-pitched spoken vowels, or have been based on modeling and simulation. No data are available on twang-like voices in loud, low-pitched singing. PURPOSE: This study investigates the possible contribution of the lower and upper vocal tract configurations during loud twang-like singing on high and low pitches in a real subject. METHODS: One male contemporary commercial music singer produced a sustained vowel [a:] at his habitual speaking pitch (B2) and loudness. The same vowel was also produced in a loud twang-like singing voice at a high (G4) and a low pitch (B2). Computerized tomography, acoustic analysis, inverse filtering, and audio-perceptual assessments were performed. RESULTS: Both loud twang-like voices showed a megaphone-like shape of the vocal tract, which was more notable at the low pitch. Low-frequency spectral changes, a peak of sound energy around 3 kHz, and increased vocal fold adduction were also found. The results agreed with the audio-perceptual evaluation. CONCLUSIONS: Loud twang-like phonation seems to be mainly related to low-frequency spectral changes (under 2 kHz) and a more compact formant structure. Twang-like qualities seem to require different degrees of twang-related vocal tract adjustments when phonating at different pitches. A wider mouth opening, pharyngeal constriction, and epilaryngeal tube narrowing may be helpful strategies for maximum power transfer and improved vocal economy in loud contemporary commercial music singing and potentially in loud speech. Further studies should focus on vocal efficiency and vocal economy measurements using modeling and simulation based on real singers' data.


Subjects
Singing , Voice , Acoustics , Humans , Male , Phonation , Voice Quality
7.
J Acoust Soc Am ; 148(2): EL141, 2020 08.
Article in English | MEDLINE | ID: mdl-32873022

ABSTRACT

Voiced speech is generated by the glottal flow interacting with vocal fold vibrations. However, the details of vibrations in the anterior-posterior direction (the so-called zipper-effect) and their correspondence with speech and other glottal signals are not fully understood due to challenges in direct measurements of vocal fold vibrations. In this proof-of-concept study, the potential of four parameters extracted from high-speed videoendoscopy (HSV), electroglottography, and speech signals to indicate the presence of a zipper-type glottal opening is investigated. Comparison with manual labeling of the HSV videos highlighted the importance of multiple parameter-signal pairs in indicating the presence of a zipper-type glottal opening.


Subjects
Phonation , Voice , Glottis , Speech , Vibration , Vocal Cords
8.
Int J Psychophysiol ; 147: 72-82, 2020 01.
Article in English | MEDLINE | ID: mdl-31743699

ABSTRACT

The purpose of this study was to examine the efficacy of three days of listen-and-repeat training on the perception and production of vowel duration contrasts. Generalization to an untrained vowel and a non-linguistic sound was also examined. Twelve adults underwent four sessions of listen-and-repeat training over two days with the pseudoword contrast /tite/-/ti:te/. Generalization effects were examined with another vowel contrast, /tote/-/to:te/, and a sinusoidal tone pair as a non-linguistic stimulus. Learning effects were measured with psychophysiological (EEG) event-related potentials (mismatch negativity and N1), behavioral discrimination tasks, and production tasks. The results showed clear improvement in all perception measurements for the trained stimuli. The training also generalized to the untrained vowel, eliciting an N1 response, and affected the behavioral perception of the non-linguistic stimuli. The MMN response for the untrained linguistic stimuli, however, did not increase. These findings suggest that the training was able to increase the sensitivity of preattentive auditory duration discrimination, but that phoneme-specific spectral information may also be needed to shape the neural representation of phoneme categories.


Subjects
Discrimination, Psychological/physiology , Evoked Potentials/physiology , Multilingualism , Practice, Psychological , Psycholinguistics , Speech Perception/physiology , Speech/physiology , Adult , Electroencephalography , Female , Humans , Male , Young Adult
9.
J Acoust Soc Am ; 146(5): EL418, 2019 11.
Article in English | MEDLINE | ID: mdl-31795672

ABSTRACT

Existing studies on the classification of phonation types in singing use voice source features and mel-frequency cepstral coefficients (MFCCs), which show poor performance due to the high pitch of singing. In this study, high-resolution spectra obtained using the zero-time windowing (ZTW) method are utilized to capture the effect of the voice excitation. ZTW does not require computing the source-filter decomposition (which is needed by many voice source features), which makes it robust to high pitch. For the classification, the study proposes extracting MFCCs from the ZTW spectrum. The results show that the proposed features give a clear improvement in classification accuracy compared to the existing features.
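
The sketch below illustrates only the second stage of such a pipeline: converting an externally supplied high-resolution magnitude spectrum (standing in for the ZTW spectrum, whose computation is not detailed in the abstract) into MFCC-style coefficients with a mel filterbank and a DCT; the filterbank size and coefficient count are placeholder values.

```python
# Hedged sketch: MFCC-style coefficients from a given magnitude spectrum.
import numpy as np
import scipy.fftpack
import librosa

def mfcc_from_spectrum(mag_spectrum, sr, n_fft, n_mels=40, n_mfcc=13):
    """mag_spectrum: magnitude spectrum of length 1 + n_fft // 2."""
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    log_mel = np.log(mel_fb @ mag_spectrum + 1e-10)     # mel-band log energies
    return scipy.fftpack.dct(log_mel, axis=0, norm="ortho")[:n_mfcc]
```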

10.
J Acoust Soc Am ; 146(4): 2501, 2019 10.
Article in English | MEDLINE | ID: mdl-31671985

ABSTRACT

In the production of voiced speech, glottal flow skewing refers to the tilting of the glottal flow pulses to the right, often characterized as a delay of the peak, compared to the glottal area. In the past four decades, several studies have addressed this phenomenon by modeling voice production with analog circuits and computer simulations. However, previous studies measuring flow skewing in the natural production of speech are sparse, and they contain little quantitative data about the degree of skewing between flow and area. In the current study, flow skewing was measured from the natural production of 40 vowel utterances produced by 10 speakers. Glottal flow was measured from speech using glottal inverse filtering, and glottal area was captured with high-speed videoendoscopy. The estimated glottal flow and area waveforms were parameterized with four robust parameters that measure pulse skewness quantitatively. Statistical tests for all four parameters showed that the flow pulse was significantly more skewed to the right than the area pulse. Hence, this study corroborates the existence of flow skewing using measurements from natural speech production. In addition, the study yields quantitative data about pulse skewness in simultaneously measured glottal flow and area in the natural production of speech.
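
As a worked illustration of one possible skewness measure (not necessarily one of the four parameters used in the paper), the ratio of opening-phase to closing-phase duration within a single pulse quantifies how far the peak is delayed:

```python
# Hedged sketch: a speed-quotient-style skewness measure for one pulse.
import numpy as np

def speed_quotient(pulse):
    """Opening-phase duration divided by closing-phase duration."""
    peak = int(np.argmax(pulse))
    opening = peak                      # samples from pulse onset to peak
    closing = len(pulse) - peak         # samples from peak to pulse end
    return opening / max(closing, 1)

# A right-skewed glottal flow pulse (delayed peak) yields a larger quotient
# than the corresponding glottal area pulse.
```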


Subjects
Glottis/physiology , Phonation/physiology , Speech/physiology , Voice/physiology , Adult , Female , Humans , Male , Middle Aged , Speech Acoustics , Speech Production Measurement
11.
J Acoust Soc Am ; 142(3): 1542, 2017 09.
Article in English | MEDLINE | ID: mdl-28964072

ABSTRACT

Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially when computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on both its past and future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
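
A minimal sketch of the underlying idea of weighted forward-backward prediction: a single coefficient set minimizes the weighted sum of forward and backward prediction errors, so a hard-limited (QCP-style) weight discards fewer usable samples than forward-only prediction. The actual QCP weighting function is not reproduced here.

```python
# Hedged sketch: temporally weighted forward-backward linear prediction.
import numpy as np

def wlp_forward_backward(x, order, w):
    """Return inverse-filter coefficients [1, -a_1, ..., -a_p]."""
    N = len(x)
    Xf = np.zeros((N - order, order))    # forward regressors  x[n-k]
    Xb = np.zeros((N - order, order))    # backward regressors x[n+k]
    for k in range(1, order + 1):
        Xf[:, k - 1] = x[order - k:N - k]
        Xb[:, k - 1] = x[k:N - order + k]
    yf, yb = x[order:], x[:N - order]    # forward / backward targets
    wf, wb = w[order:], w[:N - order]    # weights aligned with the targets
    R = (Xf * wf[:, None]).T @ Xf + (Xb * wb[:, None]).T @ Xb
    r = (Xf * wf[:, None]).T @ yf + (Xb * wb[:, None]).T @ yb
    a = np.linalg.solve(R, r)
    return np.concatenate(([1.0], -a))

# w = np.ones(len(x)) recovers ordinary forward-backward LP; a QCP-style
# weight would instead down-weight samples around the glottal excitation.
```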

12.
J Acoust Soc Am ; 141(4): EL327, 2017 04.
Article in English | MEDLINE | ID: mdl-28464691

ABSTRACT

Estimation of the spectral tilt of the glottal source has several applications in speech analysis and modification. However, direct estimation of the tilt from telephone speech is challenging due to vocal tract resonances and distortion caused by speech compression. In this study, a deep neural network is used for the tilt estimation from telephone speech by training the network with tilt estimates computed by glottal inverse filtering. An objective evaluation shows that the proposed technique gives more accurate estimates for the spectral tilt than previously used techniques that estimate the tilt directly from telephone speech without glottal inverse filtering.
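
A hedged sketch of the regression idea: a small feed-forward network maps frame-level features of (narrowband) telephone speech to a scalar spectral-tilt target that, during training, would come from glottal inverse filtering of the corresponding wideband signal. The feature choice, network size, and tilt unit are assumptions.

```python
# Hedged sketch: DNN regression of glottal spectral tilt from telephone speech.
import torch
import torch.nn as nn

class TiltRegressor(nn.Module):
    def __init__(self, n_features=20):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),            # scalar tilt estimate per frame
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

# loss = nn.MSELoss()(TiltRegressor()(telephone_features), gif_tilt_targets)
```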


Subjects
Acoustics , Deep Learning , Glottis/physiology , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Production Measurement/methods , Telephone , Voice Quality , Female , Humans , Male , Phonation , Sound Spectrography
13.
J Voice ; 31(4): 508.e11-508.e16, 2017 Jul.
Article in English | MEDLINE | ID: mdl-27856093

ABSTRACT

OBJECTIVE: This study aimed to assess teachers' voice symptoms and noise in schools in Upper Egypt and to study possible differences between teachers in public and private schools. STUDY DESIGN: A cross-sectional analysis via questionnaire was carried out. METHODS: Four schools were chosen randomly to represent primary and preparatory schools as well as public and private ones. In these schools, a total of 140 teachers participated in the study. They answered a questionnaire on vocal and throat symptoms and their effects on working and social activities, as well as on the levels and effects of experienced noise. RESULTS: Of all teachers, 47.9% reported moderate or severe dysphonia within the last 6 months, and 21.4% reported daily dysphonia. All teachers reported frequently experiencing noise, with 82.2% experiencing it sometimes or always during the working day, resulting in a need to raise their voices. Teachers in public schools experienced more noise from nearby classes. CONCLUSION: The working conditions and vocal health of teachers in Upper Egypt, especially in public schools, are alarming.


Subjects
Noise , Occupational Diseases/epidemiology , Occupational Exposure/statistics & numerical data , Schools/statistics & numerical data , Voice Disorders/epidemiology , Adult , Cross-Sectional Studies , Egypt/epidemiology , Female , Humans , Loudness Perception , Male , Middle Aged , Young Adult
14.
Neuroimage ; 125: 131-143, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26477651

ABSTRACT

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences. In the experiments, stimuli were initially presented to the subject in a distorted form, after which undistorted versions of the stimuli were presented. Finally, the original distorted stimuli were presented once more. The resulting increase in intelligibility observed for the second presentation of the distorted stimuli depended on the complexity of the stimulus: vowels remained unintelligible (behaviorally measured intelligibility 27%) whereas the intelligibility of the words increased from 19% to 45% and that of the sentences from 31% to 65%. This increase in the intelligibility of the degraded stimuli was reflected as an enhancement of activity in the auditory cortex and surrounding areas at early latencies of 130-160 ms. In the same regions, increasing stimulus complexity attenuated mean currents at latencies of 130-160 ms, whereas at latencies of 200-270 ms the mean currents increased. These modulations in cortical activity may reflect feedback from top-down mechanisms enhancing the extraction of information from speech. The behavioral results suggest that memory-driven expectancies can have a significant effect on speech comprehension, especially in acoustically adverse conditions where the bottom-up information is decreased.


Subjects
Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Speech Intelligibility/physiology , Young Adult
15.
Eur J Neurosci ; 43(6): 738-50, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26647120

ABSTRACT

Effective speech sound discrimination at preschool age is known to be a prerequisite for the development of language skills and later literacy acquisition. However, the speech specificity of cortical discrimination skills in small children is currently not known, as previous research has either studied speech functions without comparison with non-speech sounds, or used much simpler sounds, such as harmonic or sinusoidal tones, as control stimuli. We investigated the cortical discrimination of five syllable features (consonant, vowel, vowel duration, fundamental frequency, and intensity), covering both segmental and prosodic phonetic changes, and their acoustically matched non-speech counterparts in 63 typically developing 6-year-old children, using a multi-feature mismatch negativity (MMN) paradigm. Each of the five investigated features elicited a unique pattern of differentiating negativities: an early differentiating negativity, the MMN, and a late differentiating negativity. All five studied features showed speech-related enhancement of at least one of these responses, suggesting experience-related neural commitment in both phonetic and prosodic speech processing. In addition, the cognitive performance and language skills of the children were tested extensively. The speech-related neural enhancement was positively associated with the level of performance in several neurocognitive tasks, indicating a relationship between the successful establishment of cortical memory traces for speech and enhanced cognitive functioning. The results contribute to the understanding of typical developmental trajectories of linguistic versus non-linguistic auditory skills and provide a reference for future studies investigating deficits in language-related disorders at preschool age.


Subjects
Cerebral Cortex/physiology , Cognition , Discrimination, Psychological , Speech Perception , Cerebral Cortex/growth & development , Child, Preschool , Female , Humans , Language Development , Male
16.
J Acoust Soc Am ; 137(6): 3356-65, 2015 Jun.
Article in English | MEDLINE | ID: mdl-26093425

ABSTRACT

Natural auditory scenes often consist of several sound sources overlapping in time, but separated in space. Yet, location is not fully exploited in auditory grouping: spatially separated sounds can get perceptually fused into a single auditory object and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they were deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.


Subjects
Auditory Cortex/physiology , Auditory Pathways/physiology , Sound Localization , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Humans , Magnetoencephalography , Male , Psychoacoustics , Signal Detection, Psychological , Sound Spectrography
17.
Front Hum Neurosci ; 8: 279, 2014.
Article in English | MEDLINE | ID: mdl-24860470

ABSTRACT

According to the Perceptual Assimilation Model (PAM), articulatory similarity/dissimilarity between sounds of the second language (L2) and the native language (L1) governs L2 learnability in adulthood and predicts L2 sound perception by naïve listeners. We performed behavioral and neurophysiological experiments on two groups of university students in the first and fifth years of the English language curriculum and on a group of naïve listeners. Categorization and discrimination tests, as well as the mismatch negativity (MMN) brain response to L2 sound changes, showed that the discriminatory capabilities of the students did not significantly differ from those of the naïve subjects. In line with the PAM, we extend the findings of previous behavioral studies by showing that, at the neural level, classroom instruction in adulthood relies on assimilation of L2 vowels to L1 phoneme categories and does not trigger improvement in L2 phonetic discrimination. Implications for L2 classroom teaching practices are discussed.

18.
Dev Neuropsychol ; 38(8): 550-66, 2013.
Article in English | MEDLINE | ID: mdl-24219695

ABSTRACT

Identifying children at risk for reading problems or dyslexia at kindergarten age could improve support for beginning readers. Brain event-related potentials (ERPs) were measured for temporally complex pseudowords and corresponding non-speech stimuli in 6.5-year-old children who participated in behavioral literacy tests again at 9 years of age in the second grade. Children who had reading problems at school age had larger N250 responses to speech and non-speech stimuli, particularly over the left hemisphere. The brain responses also correlated with reading skills. The results suggest that atypical auditory and speech processing is a neural-level risk factor for future reading problems. [Supplementary material is available for this article. Go to the publisher's online edition of Developmental Neuropsychology for the following free supplemental resources: sound files used in the experiments; three speech sounds and corresponding non-speech sounds with short, intermediate, and long gaps.]


Subjects
Dyslexia/diagnosis , Evoked Potentials, Auditory/physiology , Reading , Speech Perception/physiology , Acoustic Stimulation , Analysis of Variance , Brain , Brain Mapping , Case-Control Studies , Child , Dyslexia/physiopathology , Dyslexia/psychology , Electroencephalography , Female , Humans , Male , Phonetics , Speech
19.
Cogn Neurosci ; 4(2): 99-106, 2013.
Article in English | MEDLINE | ID: mdl-24073735

ABSTRACT

This study evaluated whether a linguistic multi-feature paradigm with five types of speech-sound changes and novel sounds is an eligible neurophysiological measure of central auditory processing in toddlers. Participants were 18 typically developing 2-year-old children. Syllable stimuli elicited significant obligatory responses, and syllable changes elicited significant mismatch negativity (MMN) responses, which suggests that toddlers can discriminate auditory features from an alternating speech-sound stream. The MMNs were lateralized similarly to what has been found earlier in adults. Furthermore, novel sounds elicited a significant novelty P3 response. Thus, the linguistic multi-feature paradigm with novel sounds is feasible for the concurrent investigation of the different stages of central auditory processing in 2-year-old children, ranging from pre-attentive encoding and discrimination of stimuli to attentional mechanisms, in speech-like research settings. In conclusion, this time-efficient paradigm can be applied to investigate central auditory development and impairments in toddlers, in whom developmental changes of speech-related cortical functions and language are rapid.


Subjects
Evoked Potentials, Auditory/physiology , Phonetics , Speech Perception/physiology , Acoustic Stimulation/methods , Child, Preschool , Electroencephalography , Feasibility Studies , Humans , Infant
20.
J Acoust Soc Am ; 134(2): 1295-313, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23927127

ABSTRACT

All-pole modeling is a widely used formant estimation method, but its performance is known to deteriorate for high-pitched voices. In order to address this problem, several all-pole modeling methods robust to fundamental frequency have been proposed. This study compares five such previously known methods and introduces a new technique, Weighted Linear Prediction with Attenuated Main Excitation (WLP-AME). WLP-AME utilizes temporally weighted linear prediction (LP) in which the square of the prediction error is multiplied by a given parametric weighting function. The weighting downgrades the contribution of the main excitation of the vocal tract in optimizing the filter coefficients. Consequently, the resulting all-pole model is affected more by the characteristics of the vocal tract, leading to less biased formant estimates. In experiments on synthetic vowels created with a physical modeling approach, WLP-AME yielded improved formant frequencies for high-pitched sounds in comparison to the previously known methods (e.g., the relative error in the first formant of the vowel [a] decreased from 11% to 3% when conventional LP was replaced with WLP-AME). Experiments conducted on natural vowels indicate that the formants detected by WLP-AME changed in a more regular manner between repetitions at different pitches than those computed by conventional LP.
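
To make the role of the weighting concrete, the sketch below constructs an AME-style weight that dips toward a small floor around each glottal closure instant, so the main excitation contributes less when the weighted prediction error is minimized; the ramp length and floor value are placeholder assumptions, not the paper's settings.

```python
# Hedged sketch: an attenuated-main-excitation (AME) style LP weight.
import numpy as np

def ame_weight(n_samples, gcis, dip_len=30, floor=0.05):
    """Unit weight everywhere except linear dips to `floor` around GCIs."""
    w = np.ones(n_samples)
    ramp_down = np.linspace(1.0, floor, dip_len)
    ramp_up = np.linspace(floor, 1.0, dip_len)
    for gci in gcis:
        lo, hi = max(gci - dip_len, 0), min(gci + dip_len, n_samples)
        w[lo:gci] = ramp_down[dip_len - (gci - lo):]
        w[gci:hi] = ramp_up[:hi - gci]
    return w

# The weight enters the LP criterion as sum_n w[n] * e[n]**2, so the
# correlation terms of the normal equations are simply scaled by w.
```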


Subjects
Glottis/physiology , Linear Models , Phonation , Phonetics , Pitch Perception , Speech Acoustics , Voice Quality , Adult , Algorithms , Biomechanical Phenomena , Child, Preschool , Computer Simulation , Female , Glottis/anatomy & histology , Humans , Male , Numerical Analysis, Computer-Assisted , Pattern Recognition, Automated , Pressure , Signal Processing, Computer-Assisted , Sound Spectrography , Speech Production Measurement , Time Factors , Vocal Cords/physiology