Results 1 - 5 of 5
1.
J Acoust Soc Am ; 108(4): 1421-34, 2000 Oct.
Article in English | MEDLINE | ID: mdl-11051468

ABSTRACT

A decomposition algorithm that uses a pitch-scaled harmonic filter was evaluated using synthetic signals and applied to mixed-source speech, spoken by three subjects, to separate the voiced and unvoiced parts. Pulsing of the noise component was observed in voiced frication, which was analyzed by complex demodulation of the signal envelope. The timing of the pulsation, represented by the phase of the anharmonic modulation coefficient, showed a step change during a vowel-fricative transition corresponding to the change in location of the noise source within the vocal tract. Analysis of fricatives [see text] demonstrated a relationship between steady-state phase and place, and f0 glides confirmed that the main cause was a place-dependent delay.
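The core measurement in this abstract — complex demodulation of the signal envelope at f0, with the phase of the resulting coefficient giving the timing of the noise pulsation — can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the sample rate, f0, modulation depth, and the synthetic amplitude-modulated noise standing in for the anharmonic component are all assumed values.

```python
import numpy as np

fs = 16000.0          # sample rate in Hz (assumed)
f0 = 125.0            # assumed fundamental frequency in Hz
t = np.arange(0, 0.2, 1.0 / fs)

# Synthetic "unvoiced" component: white noise whose amplitude is modulated
# at f0 with a known phase, standing in for pulsed voiced frication.
true_phase = 0.8
noise = np.random.default_rng(0).standard_normal(t.size)
signal = (1.0 + 0.5 * np.cos(2 * np.pi * f0 * t + true_phase)) * noise

# Envelope via squaring and short-window smoothing, then demodulation at f0:
# the phase angle of the complex coefficient recovers the pulsation timing.
env = np.convolve(signal ** 2, np.ones(64) / 64, mode="same")
coeff = np.mean(env * np.exp(-2j * np.pi * f0 * t))
phase = np.angle(coeff)
```

With a place-dependent delay, as the abstract describes, this phase would shift between fricatives; here it simply recovers the phase built into the synthetic signal.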


Subject(s)
Phonetics , Sound Spectrography , Speech Acoustics , Voice Quality , Humans
2.
J Acoust Soc Am ; 104(2 Pt 1): 1075-84, 1998 Aug.
Article in English | MEDLINE | ID: mdl-9714926

ABSTRACT

The open end correction coefficient (OECC) of acoustic tubes has been widely investigated under a free-field condition. This study examines OECCs in confined regions, such as side branches within the vocal tract. To do this, a number of mechanical acoustic models are used to examine the effects of the angle of the branch axis and the proximity of the walls of the main tract to the open end of the branch. The OECC is estimated by matching both the peaks and troughs (i.e., spectral maxima and minima) of the computed and measured transfer functions for each model. The results indicate that the OECC of a side branch depends on L/D, where L is the cross dimension of the main tract at the branching point, and D is the branch diameter. For side branches connected to the main tract through a narrow neck, the OECC of each end of the neck is determined using the ratio of the radius of the neck to that of the adjacent section. Two empirical equations for evaluating the OECC within a tract are derived from the present study. Finally, the range of appropriate OECC values for estimating an accurate vocal tract transfer function is discussed, based on the results presented here and morphologic measurements reported previously.
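The role the OECC plays in resonance estimation can be illustrated with the classical closed-open tube formula, where the correction lengthens the branch by OECC times its radius. The numbers below are hypothetical, and 0.6 is the familiar free-field unflanged approximation, not one of the paper's empirical values for confined side branches.

```python
# First resonance of a closed-open side branch with an open end correction.
c = 350.0      # speed of sound in warm, moist air, m/s (assumed)
L = 0.02       # branch length, m (hypothetical)
a = 0.005      # branch radius, m (hypothetical)
oecc = 0.6     # open end correction coefficient, free-field approximation

L_eff = L + oecc * a           # corrected acoustic length
f1 = c / (4 * L_eff)           # quarter-wavelength resonance, Hz
```

The study's point is that in a confined region the appropriate OECC departs from this free-field value, shifting f1 accordingly.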


Subject(s)
Ear/physiology , Larynx/physiology , Nasal Cavity/physiology , Humans , Models, Biological , Neck/anatomy & histology
3.
Eur J Disord Commun ; 30(2): 149-60, 1995.
Article in English | MEDLINE | ID: mdl-7492846

ABSTRACT

Electropalatography (EPG) is a useful tool for investigating tongue dynamics in experimental phonetic research and speech therapy. However, data provided by EPG are a two-dimensional representation in which all absolute positional information is lost. This paper presents an enhanced EPG (eEPG) system which uses digitised palate shape data to display the tongue-palate contact pattern in three dimensions. The palate shapes are obtained using a colour-encoded structured light three-dimensional digitisation system. The three-dimensional palate shape is displayed on a Silicon Graphics workstation as a surface made up of polygons represented by a quadrilateral mesh. EPG contact patterns are superimposed on to the three-dimensional palate shape by displaying the relevant polygons in a different colour. By using this system, differences in shape between individual palates, apparent on visual inspection of the actual palates, are also apparent in the image on screen. Further, methods have been devised for computing absolute distances along paths lying on the palate surface. Combining this with calibrated palate shape data allows accurate measurements to be made between contact locations on the palate. These have been validated with manual measurements. In addition, vocal tract areas in the oral cavity have been estimated by using the absolute measurements on the palate for a given contact pattern, and assuming a flat tongue profile in the uncontacted area.
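The distance computation described here — summing segment lengths along a path lying on the digitised palate surface — can be sketched as below. The vertex coordinates are made-up values in millimetres, and the piecewise-straight path is a simplification of measuring along a polygon mesh.

```python
import math

def path_length(vertices):
    """Sum straight-line segment lengths between consecutive 3-D points."""
    return sum(math.dist(p, q) for p, q in zip(vertices, vertices[1:]))

# Hypothetical path of mesh vertices between two contact locations (mm).
path = [(0.0, 0.0, 0.0), (3.0, 0.0, 4.0), (6.0, 0.0, 8.0)]
length = path_length(path)
```

With calibrated palate shape data, the same per-segment summation yields absolute distances between EPG contact locations.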


Subject(s)
Electrodiagnosis/methods , Image Processing, Computer-Assisted/methods , Palate/physiology , Speech Acoustics , Anthropometry/methods , Female , Humans , Male , Palate/anatomy & histology
4.
J Acoust Soc Am ; 78(5): 1562-7, 1985 Nov.
Article in English | MEDLINE | ID: mdl-4067070

ABSTRACT

High vowels have a higher intrinsic fundamental frequency (F0) than low vowels. This phenomenon has been verified in several languages. However, most studies of intrinsic F0 of vowels have used words either in isolation or bearing the main phrasal stress in a carrier sentence. As a first step towards an understanding of how the intrinsic F0 of vowels interacts with intonation in running speech, this study examined F0 of the vowels [i,a,u] in four sentence positions. The four speakers used for this study showed a statistically significant main effect of intrinsic F0 (high vowels had higher F0). Three of the four speakers also showed an interaction between intrinsic F0 and sentence position such that no significant F0 difference was observed in the unaccented, sentence-final position. The interaction was shown not to be due to vowel neutralization or correlated with changes in the glottal waveform shape, as evidenced by measures of the first formant frequency and spectral slope. Comparison with studies of tone languages and speech of the deaf suggests that both the lack of accent and the lower F0 caused the reduction in the intrinsic F0 difference.


Subject(s)
Speech Acoustics , Speech , Female , Humans , Language , Male
5.
J Acoust Soc Am ; 66(5): 1325-32, 1979 Nov.
Article in English | MEDLINE | ID: mdl-500970

ABSTRACT

A recent study [Olive and Spickenagel, J. Acoust. Soc. Am. 59, 993-996 (1976)] has shown that area parameters derived from linear prediction analysis can be linearly interpolated between dyad boundaries with very little distortion in the resultant synthesized speech. The success of area parameter interpolation raises a question: can other acoustic parameters, such as the power spectrum of the speech waveform, be similarly interpolated? The spectrum is of special interest because speech can be synthesized in real time from spectral parameters on a programmable digital filter. To study this question a speech analysis-synthesis system using spectral parameters (samples of power spectra at different frequencies) was simulated. These parameters were determined from the speech signal at every dyad boundary, and interpolated for intermediate values. Dyad boundaries (representing the limits of transition regions between phonemes) were determined manually. Informal listening tests comparing synthetic speech with and without linear interpolation showed slight degradation in the interpolated speech. This degradation is significantly reduced by using an additional point within the dyad boundaries for interpolation.
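The interpolation scheme described here — computing parameters only at dyad boundaries and linearly interpolating the intermediate frames — can be sketched as follows. The parameter vectors are arbitrary stand-ins for sampled power spectra; the frame count and values are illustrative, not the paper's.

```python
import numpy as np

# Parameter vectors measured at two consecutive dyad boundaries (stand-ins
# for power-spectrum samples at different frequencies).
left = np.array([0.0, 10.0, 4.0])
right = np.array([8.0, 2.0, 4.0])
n_frames = 5   # frames spanning one boundary to the next (assumed)

# Each intermediate frame is a linear blend of the two boundary vectors.
frames = [
    (1 - w) * left + w * right
    for w in np.linspace(0.0, 1.0, n_frames)
]
```

The paper's refinement of adding one extra measured point within the dyad amounts to interpolating piecewise over two shorter spans instead of one.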


Subject(s)
Speech , Acoustics , Hearing Tests , Humans