Results 1 - 6 of 6
1.
Folia Phoniatr Logop; 67(2): 83-9, 2015.
Article in English | MEDLINE | ID: mdl-26562816

ABSTRACT

OBJECTIVE: It has been shown previously that congenitally blind francophone adults obtain higher auditory discrimination scores than sighted adults. It is unclear, however, whether blind speakers, compared to their sighted peers, also display an increased ability to detect anticipatory acoustic cues. In this paper, this ability is investigated in both speaker groups.
METHODS: Using the gating paradigm, /izi/ and /izy/ sequences were truncated to include a variable duration of the vowel. The truncated sequences were used as stimuli in an auditory identification test administered to 17 congenitally blind adults (9 females, 8 males) and 17 sighted controls, whose task was to identify the second vowel of the sequence.
RESULTS: All participants could reliably identify the rounded vowel before its acoustic onset, but identification slopes were steeper for sighted listeners than for blind listeners.
CONCLUSION: The difference in identification slopes suggests that sighted listeners track the decreasing values of the frication noise more finely than blind listeners do.
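The gating paradigm described in the METHODS can be sketched as follows. This is a minimal illustration, not the study's actual stimulus-preparation code: the file names, audio parameters, and 20-ms gate step are assumptions.

```python
# Hypothetical sketch of gated-stimulus preparation: a recorded
# sequence such as /izi/ or /izy/ is truncated at successive "gates"
# so that each stimulus contains progressively more of the signal.
import wave

def make_gates(path, gate_ms=20, out_prefix="gate"):
    """Write truncated copies of `path`, each `gate_ms` longer than the last."""
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())
    # bytes per gate step = frames per step * bytes per frame
    step = int(params.framerate * gate_ms / 1000) * params.sampwidth * params.nchannels
    names = []
    for i, end in enumerate(range(step, len(frames) + 1, step), start=1):
        name = f"{out_prefix}_{i:02d}.wav"
        with wave.open(name, "wb") as dst:
            dst.setparams(params)
            dst.writeframes(frames[:end])  # header nframes is patched on close
        names.append(name)
    return names
```

In an identification test, listeners would then hear the gates in order of increasing duration and guess the upcoming vowel at each gate.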


Subject(s)
Anticipation, Psychological , Blindness/congenital , Blindness/physiopathology , Cues , Phonetics , Speech Acoustics , Speech Perception/physiology , Adult , Female , Humans , Male , Middle Aged , Sound Spectrography
2.
J Acoust Soc Am; 133(4): EL249-55, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23556687

ABSTRACT

In this paper, anticipatory coarticulation of the lip protrusion and constriction gestures is investigated in speakers with visual deprivation. Audio-visual recordings of 11 congenitally blind French speakers producing [V−round C−round V+round] sequences were made with a lip-shape tracking system, and lip protrusion and constriction values and their relative timings were analyzed. Results show that, despite the reduced magnitude of lip protrusion and constriction area in blind speakers, the timing of the anticipatory gestures is appropriately modeled by the Movement Expansion Model [Abry and Lallouache (1995a). Bul. de la Comm. Parlée 3, 85-99; (1995b). Proceedings of ICPhS, pp. 152-155; Noiray et al. (2011). J. Acoust. Soc. Am. 129, 340-349], which predicts lawful anticipatory behavior that expands linearly as the intervocalic consonant interval increases.


Subject(s)
Anticipation, Psychological , Blindness/psychology , Gestures , Linear Models , Lip/physiology , Phonetics , Speech Acoustics , Voice Quality , Adult , Audiometry, Pure-Tone , Audiometry, Speech , Biomechanical Phenomena , Blindness/congenital , Female , Humans , Lip/anatomy & histology , Male , Middle Aged , Reproducibility of Results , Signal Processing, Computer-Assisted , Sound Spectrography , Time Factors , Video Recording
3.
Exp Brain Res; 227(2): 275-88, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23591689

ABSTRACT

The concept of an internal forward model that internally simulates the sensory consequences of an action is a central idea in speech motor control. Consistent with this hypothesis, silent articulation has been shown to modulate activity of the auditory cortex and to improve the auditory identification of concordant speech sounds, when embedded in white noise. In the present study, we replicated and extended this behavioral finding by showing that silently articulating a syllable in synchrony with the presentation of a concordant auditory and/or visually ambiguous speech stimulus improves its identification. Our results further demonstrate that, even in the case of perfect perceptual identification, concurrent mouthing of a syllable speeds up the perceptual processing of a concordant speech stimulus. These results reflect multisensory-motor interactions during speech perception and provide new behavioral arguments for internally generated sensory predictions during silent speech production.


Subject(s)
Language , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Decision Making , Female , Humans , Male , Neuropsychological Tests , Phonetics , Photic Stimulation , Reaction Time , Speech Production Measurement , Time Factors , Young Adult
4.
J Acoust Soc Am; 129(1): 340-9, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21303015

ABSTRACT

The modeling of anticipatory coarticulation has been the subject of longstanding debate for more than 40 years. Empirical investigations in the articulatory domain have converged toward two extreme modeling approaches: maximal anticipation (the Look-ahead model) or a fixed pattern (the Time-locked model). However, empirical support for either model has been hardly conclusive, both within and across languages. The present study tested the temporal organization of vocalic anticipatory coarticulation of the rounding feature in [i] to [u] transitions for adult speakers of American English and Canadian French. Articulatory data were recorded synchronously, using an Optotrak for lip protrusion and a dedicated lip-shape tracking system for lip constriction. Results show that (i) protrusion is an inconsistent parameter for tracking anticipatory rounding gestures across individuals, particularly in English; (ii) labial constriction (between-lip area) is a more reliable correlate, allowing for the description of vocalic rounding in both languages; and (iii) when tested on the constriction component, speakers show a lawful anticipatory behavior that expands linearly as the intervocalic consonant interval increases from 0 to 5 consonants. The Movement Expansion Model of Abry and Lallouache [(1995a). Bul. de la Comm. Parlée 3, 85-99; (1995b). Proceedings of ICPhS 4, 152-155] predicts exactly such regular behavior, i.e., lawful variability with a speaker-specific expansion rate that is not language-specific.
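The Movement Expansion Model's core prediction, as summarized above, is a straight-line relation between the intervocalic consonant interval and the extent of the anticipatory gesture. A minimal numeric sketch, with an expansion rate and durations that are illustrative placeholders rather than values fitted in the study:

```python
# Sketch of the Movement Expansion Model's linear claim: the
# anticipatory interval grows linearly with the duration of the
# intervocalic consonant span, at a speaker-specific (but not
# language-specific) expansion rate. All parameter values are invented.

def anticipation_interval(consonant_span_ms: float, rate: float,
                          intercept_ms: float = 0.0) -> float:
    """Predicted duration of the anticipatory gesture, in ms."""
    return rate * consonant_span_ms + intercept_ms

# A hypothetical speaker with expansion rate 0.8:
spans = [0, 100, 200, 300]  # consonant-span durations (ms)
predicted = [anticipation_interval(t, rate=0.8) for t in spans]
print(predicted)  # → [0.0, 80.0, 160.0, 240.0]
```

Under this view, a Look-ahead model would instead predict maximal anticipation regardless of span, and a Time-locked model a fixed anticipation interval; the linear, speaker-specific slope is what distinguishes the Movement Expansion Model.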


Subject(s)
Anticipation, Psychological , Language , Lip/physiology , Phonetics , Speech Acoustics , Adult , Biomechanical Phenomena , Female , Humans , Image Processing, Computer-Assisted , Male , Models, Biological , Time Factors , Video Recording , Young Adult
5.
Percept Psychophys; 68(3): 458-74, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16900837

ABSTRACT

Perceptual changes are experienced during rapid and continuous repetition of a speech form, leading to an auditory illusion known as the verbal transformation effect. Although verbal transformations are considered to reflect mainly the perceptual organization and interpretation of speech, the present study was designed to test whether speech production constraints participate in the emergence of verbal representations. With this goal in mind, we examined whether variations in the articulatory cohesion of repeated nonsense words, specifically in the temporal relationships between articulatory events, could lead to perceptual asymmetries in verbal transformations. The first experiment measured variations in timing relations between two consonantal gestures embedded in various nonsense syllables in a repetitive speech production task. In the second experiment, French participants repeatedly uttered these syllables while searching for verbal transformations. Syllable transformation frequencies followed the temporal clustering between consonantal gestures: the more synchronized the gestures, the more stable and attractive the syllable. In the third experiment, which involved a covert repetition mode, the pattern was maintained without external speech movements. However, when a purely perceptual condition was used in a fourth experiment, the previously observed perceptual asymmetries of verbal transformations disappeared. These experiments demonstrate an asymmetric bias in the verbal transformation effect linked to articulatory control constraints. The persistence of this effect from an overt to a covert repetition procedure provides evidence that articulatory stability constraints originating from the action system may be involved in auditory imagery, while the absence of the bias during a purely auditory procedure rules out perceptual mechanisms as an explanation of the observed asymmetries.


Subject(s)
Phonetics , Speech Perception , Verbal Behavior , Humans , Illusions , Sound Spectrography , Speech
6.
Neuroimage; 23(3): 1143-51, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15528113

ABSTRACT

We used functional magnetic resonance imaging (fMRI) to localize the brain areas involved in the imagery analogue of the verbal transformation effect, that is, the perceptual changes that occur when a speech form is cycled in rapid and continuous mental repetition. Two conditions were contrasted: a baseline condition involving the simple mental repetition of speech sequences, and a verbal transformation condition involving the mental repetition of the same items with an active search for verbal transformation. Our results reveal a predominantly left-lateralized network of cerebral regions activated by the verbal transformation task, similar to the neural network involved in verbal working memory: the left inferior frontal gyrus, the left supramarginal gyrus, the left superior temporal gyrus, the anterior part of the right cingulate cortex, and the cerebellar cortex, bilaterally. Our results strongly suggest that the imagery analogue of the verbal transformation effect, which requires percept analysis, form interpretation, and attentional maintenance of verbal material, relies on a working memory module sharing common components of speech perception and speech production systems.


Subject(s)
Brain/physiology , Speech/physiology , Verbal Behavior/physiology , Adult , Echo-Planar Imaging , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging , Male , Memory, Short-Term/physiology , Models, Statistical , Psychomotor Performance/physiology