Results 1 - 20 of 42

1.
J Exp Psychol Gen; 152(6): 1598-1621, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36795429

ABSTRACT

To maintain efficiency during conversation, interlocutors form and retrieve memory representations for the shared understanding, or common ground, that they have with their partner. Here, an online referential communication task (RCT) was used in two experiments to examine whether the strength and type of common ground between dyads influence their ability to form and recall referential labels for images. Results from both experiments show a significant association between the strength of common ground formed between dyads for images during the RCT and their verbatim, but not semantic, recall memory for image descriptions about a week later. Participants who generated the image descriptions during the RCT also showed superior verbatim and semantic recall memory performance. In Experiment 2, a group of friends with pre-existing personal common ground used words significantly more efficiently to describe images during the RCT than a group of strangers without personal common ground. However, personal common ground did not lead to enhanced recall memory performance. Together, these findings provide evidence that individuals can remember some verbatim words and phrases from conversations, and they partially support the theoretical notion that common ground and memory are intricately linked conversational processes. The null findings for semantic recall memory suggest that the structured nature of the RCT may have constrained the types of memory representations that individuals formed during the interaction. Findings are discussed in relation to the multidimensional nature of common ground and the importance of developing more natural conversational tasks for future work.


Subject(s)
Communication; Memory; Humans; Mental Recall; Friends; Cognition
2.
Front Hum Neurosci; 16: 905365, 2022.
Article in English | MEDLINE | ID: mdl-36092651

ABSTRACT

Sensory information, including auditory feedback, is used by talkers to maintain fluent speech articulation. Current models of speech motor control posit that speakers continually adjust their motor commands based on discrepancies between the sensory predictions made by a forward model and the sensory consequences of their speech movements. Here, in two within-subject design experiments, we used a real-time formant manipulation system to explore how strongly speech articulation depends on the accuracy and predictability of auditory feedback. This involved introducing random formant perturbations during vowel production that varied systematically in their location in formant space (Experiment 1) and in their temporal consistency (Experiment 2). Our results indicate that, on average, speakers' responses to auditory feedback manipulations varied with the relevance and degree of the error introduced in the various feedback conditions. In Experiment 1, speakers' average production was not reliably influenced by random perturbations, introduced on every utterance, to the first (F1) and second (F2) formants in various locations of formant space with an overall average of 0 Hz. However, when perturbations were applied with a mean of +100 Hz in F1 and -125 Hz in F2, speakers demonstrated reliable compensatory responses that reflected the average magnitude of the applied perturbations. In Experiment 2, speakers did not significantly compensate for perturbations of varying magnitudes that were held constant for one or three trials at a time. Speakers' average productions did, however, deviate significantly from a control condition when perturbations were held constant for six trials. Within the context of these conditions, our findings provide evidence that the control of speech movements depends, at least in part, on the reliability and stability of the sensory information it receives over time.
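
As a concrete illustration of the two perturbation schedules contrasted above, the sketch below builds a per-utterance random (F1, F2) shift schedule with an overall mean of 0 Hz and a biased variant centred on +100 Hz in F1 and -125 Hz in F2. It is a minimal sketch in Python: the uniform sampling ranges and trial count are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_trials = 100

# Per-utterance random (F1, F2) shifts in Hz; the ranges are assumptions.
shifts = rng.uniform(low=[-150.0, -200.0], high=[150.0, 200.0],
                     size=(n_trials, 2))
shifts -= shifts.mean(axis=0)          # enforce an exact 0 Hz overall average

# Biased schedule: the same random scatter recentred on +100 Hz in F1 and
# -125 Hz in F2, the condition that elicited reliable compensation.
biased = shifts + np.array([100.0, -125.0])

print("zero-mean schedule average (F1, F2):", shifts.mean(axis=0).round(2))
print("biased schedule average (F1, F2):  ", biased.mean(axis=0).round(2))
```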

3.
J Acoust Soc Am; 148(6): 3709, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33379900

ABSTRACT

In this study, both between-subject and within-subject variability in speech perception and speech production were examined in the same set of speakers. Perceptual acuity was determined using an ABX auditory discrimination task, in which speakers made judgments between pairs of syllables on an /ɛ/-to-/æ/ acoustic continuum. Auditory feedback perturbations of the first two formants were implemented in a production task to obtain measures of compensation, normal speech production variability, and vowel spacing. Speakers repeated the word "head" 120 times under varying feedback conditions, with the final Hold phase involving the strongest perturbations of +240 Hz in F1 and -300 Hz in F2. Multiple regression analyses were conducted to determine whether individual differences in compensatory behavior in the Hold phase could be predicted by perceptual acuity, speech production variability, and vowel spacing. Perceptual acuity significantly predicted formant changes in F1, but not in F2. These results are discussed with respect to the importance of using larger sample sizes in the field and of developing new methods to explore feedback processing at the level of individual participants. The potential positive role of variability in speech motor control is also considered.
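
To make the perceptual-acuity measure concrete, here is a minimal Python sketch of fitting a psychometric function to responses pooled along an /ɛ/-to-/æ/ continuum; the seven-step continuum, the response proportions, and the logistic form are illustrative assumptions, not the paper's data or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Psychometric function: P(respond /ae/) at continuum step x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Hypothetical proportions of /ae/ responses at 7 continuum steps,
# pooled over discrimination trials for one listener.
steps = np.arange(1, 8)
p_ae = np.array([0.02, 0.05, 0.20, 0.55, 0.85, 0.96, 0.99])

(x0, k), _ = curve_fit(logistic, steps, p_ae, p0=[4.0, 1.0])
print(f"category boundary ~ step {x0:.2f}, slope k = {k:.2f}")
# A steeper slope (larger k) indicates sharper perceptual acuity, the sort
# of predictor entered into the multiple regression on compensation.
```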

4.
Early Interv Psychiatry; 12(6): 1217-1221, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29235251

ABSTRACT

AIM: Psychotic-like experiences (PLEs) share several risk factors with psychotic disorders and confer greater risk of developing a psychotic disorder. Thus, individuals with PLEs not only comprise a valuable population in which to study the aetiology and premorbid changes associated with psychosis, but also represent a high-risk population that could benefit from clinical monitoring or early intervention efforts. METHOD: We examined the score distribution and factor structure of the current 15-item Community Assessment of Psychic Experiences-Positive Scale (CAPE-P15) in a Canadian sample. The CAPE-P15, which measures current PLEs in the general population, was completed by 1741 university students. RESULTS: The distribution of total scores was positively skewed, and confirmatory factor analysis indicated that a 3-factor structure produced the best fit. CONCLUSION: The CAPE-P15 shows a similar score distribution when administered in Canada as in Australia, and it consistently measures three types of positive PLEs: persecutory ideation, bizarre experiences, and perceptual abnormalities.


Subject(s)
Psychiatric Status Rating Scales/statistics & numerical data; Psychotic Disorders/diagnosis; Adolescent; Adult; Canada; Factor Analysis, Statistical; Female; Humans; Male; Prodromal Symptoms; Risk Factors; Young Adult
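
The two analyses named in the abstract above, a check on the skew of total scores and a three-factor confirmatory model, can be sketched as follows. The item scores are simulated, and the item-to-factor assignments in the lavaan-style specification are placeholders rather than the published CAPE-P15 structure; a specification like this could be fit with an SEM package such as semopy.

```python
import numpy as np
from scipy.stats import skew

# Hypothetical CAPE-P15 item scores (1-4 Likert) for 1741 respondents.
rng = np.random.default_rng(0)
items = rng.choice([1, 2, 3, 4], size=(1741, 15), p=[0.70, 0.20, 0.07, 0.03])
totals = items.sum(axis=1)
print("skewness of total scores:", round(skew(totals), 2))  # positive skew

# Three-factor CFA specification in lavaan-style syntax; which item loads
# on which factor is invented here for illustration.
cfa_spec = """
persecutory =~ i1 + i2 + i3 + i4 + i5
bizarre     =~ i6 + i7 + i8 + i9 + i10
perceptual  =~ i11 + i12 + i13 + i14 + i15
"""
```
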
5.
Multisens Res; 31(1-2): 111-144, 2018 Jan 1.
Article in English | MEDLINE | ID: mdl-31264597

ABSTRACT

Since its discovery 40 years ago, the McGurk illusion has usually been cited as a prototypical case of multisensory binding in humans, and it has been used extensively in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both the phenomenological and the neural level. This calls into question the suitability of the illusion as a tool for quantifying the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in the processing of the McGurk effect, experimenters should be especially cautious when generalizing data generated with McGurk stimuli to matching audiovisual speech events.

6.
J Acoust Soc Am; 142(2): 838, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28863596

ABSTRACT

Previous research has shown that speakers can adapt their speech flexibly as a function of a variety of contextual and task factors. While it is known that speech tasks may play a role in speech motor behavior, it remains to be explored whether the manner in which a speaking action is initiated can modify low-level, automatic control of vocal motor action. In this study, the nature (linguistic vs non-linguistic) and modality (auditory vs visual) of the go signal (i.e., the prompt) were manipulated in an otherwise identical vocal production task. Participants were instructed to produce the word "head" when prompted, and the auditory feedback they received was altered by systematically changing the first formants of the vowel /ɛ/ in real time using a custom signal processing system. Linguistic prompts induced greater corrective responses to the acoustic perturbations than non-linguistic prompts. This suggests that the accepted variance for the intended speech sound decreases when external linguistic templates are provided to the speaker. Overall, this result shows that the automatic correction of vocal errors is influenced by flexible, context-dependent mechanisms.


Subject(s)
Feedback, Sensory; Linguistics; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Acoustics; Adolescent; Adult; Auditory Threshold; Female; Humans; Male; Photic Stimulation; Signal Processing, Computer-Assisted; Speech Production Measurement; Visual Perception; Young Adult
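
A system like the one described above must track formants before it can shift them. A common offline approach is LPC root-solving, sketched below on a synthetic vowel-like frame using librosa's LPC routine. The LPC order, the bandwidth threshold, and the resonator-based test signal are illustrative assumptions; a real-time implementation involves considerably more engineering.

```python
import numpy as np
from scipy import signal
import librosa

def estimate_formants(frame, sr, order=12):
    """Rough formant estimates (Hz) from one vowel frame via LPC roots."""
    a = librosa.lpc(frame.astype(float), order=order)
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]              # one per conjugate pair
    freqs = np.angle(roots) * sr / (2 * np.pi)     # pole angle -> Hz
    bws = -sr * np.log(np.abs(roots)) / np.pi      # pole radius -> bandwidth
    return np.sort(freqs[(freqs > 90) & (bws < 400)])

# Synthetic stand-in for a recorded /ɛ/ frame: white noise passed through
# two resonators near typical /ɛ/ formants (F1 ~ 580 Hz, F2 ~ 1800 Hz).
sr = 10000
frame = np.random.default_rng(0).standard_normal(sr // 2)
for f, bw in [(580, 90), (1800, 110)]:
    r = np.exp(-np.pi * bw / sr)
    frame = signal.lfilter([1.0], [1.0, -2 * r * np.cos(2 * np.pi * f / sr),
                                   r ** 2], frame)

print("estimated formants (Hz):", estimate_formants(frame, sr))
```
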
7.
J Acoust Soc Am; 141(4): 2758, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28464659

ABSTRACT

The interaction of language production and perception has been substantiated by empirical studies in which speakers adjust their speech articulation in response to manipulated auditory feedback of their own voice heard in real time. A recent study by Max and Maffett [(2015). Neurosci. Lett. 591, 25-29] reported an absence of compensation (i.e., auditory-motor learning) for frequency-shifted formants when auditory feedback was delayed by 100 ms. In the present study, the effect of auditory feedback delay was examined when only the first formant was manipulated while auditory feedback was systematically delayed. In Experiment 1, unlike in the earlier report, a small yet significant compensation was observed even with 100 ms of auditory delay. This result suggests that the tolerance for feedback delay depends on the type of auditory error being processed. In Experiment 2, the amount of formant compensation was found to have an inverse linear relationship with the amount of auditory delay. One speculative mechanism that accounts for these results is that, as auditory delay increases, undelayed (and unperturbed) somatosensory feedback is weighted more heavily in the accuracy control of vowel formants.


Subject(s)
Feedback, Sensory; Learning; Motor Activity; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adolescent; Adult; Auditory Threshold; Female; Humans; Noise/adverse effects; Perceptual Masking; Speech Production Measurement; Time Factors; Young Adult
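
The inverse linear relationship reported in Experiment 2 above is the kind of effect a simple linear regression captures. In this minimal sketch the delay levels and compensation magnitudes are invented for illustration; only the negative slope mirrors the reported finding.

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical mean F1 compensation (Hz, opposing the shift) at each delay.
delay_ms = np.array([0, 50, 100, 150, 200])
compensation_hz = np.array([22.0, 18.5, 14.0, 9.5, 6.0])

fit = linregress(delay_ms, compensation_hz)
print(f"slope = {fit.slope:.3f} Hz per ms of delay, r = {fit.rvalue:.2f}")
# A reliably negative slope captures the pattern: the longer the auditory
# delay, the smaller the compensatory formant response.
```
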
8.
J Speech Lang Hear Res; 59(4): 601-615, 2016 Aug 1.
Article in English | MEDLINE | ID: mdl-27537379

ABSTRACT

PURPOSE: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. METHOD: We presented vowel-consonant-vowel utterances visually filtered at a range of spatial frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task, and in Experiment 3 (N = 20) an audiovisual task, while their gaze behavior was monitored with eye-tracking equipment. RESULTS: In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial frequency information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with the accuracy of silent speechreading or the magnitude of the McGurk effect. CONCLUSIONS: The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by the differential influence of high-resolution visual information on the two tasks and by differences in the pattern of gaze.


Subject(s)
Eye Movements; Lipreading; Speech Perception; Visual Perception; Analysis of Variance; Eye Movement Measurements; Eye Movements/physiology; Female; Humans; Male; Young Adult
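
The degradation manipulation in this study amounts to low-pass spatial frequency filtering of the video frames. Below is a minimal sketch using a Gaussian blur with the cutoff expressed in cycles per face; the cutoff values, the cutoff-to-sigma mapping, and the random test image are assumptions, not the paper's actual filter.

```python
import numpy as np
from scipy import ndimage

def lowpass_face(image, cutoff_cpf, face_width_px):
    """Low-pass an image at a cutoff in cycles per face (cpf).

    Treats the cutoff as the frequency-domain std of a Gaussian; this is
    one common convention, not necessarily the filter used in the paper.
    """
    cutoff_cpp = cutoff_cpf / face_width_px        # cycles per pixel
    sigma = 1.0 / (2 * np.pi * cutoff_cpp)         # spatial-domain sigma
    return ndimage.gaussian_filter(image, sigma=sigma)

# Demo on a synthetic 256 x 256 "frame".
frame = np.random.default_rng(0).random((256, 256))
for cpf in (4, 8, 16, 32):                         # coarse -> fine detail kept
    blurred = lowpass_face(frame, cutoff_cpf=cpf, face_width_px=256)
    print(cpf, "cycles/face -> image variance:", round(blurred.var(), 4))
```
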
9.
Atten Percept Psychophys; 78(5): 1472-1487, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27150616

ABSTRACT

The basis for individual differences in the degree to which visual speech input enhances comprehension of acoustically degraded speech is largely unknown. Previous research indicates that fine facial detail is not critical for visual enhancement when auditory information is available; however, these studies did not examine individual differences in the ability to make use of fine facial detail in relation to audiovisual speech perception ability. Here, we compared participants based on their ability to benefit from visual speech information in the presence of a noise-degraded auditory signal, modulating the resolution of the visual signal through low-pass spatial frequency filtering and monitoring gaze behavior. Participants who benefited most from the addition of visual information (high visual gain) were more adversely affected by the removal of high spatial frequency information than participants with low visual gain, for materials with both poor and rich contextual cues (i.e., words and sentences, respectively). Differences as a function of gaze behavior between participants with the highest and lowest visual gains were observed only for words, with the highest-gain participants fixating longer on the mouth region. Our results indicate that individual variance in audiovisual speech-in-noise performance can be accounted for, in part, by better use of fine facial detail extracted from the visual signal and by increased fixation on the mouth region for short stimuli. Thus, for some individuals, audiovisual speech perception may suffer when the visual input (in addition to the auditory signal) is less than perfect.


Subject(s)
Acoustic Stimulation/methods; Photic Stimulation/methods; Speech Perception; Visual Perception; Adolescent; Adult; Comprehension; Cues; Female; Fixation, Ocular; Humans; Individuality; Male; Noise; Spatial Processing; Young Adult
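
The "visual gain" grouping variable above can be illustrated with the classic normalized enhancement formula, gain = (AV - A) / (1 - A), which scales the audiovisual benefit by the room left for improvement. The sketch below applies it to invented proportion-correct scores; the paper's exact metric may differ.

```python
import numpy as np

def visual_gain(av_correct, a_correct):
    """Normalized visual enhancement: (AV - A) / (1 - A)."""
    a = np.asarray(a_correct, dtype=float)
    av = np.asarray(av_correct, dtype=float)
    return (av - a) / (1.0 - a)

# Hypothetical proportion correct in auditory-only (A) vs audiovisual (AV)
# speech-in-noise blocks for four participants.
print(visual_gain(av_correct=[0.80, 0.55, 0.90, 0.40],
                  a_correct=[0.30, 0.35, 0.25, 0.30]).round(2))
```
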
10.
J Acoust Soc Am; 138(1): 413-424, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26233040

ABSTRACT

Past studies have shown that speakers spontaneously adjust their speech acoustics in response to auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/, as in the word "head." Consistent behavioral observations have been reported, and there is lively discussion as to how the production system integrates auditory versus somatosensory feedback to control vowel production. However, different vowels carry different oral sensation and proprioceptive information owing to differences in the degree of lingual contact or jaw openness, which may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the study tested whether "closed vowels" would show less compensatory production than "open vowels" because the strong lingual sensation of closed vowels may richly specify production via somatosensory feedback. Results showed that speakers indeed exhibited less compensatory production with the closed vowels. Thus, sensorimotor control is not fixed across all vowels; rather, it exerts different influences across different vowels.


Subject(s)
Feedback, Sensory/physiology; Phonation/physiology; Phonetics; Speech Acoustics; Adolescent; Adult; Canada; Female; Humans; Language; United States/ethnology; Young Adult
11.
Neuropsychologia; 75: 402-410, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26100561

ABSTRACT

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or they could have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and the contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly to the perception of articulatory gestures and the analysis of audiovisual speech.


Subject(s)
Gestures; Speech Perception; Visual Perception; Acoustic Stimulation; Adult; Cues; Female; Humans; Male; Middle Aged; Noise; Photic Stimulation; Young Adult
12.
J Acoust Soc Am; 135(5): 2986-2994, 2014 May.
Article in English | MEDLINE | ID: mdl-24815278

ABSTRACT

Previous research employing a real-time auditory perturbation paradigm has shown that talkers monitor their own speech attributes, such as fundamental frequency, vowel intensity, vowel formants, and fricative noise, as part of speech motor control. In the case of vowel formants or fricative noise, the manipulated parameter was spectral information about the filter function of the vocal tract. However, segments can be contrasted by parameters other than spectral configuration, and it is possible that the feedback system monitors phonation timing in the way it monitors spectral information. This study examined whether talkers exhibit compensatory behavior when information about voicing is manipulated. When talkers received feedback of the cognate of the intended voicing category (saying "tipper" while hearing "dipper" or vice versa), they changed their voice onset time and, in some cases, the following vowel.


Subject(s)
Feedback, Psychological/physiology; Feedback, Sensory/physiology; Perceptual Distortion/physiology; Phonation/physiology; Speech Perception/physiology; Adaptation, Physiological/physiology; Adolescent; Computer Systems; Female; Humans; Motor Skills/physiology; Noise; Psychoacoustics; Speech Production Measurement; Time Factors; Young Adult
13.
J Exp Psychol Hum Percept Perform; 40(1): 33-39, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24364710

ABSTRACT

An ongoing challenge in scene perception is identifying the factors that influence how we explore our visual world. Using multiple versions of paintings as a tool to control for high-level influences, we show that variation in the visual details of a painting causes differences in observers' gaze despite constant task and content. Further, we show that switching the locations of highly salient regions through textural manipulation produces a corresponding switch in eye movement patterns. Our results show that salient regions and gaze behavior are not simply correlated; variation in saliency through textural differences causes observers to direct their viewing accordingly. This work demonstrates the direct contribution of low-level factors to visual exploration by showing that examination of a scene, even for aesthetic purposes, can be easily manipulated by altering the low-level properties, and hence the saliency, of the scene.


Subject(s)
Attention/physiology; Eye Movements/physiology; Paintings/psychology; Visual Perception/physiology; Adult; Eye Movement Measurements/instrumentation; Female; Fixation, Ocular/physiology; Humans; Male; Young Adult
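
For readers who want to compute the kind of bottom-up saliency this study manipulated, below is a sketch of the spectral-residual saliency model (Hou & Zhang, 2007). It is a generic model chosen for illustration, not the one used in the paper, and the random input image merely stands in for a painting.

```python
import numpy as np
from scipy import ndimage

def spectral_residual_saliency(image):
    """Saliency map via the spectral-residual method (Hou & Zhang, 2007)."""
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-9)
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    recon = np.fft.ifft2(np.exp(residual + 1j * np.angle(f)))
    return ndimage.gaussian_filter(np.abs(recon) ** 2, sigma=3)

painting = np.random.default_rng(0).random((128, 128))  # stand-in image
saliency = spectral_residual_saliency(painting)
peak = np.unravel_index(saliency.argmax(), saliency.shape)
print("most salient region near pixel (row, col):", peak)
```
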
14.
J Acoust Soc Am; 133(5): 2993-3003, 2013 May.
Article in English | MEDLINE | ID: mdl-23654403

ABSTRACT

The representation of speech goals was explored using an auditory feedback paradigm. When talkers produce vowels whose formant structure is perturbed in real time, they compensate to preserve the intended goal: when vowel formants are shifted up or down in frequency, participants change their formant frequencies in the direction opposite to the feedback perturbation. In this experiment, the specificity of vowel representation was explored by examining the magnitude of vowel compensation when the second formant frequency of a vowel was perturbed for speakers of two different languages (English and French). Even though the target vowel was the same for both language groups, the pattern of compensation differed. French speakers compensated for smaller perturbations and made larger compensations overall. Moreover, French speakers modified the third formant of their vowels to strengthen the compensation, even though the third formant was not perturbed; English speakers did not alter their third formant. Changes in the perceptual goodness ratings by the two groups of participants were consistent with the threshold for initiating vowel compensation in production. These results suggest that vowel goals specify not only the quality of the vowel but also the relationship of the vowel to the vowel space of the spoken language.


Subject(s)
Phonetics; Speech Acoustics; Speech Production Measurement; Voice Quality; Adult; Feedback, Sensory; Female; Humans; Signal Processing, Computer-Assisted; Speech Perception; Time Factors; Young Adult
15.
Psychol Sci; 24(4): 423-431, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23462756

ABSTRACT

Mounting physiological and behavioral evidence has shown that the detectability of a visual stimulus can be enhanced by a simultaneously presented sound. The mechanisms underlying these cross-sensory effects, however, remain largely unknown. Using continuous flash suppression (CFS), we rendered a complex, dynamic visual stimulus (i.e., a talking face) consciously invisible to participants. We presented the visual stimulus together with a suprathreshold auditory stimulus (i.e., a voice speaking a sentence) that either matched or mismatched the lip movements of the talking face. We compared how long it took for the talking face to overcome interocular suppression and become visible to participants in the matched and mismatched conditions. Our results showed that the detection of the face was facilitated by the presentation of a matching auditory sentence, in comparison with the presentation of a mismatching sentence. This finding indicates that the registration of audiovisual correspondences occurs at an early stage of processing, even when the visual information is blocked from conscious awareness.


Subject(s)
Awareness/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Consciousness/physiology; Female; Humans; Inhibition, Psychological; Male; Photic Stimulation; Signal Detection, Psychological; Young Adult
16.
J Neurosci; 33(10): 4339-4348, 2013 Mar 6.
Article in English | MEDLINE | ID: mdl-23467350

ABSTRACT

The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.


Subject(s)
Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping; Brain/physiology; Feedback, Sensory/physiology; Speech/physiology; Acoustic Stimulation; Adult; Auditory Pathways/blood supply; Brain/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Reaction Time/physiology; Young Adult
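
The core multivoxel logic of the study above, that an "error signal" region should show similar activation patterns across acoustically different distortions during speaking but not during listening, can be sketched with plain pattern correlations. All patterns below are simulated, and the voxel count and noise level are arbitrary assumptions.

```python
import numpy as np

def pattern_similarity(a, b):
    """Pearson correlation between two voxel activation patterns."""
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
error_code = rng.standard_normal(200)  # shared "error signal" over 200 voxels

# During speaking, both distortion types engage the shared error code;
# during passive listening, the simulated patterns are unrelated noise.
speak_fb1 = error_code + 0.5 * rng.standard_normal(200)
speak_fb2 = error_code + 0.5 * rng.standard_normal(200)
listen_fb1 = rng.standard_normal(200)
listen_fb2 = rng.standard_normal(200)

print("speaking, across distortions: ",
      round(pattern_similarity(speak_fb1, speak_fb2), 2))
print("listening, across distortions:",
      round(pattern_similarity(listen_fb1, listen_fb2), 2))
```
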
17.
Seeing Perceiving; 25(1): 87-106, 2012.
Article in English | MEDLINE | ID: mdl-22353570

ABSTRACT

Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely, the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect audiovisual integration in a McGurk speech task, and whether the cognitive load task would cause more interference at larger offsets. The amount of integration was measured as the proportion of responses in incongruent trials that did not correspond to the audio (McGurk responses). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task influenced gaze behavior. Results show a modest but statistically significant decrease in the number of McGurk responses when subjects also performed the cognitive load task, and this effect was relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of the cognitive load task: gaze was less centralized on the face, less time was spent looking at the mouth, and more time was spent looking at the eyes when the concurrent cognitive load task was added to the speech task.


Subject(s)
Auditory Perception/physiology; Speech Perception/physiology; Visual Perception/physiology; Adolescent; Adult; Cognition; Female; Humans; Male; Memory/physiology; Young Adult
18.
Curr Biol; 22(2): 113-117, 2012 Jan 24.
Article in English | MEDLINE | ID: mdl-22197241

ABSTRACT

Species-specific vocalizations fall into two broad categories: those that emerge during maturation, independent of experience, and those that depend on early life interactions with conspecifics. Human language and the communication systems of a small number of other species, including songbirds, fall into the latter class of vocal learning. Self-monitoring has been assumed to play an important role in the vocal learning of speech, and studies demonstrate that perception of one's own voice is crucial for both the development and the lifelong maintenance of vocalizations in humans and songbirds. Experimental modifications of auditory feedback can also change vocalizations in both humans and songbirds. However, with the exception of large manipulations of timing, no study to date has directly examined the use of auditory feedback in speech production in children under the age of 4. Here we use a real-time formant perturbation task to compare the responses of toddlers, children, and adults to altered feedback. Children and adults reacted to this manipulation by changing their vowels in a direction opposite to the perturbation. Surprisingly, toddlers' speech did not change in response to altered feedback, suggesting that long-held assumptions regarding the role of self-perception in articulatory development need to be reconsidered.


Subject(s)
Child Development; Speech; Adolescent; Auditory Perception; Child; Child, Preschool; Feedback, Sensory; Female; Humans; Young Adult
19.
J Acoust Soc Am; 130(5): 2978-2986, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22087926

ABSTRACT

Past studies have shown that when formants are perturbed in real time, speakers spontaneously compensate for the perturbation by changing their formant frequencies in the direction opposite to the perturbation. Further, the pattern of these results suggests that the processing of auditory feedback error operates at a purely acoustic level. This hypothesis was tested by comparing the responses of three language groups to real-time formant perturbations: (1) native English speakers producing the English vowel /ɛ/, (2) native Japanese speakers producing the Japanese vowel /e̞/, and (3) native Japanese speakers learning English, producing /ɛ/. All three groups showed similar production patterns when F1 was decreased; however, when F1 was increased, the Japanese groups did not compensate as much as the native English speakers. Given this asymmetry, the hypothesis that compensatory production for formant perturbation operates at a purely acoustic level was rejected; rather, some level of phonological processing influences feedback processing behavior.


Subject(s)
Feedback, Psychological; Multilingualism; Phonetics; Speech Acoustics; Speech Perception; Adolescent; Analysis of Variance; Female; Humans; Speech Production Measurement; Time Factors; Young Adult
20.
PLoS One; 6(4): e18655, 2011 Apr 7.
Article in English | MEDLINE | ID: mdl-21490928

ABSTRACT

We describe an illusion in which a stranger's voice, when presented as the auditory concomitant of a participant's own speech, is perceived as a modified version of their own voice. When the congruence between utterance and feedback breaks down, the illusion is also broken. Compared to a baseline condition in which participants heard their own voice as feedback, hearing a stranger's voice induced robust changes in the fundamental frequency (F0) of their production. Moreover, the shift in F0 appears to be feedback dependent, since shift patterns depended reliably on the relationship between the participant's own F0 and the stranger-voice F0. The shift in F0 was evident both when the illusion was present and after it was broken, suggesting that auditory feedback from production may be used separately for self-recognition and for vocal motor control. Our findings indicate that self-recognition of voices, like other body attributes, is malleable and context dependent.


Subject(s)
Speech Perception/physiology; Voice/physiology; Acoustic Stimulation; Adolescent; Adult; Auditory Perception/physiology; Female; Humans; Speech Acoustics; Young Adult
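
The F0 measurements behind the shifts reported above can be illustrated with a minimal autocorrelation pitch estimator. Production-grade voice work uses more robust trackers; the harmonic test signal, frame length, and search range below are assumptions for the demo.

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Estimate F0 (Hz) of a voiced frame from its autocorrelation peak."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Synthetic demo: a 140 Hz harmonic-rich "voice" sampled at 16 kHz.
sr, f0_true = 16000, 140.0
t = np.arange(sr // 4) / sr
frame = sum(np.sin(2 * np.pi * f0_true * k * t) / k for k in range(1, 6))
print(f"estimated F0: {estimate_f0(frame, sr):.1f} Hz (true: {f0_true} Hz)")
```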