Results 1 - 17 of 17
1.
J Acoust Soc Am ; 150(3): 1830, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34598614

ABSTRACT

To clarify the acoustic variables for predicting and classifying Japanese singleton and geminate consonants, raw and logarithmic durations of the consonants and their related segments were examined using 12 minimal pair words that were pronounced in a carrier sentence at various speaking rates by 20 native Japanese speakers. Regression and discriminant analyses revealed that the logarithmic durations were better at predicting and classifying Japanese singleton and geminate consonants than the raw durations used in many previous studies. Specifically, the best acoustic variables were the logarithmic duration of the consonant's closure or frication and the logarithmic average duration of the mora in the preceding carrier phrase. These results suggest that logarithmic durations are relational invariant acoustic variables that can cope with the durational variations of singleton and geminate consonants in a wide range of speaking rates.
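
A minimal sketch of the classification idea described above, assuming hypothetical durations and feature names (this is not the authors' code or data): a linear discriminant is fitted to the logarithms of the consonant's closure duration and of the average mora duration in the preceding carrier phrase.

```python
# Hedged illustration: linear discriminant analysis over logarithmic durations,
# in the spirit of the abstract above. All values and names are invented.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical measurements (seconds): closure/frication duration of the target
# consonant and average mora duration of the preceding carrier phrase.
closure_s = np.array([0.045, 0.050, 0.062, 0.110, 0.128, 0.140])
mora_s    = np.array([0.155, 0.130, 0.170, 0.140, 0.172, 0.150])
labels    = np.array(["singleton"] * 3 + ["geminate"] * 3)

# Log-transforming the durations is the step the abstract highlights: a uniform
# change in speaking rate shifts both features additively rather than scaling them.
X = np.column_stack([np.log(closure_s), np.log(mora_s)])

lda = LinearDiscriminantAnalysis().fit(X, labels)
print(lda.predict(np.log([[0.055, 0.160], [0.120, 0.160]])))  # singleton, geminate
```

Working in log space means that speeding up or slowing down speech proportionally moves both features by the same additive amount, which is one way to read the abstract's claim that logarithmic durations behave as relational invariants.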


Subject(s)
Phonetics , Speech Acoustics , Acoustics , Japan , Language
2.
Hong Kong J Occup Ther ; 27(1): 35-41, 2016 Jun.
Article in English | MEDLINE | ID: mdl-30186059

ABSTRACT

OBJECTIVE/BACKGROUND: The prevalence of depression in women is twice that in men. However, return-to-work rehabilitation programmes for patients with depression in Japan mainly focus on men. Japanese working women usually carry the central role in housework in addition to paid work. We therefore hypothesized that Japanese working women with depression need a support programme for housework as well as for paid work. The purpose of this study was to investigate the stress factors associated with the presence of depression, in both paid work and housework, among working women. METHODS: This study recruited 35 women with depression and 35 women without depression. We carried out a cross-sectional investigation with two questionnaires having the same structure: the National Institute for Occupational Safety and Health (NIOSH) Generic Job Stress Questionnaire (for paid work) and the NIOSH Generic Housekeeping Labor Stress Questionnaire (for housework). We extracted the stress factors contributing to the presence of depression using logistic regression. RESULTS: Three stress factors were identified: two in housework and one in paid work. In housework, variance in workload and underutilization of abilities were associated with the presence of depression. In paid work, interpersonal conflict was an associated factor. CONCLUSION: To support working women with depression, rehabilitation programmes must adequately address variance in workload and underutilization of abilities in housework, and interpersonal conflict in paid work.

3.
Phonetica ; 72(1): 43-60, 2015.
Article in English | MEDLINE | ID: mdl-26226989

ABSTRACT

The theory of relational acoustic invariance claims that there are stable acoustic properties in speech signals that correspond to a phonological feature, and that the perceptual system utilizes these acoustic properties for stable perception of a phoneme. The present study examines whether such an invariance exists in native listeners' perception of Japanese singleton and geminate stops despite variability in speaking rate and word length, and whether this perception corresponds to production. Native Japanese listeners identified singleton and geminate stops in continua of 3- and 4-mora words spoken at different speaking rates. Results indicated that the perception boundary is well predicted by a linear function of two variables: the duration of the stop closure and the duration of the (C)V(C)CV portion (the portion containing the contrasting stops) of the 3- and 4-mora words. In addition, these two variables were in a consistent relationship for both perception and production of words containing 2-4 moras. The results support the relational acoustic invariance theory.
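
As a rough illustration of the linear perception boundary reported above, the sketch below treats the boundary closure duration as a linear function of the (C)V(C)CV-portion duration. The slope and intercept are placeholders, not fitted values from the study.

```python
# Hedged illustration of a rate-dependent category boundary: the closure duration
# at the singleton/geminate boundary grows linearly with the duration of the
# (C)V(C)CV portion. Coefficients below are placeholders, not the paper's fit.
def boundary_closure_ms(portion_ms, slope=0.30, intercept=10.0):
    """Closure duration (ms) at the category boundary for a given portion duration (ms)."""
    return slope * portion_ms + intercept

def classify(closure_ms, portion_ms):
    """Label a token by which side of the illustrative boundary it falls on."""
    return "geminate" if closure_ms > boundary_closure_ms(portion_ms) else "singleton"

print(classify(closure_ms=70,  portion_ms=300))   # singleton (boundary at 100 ms here)
print(classify(closure_ms=120, portion_ms=300))   # geminate
```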


Subject(s)
Acoustics , Phonetics , Speech Acoustics , Speech Perception , Adolescent , Asian People , Female , Humans , Japan , Language , Male , Sound Spectrography , Speech Production Measurement
4.
J Acoust Soc Am ; 132(3): 1614-25, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978890

ABSTRACT

This study examined the durational structure of the single versus geminate stop distinction produced in three- and four-mora words of Japanese, (C1)V1(C2)C2V2X [(C2)C2 = the contrasting consonants; X = a CV mora, the moraic nasal, or a long vowel as part of V2]. The questions addressed were how factors such as speaking rate, segmental variability, and moraic composition of words affected the stop quantity distinction in words longer than the well-studied disyllabic words, and whether there exists an invariant parameter that classifies these two stop categories. Results showed that all of these factors systematically affected the duration of the contrasting stop closure, the [(C1)V1(C2)C2V2] unit, and the entire three- and four-mora words. However, the durational units of moras and words were well structured, and the ratio of the contrasting stop closure to the [(C1)V1(C2)C2V2] unit, as well as the ratio of the closure to the entire word, were found to be invariant indicators of the stop quantity distinction. These results support the theory of relational acoustic invariance [Pickett et al., Phonetica 56, 135-157 (1999)] on the production side. Furthermore, the results provide insight into different versions of the Japanese mora hypothesis [Han, The Study of Sounds 10, 65-80 (1962); Port et al., J. Acoust. Soc. Am. 81(5), 1574-1585 (1987)], which have been under debate for five decades.
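
The invariance claim above can be restated as a claim about durational ratios. The sketch below, using made-up durations rather than data from the study, computes the two ratios the abstract reports as stable indicators of the stop quantity distinction.

```python
# Hedged illustration: the two ratios reported as invariant across speaking rates.
# Durations are invented; they only show that proportional stretching of a word
# leaves both ratios essentially unchanged.
def closure_ratios(closure_ms, unit_ms, word_ms):
    """Return (closure / (C1)V1(C2)C2V2 unit, closure / entire word)."""
    return closure_ms / unit_ms, closure_ms / word_ms

# A faster and a slower token of the same hypothetical geminate word:
print(closure_ratios(closure_ms=110, unit_ms=320, word_ms=520))  # ~ (0.34, 0.21)
print(closure_ratios(closure_ms=150, unit_ms=440, word_ms=710))  # ~ (0.34, 0.21)
```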


Subject(s)
Language , Phonetics , Speech Acoustics , Voice Quality , Acoustics , Analysis of Variance , Discriminant Analysis , Female , Humans , Male , Sound Spectrography , Speech Production Measurement , Time Factors
5.
Psicológica (Valencia, Ed. impr.) ; 33(2): 175-207, 2012.
Article in English | IBECS | ID: ibc-100387

ABSTRACT


Using an artificial language learning manipulation, Maye, Werker, and Gerken (2002) demonstrated that infants' speech sound categories change as a function of the distributional properties of the input. In a recent study, Werker et al. (2007) showed that infant-directed speech (IDS) input contains reliable acoustic cues that support distributional learning of language-specific vowel categories: English cues are spectral and durational; Japanese cues are exclusively durational. In the present study we extend these results in two ways: (1) we examine a language, Catalan, which distinguishes vowels solely on the basis of spectral differences, and (2) because infants learn from overheard adult speech as well as from IDS (Oshima-Takane, 1988), we analyze adult-directed speech (ADS) in all three languages. Analyses revealed robust differences in the cues of each language and demonstrated that these cues alone are sufficient to yield language-specific vowel categories. This demonstration of language-specific differences in the distribution of cues to phonetic category structure in ADS provides additional evidence for the types of cues available to infants as they establish native phonetic categories.


Subject(s)
Humans , Male , Female , Young Adult , Adult , Phonetics , Articulation Disorders/psychology , Lipreading , Speech/physiology , Evidence-Based Medicine/methods , Acoustic Impedance Tests/methods , Acoustic Impedance Tests , Acoustic Stimulation/psychology , Psychoacoustics , Analysis of Variance , Odds Ratio , Probability , Verbal Behavior/physiology
6.
J Acoust Soc Am ; 128(4): 2049-58, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968375

ABSTRACT

The theory of relational acoustic invariance [Pickett, E. R., et al. (1999). Phonetica 56, 135-157] was tested with the Japanese stop quantity distinction in disyllables spoken at various rates. The questions were whether the perceptual boundary between the two phonemic categories of single and geminate stops is invariant across rates, and whether there is a close correspondence between the perception and production boundaries. The durational ratio of stop closure to word (where the "word" was defined as disyllables) was previously found to be an invariant parameter that classified the two categories in production, but the present study found that this ratio varied with different speaking rates in perception. However, regression and discriminant analyses of perception and production data showed that treating stop closure as a function of word duration with an intercept term represented the perception and production boundaries very well. This result indicated that the durational ratio of adjusted stop closure (i.e., closure with an added constant) to the word was invariant and distinguished the two phonemic categories clearly. Taken together, the results support the relational acoustic invariance theory, and help refine the theory with regard to exactly what form 'invariance' can take.
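
The refinement described above, modeling the boundary closure as a linear function of word duration with an intercept rather than as a fixed closure-to-word ratio, can be written out compactly. The coefficients in the sketch are placeholders, not the values fitted in the paper.

```python
# Hedged illustration: if the boundary is closure = A * word + B, then the ratio
# of the adjusted closure (closure - B, i.e. closure plus a constant when B < 0)
# to the word duration equals A at every speaking rate. Placeholder coefficients.
A, B = 0.28, -20.0  # illustrative slope and intercept (ms domain), not fitted values

def adjusted_ratio(closure_ms, word_ms):
    """Ratio of adjusted closure to word duration; constant (= A) on the boundary."""
    return (closure_ms - B) / word_ms

def boundary_closure_ms(word_ms):
    """Closure duration at the modeled singleton/geminate boundary."""
    return A * word_ms + B

for word_ms in (400.0, 500.0, 650.0):                    # tokens at different speaking rates
    closure = boundary_closure_ms(word_ms)
    print(round(adjusted_ratio(closure, word_ms), 3))     # prints 0.28 each time
```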


Subject(s)
Phonetics , Speech Acoustics , Speech Perception , Acoustic Stimulation , Adult , Audiometry , Discriminant Analysis , Female , Humans , Japan , Male , Regression Analysis , Time Factors , Young Adult
7.
J Child Lang ; 37(2): 319-40, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19490747

ABSTRACT

In this work, we examine a context in which a conflict arises between two roles that infant-directed speech (IDS) plays: making language structure salient and modeling the adult form of a language. Vowel devoicing in fluent adult Japanese creates violations of the canonical Japanese consonant-vowel word structure pattern by systematically devoicing particular vowels, yielding surface consonant clusters. We measured vowel devoicing rates in a corpus of infant- and adult-directed Japanese speech, for both read and spontaneous speech, and found that the mothers in our study preserve the fluent adult form of the language and mask underlying phonological structure by devoicing vowels in infant-directed speech at virtually the same rates as those for adult-directed speech. The results highlight the complex interrelationships among the modifications to adult speech that comprise infant-directed speech, and that form the input from which infants begin to build the eventual mature form of their native language.


Subject(s)
Mother-Child Relations , Phonetics , Speech , Adult , Female , Humans , Infant , Language , Linguistics , Male , Mothers , Psycholinguistics , Reading , Speech Acoustics , Speech Production Measurement
8.
Dev Psychol ; 45(1): 236-47, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19210005

ABSTRACT

This study investigated vowel length discrimination in infants from 2 language backgrounds, Japanese and English, in which vowel length is either phonemic or nonphonemic. Experiment 1 revealed that English 18-month-olds discriminate short and long vowels although vowel length is not phonemically contrastive in English. Experiments 2 and 3 revealed that Japanese 18-month-olds also discriminate the pairs but in an asymmetric manner: They detected only the change from long to short vowel, but not the change in the opposite direction, although English infants in Experiment 1 detected the change in both directions. Experiment 4 tested Japanese 10-month-olds and revealed a symmetric pattern of discrimination similar to that of English 18-month-olds. Experiment 5 revealed that native adult Japanese speakers, unlike Japanese 18-month-old infants who are presumably still developing phonological perception, ultimately acquire a symmetrical discrimination pattern for the vowel contrasts. Taken together, our findings suggest that English 18-month-olds and Japanese 10-month-olds perceive vowel length using simple acoustic-phonetic cues, whereas Japanese 18-month-olds perceive it under the influence of the emerging native phonology, which leads to a transient asymmetric pattern in perception.


Subject(s)
Language , Multilingualism , Speech Perception , Verbal Learning , Acoustic Stimulation/methods , Analysis of Variance , Asian People , Female , Habituation, Psychophysiologic/physiology , Humans , Infant , Logistic Models , Male , Sound Spectrography , Speech Discrimination Tests , Time Factors
9.
Infancy ; 14(4): 488-499, 2009 Jul 08.
Article in English | MEDLINE | ID: mdl-32693450

ABSTRACT

Six-, 12-, and 18-month-old English-hearing infants were tested on their ability to discriminate nonword forms ending in the final stop consonants /k/ and /t/ from their counterparts with a final /s/ added, resulting in final clusters /ks/ and /ts/, in a habituation-dishabituation looking-time paradigm. Infants at all three ages demonstrated an ability to discriminate this type of contrast, a contrast that constitutes one phonetic cue for the English morphological concepts of plural, possession, and person. These results suggest that across a significant portion of the development of infants' speech perception, this type of final contrast is discriminable.

10.
J Acoust Soc Am ; 122(3): 1332, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17927395

ABSTRACT

Japanese infants at the ages of 6, 12, and 18 months were tested on their ability to discriminate three nonsense words with different phonotactic status: canonical keetsu, noncanonical but possible keets, and noncanonical and impossible keet. The results showed that 12- and 18-month-olds discriminate the keets/keetsu pair, but infants in all age groups fail to discriminate the keets/keet pair. Taken together with the findings of our previous study [Kajikawa et al., J. Acoust. Soc. Am. 120(4), 2278-2284 (2006)], these results suggest that Japanese infants develop perceptual sensitivity to native phonotactics after 6 months of age, and that this sensitivity is limited to canonical patterns at this early developmental stage.


Subject(s)
Aging/physiology , Auditory Perception/physiology , Discrimination, Psychological , Language , Speech , Female , Habituation, Psychophysiologic , Humans , Infant , Japan , Male , Patient Selection
11.
Proc Natl Acad Sci U S A ; 104(33): 13273-8, 2007 Aug 14.
Article in English | MEDLINE | ID: mdl-17664424

ABSTRACT

Infants rapidly learn the sound categories of their native language, even though they do not receive explicit or focused training. Recent research suggests that this learning is due to infants' sensitivity to the distribution of speech sounds and that infant-directed speech contains the distributional information needed to form native-language vowel categories. An algorithm, based on Expectation-Maximization, is presented here for learning the categories from a sequence of vowel tokens without (i) receiving any category information with each vowel token, (ii) knowing in advance the number of categories to learn, or (iii) having access to the entire data ensemble. When exposed to vowel tokens drawn from either English or Japanese infant-directed speech, the algorithm successfully discovered the language-specific vowel categories (/ɪ/, /i/, /ɛ/, /e/ for English; /i/, /iː/, /e/, /eː/ for Japanese). A nonparametric version of the algorithm, closely related to neural network models based on topographic representation and competitive Hebbian learning, also was able to discover the vowel categories, albeit somewhat less reliably. These results reinforce the proposal that native-language speech categories are acquired through distributional learning and that such learning may be instantiated in a biologically plausible manner.
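
The abstract describes an online, EM-based learner that is told neither how many vowel categories exist nor which tokens belong to which category. The sketch below is a rough offline stand-in rather than the authors' algorithm: it fits Gaussian mixtures of increasing size to synthetic two-formant tokens and keeps the model favored by BIC.

```python
# Hedged stand-in for the unsupervised category discovery described above.
# Not the paper's incremental algorithm: a batch Gaussian-mixture fit with the
# number of components chosen by BIC. Tokens are synthetic, not infant-directed speech.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two synthetic "vowel categories" in an (F1, F2) plane, in Hz.
tokens = np.vstack([
    rng.normal([300, 2300], [30, 80], size=(200, 2)),
    rng.normal([450, 2000], [30, 80], size=(200, 2)),
])

models = [GaussianMixture(n_components=k, random_state=0).fit(tokens) for k in range(1, 6)]
best = min(models, key=lambda m: m.bic(tokens))
print("categories discovered:", best.n_components)   # expected: 2
print("category means (Hz):\n", best.means_.round(1))
```

The learner in the paper updates its category estimates token by token and prunes candidate categories as it goes; the batch/BIC loop here only conveys the flavor of discovering both the number and the location of the categories from unlabeled tokens.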


Subject(s)
Learning , Speech , Algorithms , Humans , Infant
12.
J Acoust Soc Am ; 121(4): 2272-82, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17471741

ABSTRACT

This paper describes a longitudinal analysis of the vowel development of two Japanese infants in terms of spectral resonant peaks. This study aims to investigate when and how the two infants become able to produce categorically separated vowels, and covers the ages of 4 to 60 months in order to provide detailed findings on the developmental process of speech production. The two lower spectral peaks were estimated from vowels extracted from natural spontaneous speech produced by the infants. Phoneme labeled and transcription-independent unlabeled data analyses were conducted. The labeled data analysis revealed longitudinal trends in the developmental change, which correspond to the articulation positions of the tongue and the rapid enlargement of the articulatory organs. In addition, the distribution of the two spectral peaks demonstrates the vowel space expansion that occurs with age. An unlabeled data analysis technique derived from the linear discriminant analysis method was introduced to measure the vowel space expansion quantitatively. It revealed that the infant's vowel space becomes similar to that of an adult in the early stages. In terms of both labeled and unlabeled properties, these results suggested that infants become capable of producing categorically separated vowels by 24 months.
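
The paper's transcription-independent measure of vowel space expansion is derived from linear discriminant analysis; the sketch below is a simpler, hypothetical stand-in rather than that measure, tracking how widely (F1, F2) tokens spread at two recording sessions via the generalized variance. All values are synthetic placeholders.

```python
# Hedged illustration of quantifying vowel-space expansion from the two lowest
# spectral peaks. This is not the paper's LDA-derived measure, only a simple proxy:
# the determinant of the (F1, F2) covariance matrix grows as the space expands.
import numpy as np

def vowel_space_spread(formants_hz):
    """Generalized variance of (F1, F2) tokens; larger means a wider vowel space."""
    return float(np.linalg.det(np.cov(formants_hz, rowvar=False)))

rng = np.random.default_rng(1)
early = rng.normal([600, 1800], [ 60, 150], size=(100, 2))   # compact early vowel space
later = rng.normal([600, 1800], [150, 400], size=(100, 2))   # expanded later vowel space
print(vowel_space_spread(early) < vowel_space_spread(later))  # True: the space has grown
```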


Subject(s)
Asian People , Language , Phonetics , Speech/physiology , Age Factors , Follow-Up Studies , Humans , Infant , Models, Biological , Speech Production Measurement
13.
Cognition ; 103(1): 147-62, 2007 Apr.
Article in English | MEDLINE | ID: mdl-16707119

ABSTRACT

Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: evidence for perceptual reorganization during the first year of life. Infant Behaviour and Development, 7, 49-63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101-B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native language phonetic categories.


Subject(s)
Phonetics , Speech Perception , Verbal Behavior , Verbal Learning , Canada , Humans , Infant , Japan
14.
J Acoust Soc Am ; 120(4): 2278-84, 2006 Oct.
Article in English | MEDLINE | ID: mdl-17069323

ABSTRACT

This study explored sensitivity to word-level phonotactic patterns in English and Japanese monolingual infants. Infants at the ages of 6, 12, and 18 months were tested on their ability to discriminate between test words using a habituation-switch experimental paradigm. All of the test words, neek, neeks, and neekusu, are phonotactically legitimate in English, whereas the first two are critically noncanonical in Japanese. This language-specific phonotactic congruence influenced infants' discrimination performance. English-learning infants could discriminate between neek and neeks at the age of 18 months, but Japanese infants could not. Infants of both language groups showed a similar developmental pattern for discrimination of neek and neeks, but Japanese infants showed a different trajectory from English infants for neekusu/neeks. These differences reflect the different status of these word patterns with respect to the phonotactics of the two languages, and reveal early sensitivity to subtle phonotactic and language input patterns in each language.


Subject(s)
Language , Speech Perception , Verbal Behavior , Acoustic Stimulation , Female , Humans , Infant , Male
15.
J Acoust Soc Am ; 119(3): 1636-47, 2006 Mar.
Article in English | MEDLINE | ID: mdl-16583908

ABSTRACT

The fundamental frequencies (F0) of daily life utterances of Japanese infants and their parents from the infant's birth until about 5 years of age were longitudinally analyzed. The analysis revealed that an infant's F0 mean decreases as a function of month of age. It also showed that within- and between-utterance variability in infant F0 is different before and after the onset of two-word utterances, probably reflecting the difference between linguistic and nonlinguistic utterances. Parents' F0 mean is high in infant-directed speech (IDS) before the onset of two-word utterances, but it gradually decreases and reaches almost the same value as in adult-directed speech after the onset of two-word utterances. The between-utterance variability of parents' F0 in IDS is large before the onset of two-word utterances and it subsequently becomes smaller. It is suggested that these changes of parents' F0 are closely related to the feasibility of communication between infants and parents.


Subject(s)
Child Language , Linguistics , Parent-Child Relations , Speech Acoustics , Adult , Child, Preschool , Communication , Female , Humans , Infant , Infant, Newborn , Longitudinal Studies , Male
16.
Lang Speech ; 48(Pt 2): 185-201, 2005.
Article in English | MEDLINE | ID: mdl-16411504

ABSTRACT

The canonical form for Japanese words is (Consonant)Vowel(Consonant)Vowel... However, a regular process of high vowel devoicing between voiceless consonants and word-finally after voiceless consonants results in consonant clusters and word-final consonants, apparent violations of that phonotactic pattern. We investigated Japanese adults' perceptions of these violations, asking them to rate both canonical and noncanonical nonsense forms on a scale of goodness. Results indicate that adults' judgments are guided by an implicit understanding of both typical canonical forms and the appropriate contexts for vowel devoicing.


Subject(s)
Phonetics , Speech Perception , Adult , Female , Humans , Japan , Language Tests , Male
17.
J Child Lang ; 31(1): 215-30, 2004 Feb.
Article in English | MEDLINE | ID: mdl-15053091

ABSTRACT

This study aimed to clarify the development of conversational style in Japanese mother-child interactions. We focused on the frequency of speech overlap as an index of Japanese conversational style, with particular attention to ne, a particle produced by the speaker, and to backchannels, such as 'uh-huh', produced by the listener that support sympathetic conversation. The results of longitudinal observations of two Japanese mother-child dyads from approximately 0;11 to 3;3 suggest that an adultlike conversational style with frequent overlaps emerges in Japanese child-directed speech around the two-word utterance period, and a child's development of ne use is closely related to this shift.


Subject(s)
Communication , Mother-Child Relations , Child Language , Child, Preschool , Female , Humans , Infant , Language Development , Longitudinal Studies , Verbal Behavior