1.
Rev. logop. foniatr. audiol. (Ed. impr.) ; 29(3): 186-194, sept. 2009. tab, graf
Article in Spanish | IBECS | ID: ibc-61977

ABSTRACT

Cochlear implant (CI) improves hearing, but communication ability still depends on several factors. The present study assesses the relation between phonological categorization ability and silent reading performance in deaf children with cochlear implants. We examine both categorical perception (CP) and boundary precision (BP) performance, two phonological variables that we analyzed together with results of silent reading. We compared 22 implanted children to 55 normal-hearing children using different age factors. The results showed that the development of voicing perception in CI children is similar to that in normal-hearing controls with the same auditory experience. This suggests a delay, and not a deficit, of voicing categorization, as opposed to place of articulation. Perception of phonological features plays an important role in predicting reading results of implanted children. Both implanted and normal-hearing children showed similar reading performance at the same chronological age. This suggests that implanted children seek other strategies to compensate for their lack of perceptual acuity for phonological features. (AU)
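The boundary-precision measure rests on locating a category boundary along a phonetic continuum such as voice onset time. A minimal sketch of one common estimate, the 50% crossover of identification proportions found by linear interpolation; the continuum values and response data below are illustrative assumptions, not taken from this study:

```python
def category_boundary(levels, prop_voiced):
    """Stimulus value at which identification crosses 50%, found by
    linear interpolation between the two adjacent points that
    straddle 0.5."""
    for i in range(len(levels) - 1):
        p0, p1 = prop_voiced[i], prop_voiced[i + 1]
        if (p0 - 0.5) * (p1 - 0.5) <= 0 and p0 != p1:
            x0, x1 = levels[i], levels[i + 1]
            return x0 + (0.5 - p0) * (x1 - x0) / (p1 - p0)
    return None  # responses never cross 50%

# Hypothetical identification data: proportion of "voiced" responses
# along a voice-onset-time continuum (ms).
vot = [0, 10, 20, 30, 40, 50]
p = [0.98, 0.95, 0.80, 0.30, 0.05, 0.02]
boundary = category_boundary(vot, p)  # crossover between 20 and 30 ms
```

A sharper identification slope around the crossover would correspond to higher boundary precision.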


Subject(s)
Humans , Male , Female , Child , Deafness/surgery , Cochlear Implantation , Reading , Communication , Communication Disorders/etiology , Auditory Perception/classification
2.
Rev. CEFAC ; 8(4): 493-500, out.-dez 2006. graf
Article in Portuguese | LILACS | ID: lil-439826

ABSTRACT

Purpose: to compare, from an auditory-perceptual perspective, the vocal characteristics of the enka and mudo enka Japanese singing genres. Methods: ten recordings from commercially available CDs were selected: five by professional Japanese singers of the enka genre and five of the mudo enka genre. We elaborated a voice evaluation protocol covering the vocal characteristics found in both genres of Japanese music. The evaluation was carried out by three speech pathologists, certified as voice specialists by the Speech Pathology Federal Board, who determined the most salient vocal characteristics of each genre using a literature-based classification. Results: in the enka genre, kobushi, vibrato, and crescendos and decrescendos were present in 100% of the vocal samples; metal was found in 80%, nasality and register alternation in 90%, and breathiness in 70%. In the mudo enka genre, crescendos and decrescendos were present in 100% of the vocal samples; breathiness was found in 70%, vibrato in 90%, register alternation in 50%, metal in 40%, and nasality and kobushi in 20%. Conclusion: comparing the two genres, enka and mudo enka, we verified the strong presence of kobushi in the enka genre, along with a greater predominance of vibrato, metal, nasality, register alternation, and crescendos and decrescendos. Breathiness was found in equal proportion in both genres. The identification of these vocal characteristics is useful to speech pathologists as well as singing teachers when working with singers who are learning or refining Japanese singing.


Subject(s)
Humans , Adult , Middle Aged , Music , Auditory Perception/physiology , Voice/physiology , Chi-Square Distribution , Japan , Auditory Perception/classification , Speech Perception/physiology , Voice Quality/physiology , Sound Spectrography
3.
Rev. chil. tecnol. méd ; 22(1): 984-992, jul. 2002. tab, graf
Article in Spanish | LILACS | ID: lil-342344

ABSTRACT

Speech audiometry measures the patient's discrimination of spoken language and is important for: topodiagnosis of lesions of the auditory pathway, estimating the subject's communication difficulty in daily life, fitting hearing aids, and detecting malingerers. The word lists used in speech audiometry must meet the following requirements: they must consist of words that are phonetically balanced, phonetically different, familiar, and of equal audibility. In our country, the monosyllabic lists of Rosenblüt, the disyllabic words of Tato, and more recently the lists of Farfán are in use. Because their speech audiometry results are not comparable, the aim of this study was to measure the familiarity and audibility of the lists. To this end, 40 speech audiometries were performed on 40 normal-hearing subjects (only one ear per subject was examined) at the Hospital Clínico de la Universidad Dr. José Joaquín Aguirre under standardized conditions. To measure familiarity, judges were appointed to rate the degree of familiarity of the words in each list; agreement among the judges was significant (Cronbach's alpha), and the three lists were statistically similar in familiarity (Kruskal-Wallis test), although the Farfán list contained fewer unknown words. Regarding audibility, the best-performing lists were those of Dr. Tato and T.M. Farfán, indicating that the best words are disyllables.
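The inter-judge agreement reported above uses Cronbach's alpha, which treats the judges as items: alpha = k/(k-1) · (1 - Σ per-judge variance / variance of row totals). A minimal sketch with hypothetical familiarity ratings; the values are illustrative, not the study's data:

```python
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(matrix):
    """matrix[i][j] is the rating of word i by judge j;
    judges play the role of 'items' in the usual formula."""
    k = len(matrix[0])  # number of judges
    judge_vars = [variance([row[j] for row in matrix]) for j in range(k)]
    total_var = variance([sum(row) for row in matrix])
    return k / (k - 1) * (1 - sum(judge_vars) / total_var)

# Hypothetical familiarity ratings (1-5) of six words by three judges.
ratings = [
    [5, 5, 4],
    [4, 4, 4],
    [2, 3, 2],
    [5, 4, 5],
    [1, 2, 1],
    [3, 3, 3],
]
alpha = cronbach_alpha(ratings)  # close agreement -> alpha near 1
```

High alpha here means the judges rank the words' familiarity consistently, which is what licenses pooling their ratings.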


Subject(s)
Humans , Male , Adolescent , Adult , Female , Audiometry, Speech/methods , Auditory Perception/classification , Auditory Perceptual Disorders/diagnosis , Sound Spectrography , Speech Discrimination Tests/methods
5.
O.R.L.-DIPS ; 27(4): 165-167, nov. 2000. tab
Article in Spanish | IBECS | ID: ibc-5873

ABSTRACT

An increasingly important application of high-frequency audiometry is the monitoring of treatments considered potentially ototoxic. However, establishing high-frequency hearing thresholds remains a difficult task, mainly because of the diversity of the systems employed and of the calibration methods. The aim of this study was to establish high-frequency hearing thresholds as a function of age that can serve as reference parameters. We studied 162 control patients who underwent high-frequency pure-tone audiometry, calculating thresholds for the frequencies between 10 and 20 kHz. The results are presented by age group and suggest, as other authors have found, that hearing thresholds increase with age and with frequency. We highlight environmental pollution and presbycusis among the main etiological factors that would explain this progressive deterioration of hearing capacity over time. (AU)


Subject(s)
Adolescent , Adult , Female , Male , Middle Aged , Humans , Audiometry/methods , Audiometry , Environmental Pollution/adverse effects , Presbycusis/complications , Presbycusis/diagnosis , Presbycusis/epidemiology , Calibration/standards , Auditory Threshold/physiology , Auditory Threshold/classification , Ear Canal , Hearing Disorders/diagnosis , Auditory Perception/classification , Hearing Loss, High-Frequency/physiopathology , Hearing Loss, High-Frequency/prevention & control , Radio Waves , Eustachian Tube/pathology , Eustachian Tube , Eustachian Tube
6.
Proc Inst Mech Eng H ; 214(1): 121-8, 2000.
Article in English | MEDLINE | ID: mdl-10718056

ABSTRACT

Closing clicks from mechanical heart valve prostheses are transmitted to the patient's inner ear mainly in two different ways: as acoustically transmitted sound waves, and as vibrations transmitted through bones and vessels. The purpose of this study was to develop a method for quantifying what patients perceive as sound from their mechanical heart valve prostheses via these two routes. In this study, 34 patients with implanted mechanical bileaflet aortic and mitral valves (St Jude Medical and On-X) were included. Measurements were performed in a specially designed sound insulated chamber equipped with microphones, accelerometers, preamplifiers and a loudspeaker. The closing sounds measured with an accelerometer on the patient's chest were delayed 400 ms, amplified and played back to the patient through the loudspeaker. The patient adjusted the feedback sound to the same level as the 'real-time' clicks he or she perceived directly from his or her valve. In this way the feedback sound energy includes both the air- and the bone-transmitted energies. Sound pressure levels (SPLs) were quantified both in dB(A) and in the loudness unit sone according to ISO 532B (the Zwicker method). The mean air-transmitted SPL measured close to the patient's ear was 23 +/- 4 dB(A). The mean air- and bone-transmitted sounds and vibrations were perceived by the patients as an SPL of 34 +/- 5 dB(A). There was no statistically significant difference in the perceived sound from the two investigated bileaflet valves, and no difference between aortic and mitral valves. The study showed that the presented feedback method is capable of quantifying the perceived sounds and vibrations from mechanical heart valves, if the patient's hearing is not too impaired. Patients with implanted mechanical heart valve prostheses seem to perceive the sound from their valve two to three times higher than nearby persons, because of the additional bone-transmitted vibrations.
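The closing "two to three times" figure follows from the sone scale cited above (ISO 532B): at moderate levels, perceived loudness roughly doubles for every 10 dB increase. A quick check of the 34 vs. 23 dB(A) means, treating that rule of thumb as an approximation:

```python
def loudness_ratio(spl_db_a, spl_db_b):
    """Approximate ratio of perceived loudness between two sound
    pressure levels, using the rule of thumb behind the sone scale:
    loudness roughly doubles per 10 dB at moderate levels."""
    return 2 ** ((spl_db_a - spl_db_b) / 10)

# Mean levels reported in the study: 34 dB(A) perceived by the patient
# (air- plus bone-transmitted) vs. 23 dB(A) air-only near the ear.
ratio = loudness_ratio(34, 23)  # roughly double; with the +/- 4-5 dB
                                # spread, "two to three times" follows
```

The spread of the individual measurements (±4-5 dB) is what widens the point estimate into the paper's "two to three times" range.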


Subject(s)
Auditory Perception/classification , Heart Valve Prosthesis/adverse effects , Noise/prevention & control , Acoustics , Aortic Valve , Bone Conduction/physiology , Feedback/physiology , Fourier Analysis , Hearing Disorders/diagnosis , Hearing Disorders/physiopathology , Hearing Tests , Humans , Mitral Valve , Prosthesis Design , Vibration
7.
Psiquiatr. biol. (Ed. impr.) ; 7(1): 44-48, ene. 2000.
Article in Spanish | IBECS | ID: ibc-11714

ABSTRACT

Musical hallucinations are infrequent and lie at the crossroads of otologic, neurologic, and psychiatric practice. Not enough empirical information is available to establish their diagnostic role. We present two new cases of musical hallucinations. The first occurred in the context of a late-onset psychosis; in the second, the hallucinations appeared acutely in a patient with long-standing bilateral deafness. Musical hallucinations have been described in various clinical situations: hearing loss; brain lesions, vascular processes, and encephalitis; use of psychoactive substances; and psychiatric disorders. Depending on the etiology, the hallucinatory experience may vary in its mode of onset, the familiarity of what is heard, the musical type and genre, the perceived origin and localization, its presentation as a sole symptom or accompanied by other sensoperceptual alterations or other psychiatric symptoms, the way it is experienced, and the degree of insight. Musical hallucinations are an infrequent and complex phenomenon. Clinically, they may be more frequent in women and in old age. Their etiology appears to be linked to deafness and other ear diseases, and to brain lesions predominantly affecting the non-dominant hemisphere. It seems unlikely that factors such as psychosis or personality traits influence the development of most musical hallucinations. This paper presents two new cases that confirm the complexity and richness of musical hallucinations. (AU)


Subject(s)
Adult , Female , Male , Middle Aged , Humans , Hallucinations/complications , Hallucinations/diagnosis , Hallucinations/history , Music/history , Music/psychology , Auditory Perception/classification , Auditory Perception/physiology , Ear Diseases/complications , Hallucinations/epidemiology , Hallucinations/physiopathology
8.
Percept Psychophys ; 60(7): 1141-52, 1998 Oct.
Article in English | MEDLINE | ID: mdl-9821776

ABSTRACT

The present experiments were designed to determine whether memory for the voice in which a word is spoken is retained in a memory system that is separate from episodic memory or, instead, whether episodic memory represents both word and voice information. These two positions were evaluated by assessing the effects of study-to-test changes in voice on recognition memory after a variety of encoding tasks that varied in processing requirements. In three experiments, the subjects studied a list of words produced by six voices. The voice in which the word was spoken during a subsequent explicit recognition test was either the same as or different from the voice used in the study phase. The results showed that word recognition was affected by changes in voice after each encoding condition and that the magnitude of the voice effect was unaffected by the type of encoding task. The results suggest that spoken words are represented in long-term memory as episodic traces that contain talker-specific perceptual information.


Subject(s)
Auditory Perception/classification , Memory/classification , Psycholinguistics , Factor Analysis, Statistical , Female , Humans , Male , Phonation , Phonetics , Psychomotor Performance , Reaction Time , Reference Values , Voice , Word Association Tests
9.
Can J Exp Psychol ; 51(4): 354-68, 1997 Dec.
Article in French | MEDLINE | ID: mdl-9687196

ABSTRACT

We present the neuropsychological study of a patient, I.R., who sustained bilateral damage to the temporal lobes and to the right frontal lobe as a result of successive brain surgeries that occurred ten years earlier. The patient is 40 years old and right-handed; she had no special training in music or in language, representing, therefore, the large majority of listeners. Her performance is compared to that of four neurologically intact subjects who are closely matched in terms of education, sex and age. In the present study, we report I.R.'s performance on various tests aimed at assessing her general cognitive functioning with a particular focus on auditory aspects. The results show that, despite extensive damage to her auditory cortex, I.R.'s speech abilities are essentially intact (see Tables 1 and 2). The only impairments that are detected in the language domain are related to a short-term memory deficit, to some abnormal sensitivity to retroactive interference in long-term memory (see Table 3) and to articulation. These difficulties do not, however, affect linguistic communication, which is obviously undisturbed (I.R. is not aphasic). Similarly, I.R. does not experience any difficulty in the recognition and memorization of familiar sounds such as animal cries, traffic noises and the like (see Tables 5 and 7). In contrast, I.R. is severely impaired in most musical abilities: She can no longer discriminate nor identify melodies that were once highly familiar to her; she can no longer discriminate nor memorize novel melodies (see Table 4). Her pattern of musical losses is compatible with a basic and severe perceptual deficit that compromises access to and registration in memory systems. The observation that the auditory impairment affects music and spares language and environmental sounds refers to a neuropsychological condition that is known as music agnosia. I.R.
represents, to our knowledge, the fourth case of music agnosia available in the literature (Peretz et al., 1994; Griffiths et al., 1997). The existence of such cases suggests that music processing is not mediated by a general-purpose auditory architecture but by specialized cortical subsystems. Not only does I.R. suffer from music agnosia, but she is also impaired in the discrimination and recognition of musical instruments and of human voices (see Table 5). These latter two deficits probably do not result from the music agnosic condition. Rather, they seem to reflect damage to adjacent brain areas that are specialized in timbre processing (see Peretz et al., 1994, for the relevant discussion). It is also worth mentioning that I.R. appears to be impaired in musical expressive abilities as well: I.R. can no longer sing a single note. Thus, her losses are rather general in the musical domain, hence justifying the classification of her case as amusia. Cases of amusia without aphasia are relatively frequent in the neuropsychological literature. However, all of these reported cases are anecdotal. Thus, in the present study, special focus is given to the measurement and direct comparison of performance in the language and music domains; in both domains, task characteristics and materials were as similar as possible. To this aim, the lyrics and the tune of the same popular song excerpts were used. The musical and the spoken parts were presented separately in a primed familiarity decision task and in a memory recognition task. In both situations, I.R. performs at or close to chance when she has to deal with music, whereas she recognizes easily and performs normally on the spoken material (see Tables 6 and 7). These results clearly argue for the autonomy of music and language in the processing of auditory information.


Subject(s)
Agnosia , Auditory Perception , Cognition Disorders/classification , Music/psychology , Perceptual Disorders , Temporal Lobe , Adult , Agnosia/classification , Agnosia/etiology , Aphasia/etiology , Auditory Perception/classification , Auditory Perception/physiology , Cerebral Decortication/adverse effects , Cognition Disorders/etiology , Female , Humans , Neuropsychological Tests , Perceptual Disorders/classification , Perceptual Disorders/etiology , Speech Perception/physiology , Temporal Lobe/physiopathology , Temporal Lobe/surgery
10.
Acta otorrinolaringol ; 8(2): 59-65, oct. 1996. ilus
Article in Spanish | LILACS | ID: lil-193575

ABSTRACT

For a long time, efforts have been made to use more objective diagnostic methods to assess function and symptoms. Audiological studies such as impedance audiometry and evoked potentials set an important standard for auditory exploration and became the first objective methods for assessing the mechanical, sensory, and neural function of the auditory pathway. The low-intensity sounds, or otoacoustic emissions, produced by the human ear have been observed in normal ears and are absent in ears with hearing loss. This provides a method of study that allows us to assess the integrity of the outer hair cells of the organ of Corti. The use of evoked otoacoustic emissions, including transient emissions and distortion products, has gained ground in early hearing detection; in the follow-up of individuals at risk of occupational and non-occupational acoustic trauma; in patients who must undergo treatment with ototoxic drugs; in the topodiagnosis of cochlear and retrocochlear lesions; and, more recently, in the monitoring of cochlear function during neuro-otologic surgery intended to preserve the patient's hearing, among other applications.


Subject(s)
Infant, Newborn , Infant , Child, Preschool , Child , Adolescent , Adult , Humans , Male , Female , Audiometry/methods , Diagnosis , Otoacoustic Emissions, Spontaneous/genetics , Auditory Perception/classification , Audiology/genetics , Otorhinolaryngologic Surgical Procedures
11.
Laryngoscope ; 105(4 Pt 1): 349-53, 1995 Apr.
Article in English | MEDLINE | ID: mdl-7715375

ABSTRACT

Distortion product emission (DPE) growth functions, demographic data, and pure tone thresholds (PTTs) were recorded in 229 normal-hearing and hearing-impaired ears. Half of the data set (115 ears) was used by a discriminant analysis routine to classify DPE and demographic features into either a normal PTT group or an impaired PTT group (PTT greater than 30 dB SPL [sound pressure level]) at six frequencies in the audiometric range. The six discriminant functions developed from this classification process were then used to predict PTT group membership in the remaining 114-ear data set. Frequency-specific prediction accuracy was approximately 85% overall. Of the 45 DPE and demographic variables evaluated, the DPE amplitude associated with an f2 (a primary tone of frequency) of moderate level (50 dB SPL) and a frequency corresponding to PTT was generally most predictive. DPE features associated with frequencies immediately adjacent to the PTT frequency also appear to be useful. DPE level was found to be weakly correlated with subject age; perhaps for this reason, age was frequently included in discriminant functions. This study describes the DPE measures that can most reliably categorize PTTs as normal or impaired in large populations with varied cochlear hearing status.
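The train-on-half, predict-on-half design above can be illustrated with the simplest possible discriminant: for a single DPE-amplitude feature and equal class variances, a linear discriminant function reduces to a threshold at the midpoint of the two class means. This is a sketch of the idea only; the amplitudes below are hypothetical, not the study's data, and the real analysis used up to 45 features:

```python
def train_threshold(normal_amps, impaired_amps):
    """1-D linear discriminant under an equal-variance assumption:
    the decision boundary is the midpoint between the class means."""
    mean_n = sum(normal_amps) / len(normal_amps)
    mean_i = sum(impaired_amps) / len(impaired_amps)
    return (mean_n + mean_i) / 2

def classify(amp, threshold):
    # Higher DPE amplitudes are associated with normal pure tone thresholds.
    return "normal" if amp > threshold else "impaired"

# Hypothetical DPE amplitudes (dB SPL) for the training half of the ears.
normal_train = [8, 10, 12, 9, 11]
impaired_train = [-4, -2, 0, -1, -3]
threshold = train_threshold(normal_train, impaired_train)  # midpoint of 10 and -2

# Held-out half: predict group membership ear by ear.
test_amps = [7, -5, 5, 1]
preds = [classify(a, threshold) for a in test_amps]
```

With several features (amplitudes at adjacent frequencies, age), the same idea generalizes to the multivariate discriminant functions the study derived per audiometric frequency.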


Subject(s)
Auditory Perception/physiology , Auditory Threshold/physiology , Cochlea/physiology , Evoked Potentials, Auditory/physiology , Age Factors , Audiometry, Pure-Tone , Auditory Perception/classification , Auditory Threshold/classification , Discriminant Analysis , Feedback , Female , Forecasting , Hearing/physiology , Hearing Disorders/physiopathology , Humans , Male , Multivariate Analysis , Reproducibility of Results