Results 1 - 20 of 34
1.
J Environ Radioact ; 217: 106193, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32217253

ABSTRACT

Radionuclides released into the atmosphere following the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident were detected by ground-based monitoring stations worldwide. The inter-continental dispersion of radionuclides provides a unique opportunity to evaluate the ability of atmospheric dispersion models to represent the processes controlling their transport and deposition in the atmosphere. Co-located measurements of radioxenon (133Xe) and caesium (137Cs) concentrations enable individual physical processes (dispersion, dry and wet deposition) to be isolated. In this paper we focus on errors in the prediction of 137Cs attributable to the representation of particle size and solubility in the modelling of wet deposition. Simulations of 133Xe and 137Cs concentrations using the UK Met Office NAME (Numerical Atmospheric-dispersion Modelling Environment) model are compared with CTBTO (Comprehensive Nuclear-Test-Ban Treaty Organisation) surface station measurements. NAME predictions of 137Cs using a bulk wet deposition parameterisation (which does not account for particle-size-dependent scavenging or solubility) significantly underestimate observed 137Cs. When a binned wet deposition parameterisation is implemented (which accounts for particle-size-dependent scavenging) the correlations between modelled and observed air concentrations improve at all 9 of the Northern Hemisphere sites studied and the respective RMSLE (root-mean-square log error) decreases by a factor of 7 due to a decrease in the wet deposition of Aitken- and accumulation-mode particles. Finally, NAME simulations were performed in which insoluble submicron particles are represented. Representing insoluble particles in the NAME simulations further improves the RMSLE at all sites by a factor of 7.
Thus NAME is able to predict 137Cs with good accuracy (within a factor of 10 of observed 137Cs values) at distances greater than 10,000 km from FDNPP only if insoluble submicron particles are considered in the description of the source. This result provides further evidence of the presence of insoluble Cs-rich microparticles in the release following the accident at FDNPP and suggests that these small particles travelled across the Pacific Ocean to the US and further across the North Atlantic Ocean towards Europe.
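The RMSLE used above to compare modelled and observed air concentrations can be sketched as follows. This is an illustrative definition (base-10 logs over strictly positive concentrations); the paper does not give its exact formula, and the names are ours:

```python
import numpy as np

def rmsle(modelled, observed):
    """Root-mean-square log error between modelled and observed
    concentrations (both must be strictly positive)."""
    m = np.asarray(modelled, dtype=float)
    o = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((np.log10(m) - np.log10(o)) ** 2)))
```

Under this definition, a model that is everywhere a factor of 10 off has an RMSLE of 1, which is why factor-of-7 reductions in RMSLE correspond to large improvements in predicted concentration.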


Subject(s)
Fukushima Nuclear Accident , Air Pollutants, Radioactive , Atlantic Ocean , Cesium Radioisotopes , Japan , Pacific Ocean , Particle Size , Radiation Monitoring , Solubility , Water Pollutants, Radioactive
2.
J Laryngol Otol ; 128(2): 169-70, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24495455

ABSTRACT

BACKGROUND: It is common for ENT specialists to be called to neonatal intensive care units to assess neonates with suspected laryngomalacia. At Addenbrooke's Hospital, Cambridge, UK, it is standard practice to initially try to assess the larynx whilst the patient is awake. This can cause the patient to cry and become irritable, and can induce worry in the parents. A literature search revealed that numerous procedures have been successfully performed on neonates and infants whilst they were being pacified. OBJECTIVES: This paper describes various procedures where pacification has been used effectively. Furthermore, it reports a pacification technique developed for per-oral flexible laryngoscopy in awake neonates and infants.


Subject(s)
Laryngoscopy/methods , Pacifiers , Humans , Infant , Infant, Newborn , Laryngeal Diseases/diagnosis , Laryngoscopy/instrumentation
3.
Article in English | MEDLINE | ID: mdl-24192705

ABSTRACT

Mask-based objective speech-intelligibility measures have been successfully proposed for evaluating the performance of binary masking algorithms. These objective measures were computed directly by comparing the estimated binary mask against the ground truth ideal binary mask (IdBM). Most of these objective measures, however, assign equal weight to all time-frequency (T-F) units. In this study, we propose to improve the existing mask-based objective measures by weighting each T-F unit according to its target or masker loudness. The proposed objective measure shows significantly better performance than two other existing mask-based objective measures.
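The weighting idea above can be illustrated schematically: instead of counting every time-frequency (T-F) unit equally, each unit's mask disagreement is weighted, e.g. by its target or masker loudness. This is a sketch of the general idea, not the paper's actual measure; the weighting scheme and names are assumptions:

```python
import numpy as np

def weighted_mask_error(est_mask, ideal_mask, weights):
    """Loudness-weighted disagreement between an estimated binary mask
    and the ideal binary mask (IdBM). Each T-F unit contributes in
    proportion to `weights` rather than equally."""
    est = np.asarray(est_mask, dtype=float)
    ideal = np.asarray(ideal_mask, dtype=float)
    w = np.asarray(weights, dtype=float)
    disagree = np.abs(est - ideal)  # 1 where the two masks differ
    return float(np.sum(w * disagree) / np.sum(w))
```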

4.
Eur Arch Otorhinolaryngol ; 269(1): 309-13, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21544658

ABSTRACT

Thyroidectomy has few complications; as a result, many patients' chief concern is the prominence of their scar. Performing thyroid surgery through excessively small incisions in order to maximise cosmesis may increase the likelihood of complications. This study investigates the relationship between conventional-approach thyroidectomy scar length and patient satisfaction. A validation of self-measurement of neck circumference and thyroidectomy scar was carried out, with the measurements taken by patients compared with those taken by an investigator. One hundred consecutive patients who had undergone conventional thyroidectomy and total thyroidectomy within 24 months were invited to measure their scars and neck circumference, and to score their satisfaction on a Likert scale of 1-10. Spearman's correlation was calculated for the relationship between absolute and relative scar length, and patient satisfaction. Thirty-four patients entered the preliminary study and 80 patients entered the main study (80% response rate). Measurements by patients and investigators were closely associated: Spearman's rank correlation coefficients for neck circumference and scar length were ρ = 0.9 (p < 0.0001) and ρ = 0.93 (p < 0.0001), respectively. No significant correlation was evident between scar length and patient satisfaction (ρ = 0.068, p = 0.55), or between relative scar length ratio and patient satisfaction (ρ = -0.045, p = 0.69). Mean scar length was 6.96 cm [standard deviation (SD) 2.70], and mean satisfaction score 8.62 (SD 2.04). Thyroidectomy scar length appears to have no association with patient satisfaction. Thyroid surgery should, therefore, not be performed through unnecessarily small incisions for purely aesthetic reasons.


Subject(s)
Cicatrix/psychology , Patient Satisfaction , Thyroidectomy/psychology , Adolescent , Adult , Aged , Aged, 80 and over , Cicatrix/etiology , Cicatrix/pathology , Female , Humans , Male , Middle Aged , Thyroidectomy/adverse effects , Young Adult
5.
J Acoust Soc Am ; 110(3 Pt 1): 1619-27, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11572371

ABSTRACT

The minimum spectral contrast needed for vowel identification by normal-hearing and cochlear implant listeners was determined in this study. In experiment 1, a spectral modification algorithm was used that manipulated the channel amplitudes extracted from a 6-channel continuous interleaved sampling (CIS) processor to have a 1-10 dB spectral contrast. The spectrally modified amplitudes of eight natural vowels were presented to six Med-El/CIS-Link users for identification. Results showed that subjects required a 4-6 dB contrast to identify vowels with relatively high accuracy. A 4-6 dB contrast was needed independent of the individual subject's dynamic range (range 9-28 dB). Some cochlear implant (CI) users obtained significantly higher scores with vowels enhanced to 6 dB contrast compared to the original, unenhanced vowels, suggesting that spectral contrast enhancement can improve the vowel identification scores for some CI users. To determine whether the minimum spectral contrast needed for vowel identification was dependent on spectral resolution (number of channels available), vowels were processed in experiment 2 through n channels (n = 4, 6, 8, 12), and synthesized as a linear combination of n sine waves with amplitudes manipulated to have a 1-20 dB spectral contrast. For vowels processed through 4 channels, normal-hearing listeners needed a 6 dB contrast; for 6 and 8 channels, a 4 dB contrast was needed, consistent with our findings with CI listeners; and for 12 channels, a 1 dB contrast was sufficient to achieve high accuracy (>80%). The above-mentioned findings with normal-hearing listeners suggest that when the spectral resolution is poor, a larger spectral contrast is needed for vowel identification. Conversely, when the spectral resolution is fine, a small spectral contrast (1 dB) is sufficient.
The high identification score (82%) achieved with 1 dB contrast was significantly higher than any of the scores reported in the literature using synthetic vowels, and this can be attributed to the fact that we used natural vowels which contained duration and spectral cues (e.g., formant movements) present in fluent speech. The outcomes of experiments 1 and 2, taken together, suggest that CI listeners need a larger spectral contrast (4-6 dB) than normal-hearing listeners to achieve high recognition accuracy, not because of the limited dynamic range, but because of the limited spectral resolution.
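The contrast manipulation described above amounts to rescaling a set of channel amplitudes, expressed in dB, so that their peak-to-trough range equals a target value. A minimal sketch, assuming a linear rescaling about the mean channel level (the paper's exact algorithm may differ):

```python
import numpy as np

def set_spectral_contrast(amplitudes_db, target_contrast_db):
    """Rescale channel amplitudes (dB) so the peak-to-trough range
    equals target_contrast_db, preserving the mean level."""
    a = np.asarray(amplitudes_db, dtype=float)
    current = a.max() - a.min()
    if current == 0:
        return a.copy()  # flat spectrum: nothing to rescale
    return (a - a.mean()) * (target_contrast_db / current) + a.mean()
```

For example, channel levels spanning 10 dB can be compressed to a 6 dB contrast (reducing contrast) or expanded to 20 dB (enhancing it) with the same function.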


Subject(s)
Cochlear Implants , Deafness/physiopathology , Deafness/rehabilitation , Hearing , Phonetics , Speech Perception , Adult , Aged , Female , Humans , Male , Middle Aged , Reference Values
6.
IEEE Trans Neural Syst Rehabil Eng ; 9(1): 42-8, 2001 Mar.
Article in English | MEDLINE | ID: mdl-11482362

ABSTRACT

Due to the variability in performance among cochlear implant (CI) patients, it is becoming increasingly important to find ways to optimally fit patients with speech processing strategies. This paper proposes an approach based on neural networks, which can be used to automatically optimize the performance of CI patients. The neural network model is implemented in two stages. In the first stage, a neural network is trained to mimic the CI patient's performance on the vowel identification task. The trained neural network is then used in the second stage to adjust a free parameter to improve vowel recognition performance for each individual patient. The parameter examined in this study was a weighting function applied to the compressed channel amplitudes extracted from a 6-channel continuous interleaved sampling (CIS) strategy. Two types of weighting functions were examined, one which assumed channel interaction, and one which assumed no interaction between channels. Results showed that the neural network models closely matched the performance of five Med-El/CIS-Link implant patients. The resulting weighting functions obtained after neural network training improved vowel performance, with the larger improvement (4%) attained by the weighting function which modeled channel interaction.


Subject(s)
Cochlear Implants , Deafness/physiopathology , Hearing/physiology , Neural Networks, Computer , Speech Perception/physiology , Algorithms , Humans , Sensitivity and Specificity , Task Performance and Analysis
7.
J Acoust Soc Am ; 108(5 Pt 1): 2377-87, 2000 Nov.
Article in English | MEDLINE | ID: mdl-11108378

ABSTRACT

The importance of intensity resolution in terms of the number of intensity steps needed for speech recognition was assessed for normal-hearing and cochlear implant listeners. In experiment 1, the channel amplitudes extracted from a six-channel continuous interleaved sampling (CIS) processor were quantized into 2, 4, 8, 16, or 32 steps. Consonant recognition was assessed for five cochlear implant listeners, using the Med-El/CIS-Link device, as a function of the number of steps in the electrical dynamic range. Results showed that eight steps within the dynamic range are sufficient for reaching asymptotic performance in consonant recognition. These results suggest that amplitude resolution is not a major factor in determining consonant identification. In experiment 2, the relationship between spectral resolution (number of channels) and intensity resolution (number of steps) in normal-hearing listeners was investigated. Speech was filtered through 4-20 frequency bands, synthesized as a linear combination of sine waves with amplitudes extracted from the envelopes of the bandpassed waveforms, and then quantized into 2-32 levels to produce stimuli with varying degrees of intensity resolution. Results showed that the number of steps needed to achieve asymptotic performance was a function of the number of channels and the speech material used. For vowels, asymptotic performance was obtained with four steps, while for consonants, eight steps were needed for most channel conditions, consistent with our findings in experiment 1. For sentences processed through 4 channels, 16 steps were needed to reach asymptotic performance, while for sentences processed through 16 channels, 4 steps were needed. The results with normal-hearing listeners on sentence recognition point to an inverse relationship between spectral resolution and intensity resolution.
When spectral resolution is poor (i.e., a small number of channels is available) a relatively fine intensity resolution is needed to achieve high levels of understanding. Conversely, when the intensity resolution is poor, a high degree of spectral resolution is needed to achieve asymptotic performance. The results of this study, taken together with previous findings on the effect of reduced dynamic range, suggest that the performance of cochlear implant subjects is primarily limited by the small number (four to six) of channels received, and not by the small number of intensity steps or reduced dynamic range.
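The quantization manipulation above replaces each channel amplitude with the nearest of N discrete levels spanning the dynamic range. A minimal sketch (the level spacing is assumed uniform in dB for illustration; the study's exact scheme is not specified here):

```python
import numpy as np

def quantize_amplitudes(amps_db, n_steps, lo_db, hi_db):
    """Quantize amplitudes (dB) onto n_steps discrete levels spanning
    the dynamic range [lo_db, hi_db]."""
    levels = np.linspace(lo_db, hi_db, n_steps)
    a = np.clip(np.asarray(amps_db, dtype=float), lo_db, hi_db)
    # index of the nearest level for each amplitude
    idx = np.argmin(np.abs(a[:, None] - levels[None, :]), axis=1)
    return levels[idx]
```

With n_steps = 2 every channel amplitude collapses to the floor or ceiling of the range; with 32 steps the quantization is nearly transparent, which is the continuum the experiments traverse.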


Subject(s)
Cochlea/physiopathology , Hearing Aids , Hearing Loss, Sensorineural/rehabilitation , Speech Perception/physiology , Acoustic Stimulation/instrumentation , Adult , Aged , Equipment Design , Female , Hearing/physiology , Humans , Male , Middle Aged , Models, Biological , Phonetics
8.
J Acoust Soc Am ; 108(2): 790-802, 2000 Aug.
Article in English | MEDLINE | ID: mdl-10955646

ABSTRACT

This study investigated the effect of five speech processing parameters, currently employed in cochlear implant processors, on speech understanding. Experiment 1 examined speech recognition as a function of stimulation rate in six Med-El/CIS-Link cochlear implant listeners. Results showed that higher stimulation rates (2100 pulses/s) produced a significantly higher performance on word and consonant recognition than lower stimulation rates (<800 pulses/s). The effect of stimulation rate on consonant recognition was highly dependent on the vowel context. The largest benefit was noted for consonants in the /uCu/ and /iCi/ contexts, while the smallest benefit was noted for consonants in the /aCa/ context. This finding suggests that the /aCa/ consonant test, which is widely used today, is not sensitive enough to parametric variations of implant processors. Experiment 2 examined vowel and consonant recognition as a function of pulse width for low-rate (400 and 800 pps) implementations of the CIS strategy. For the 400-pps condition, wider pulse widths (208 µs/phase) produced significantly higher performance on consonant recognition than shorter pulse widths (40 µs/phase). Experiments 3-5 examined vowel and consonant recognition as a function of the filter overlap in the analysis filters, shape of the amplitude mapping function, and signal bandwidth. Results showed that the amount of filter overlap (ranging from -20 to -60 dB/oct) and the signal bandwidth (ranging from 6.7 to 9.9 kHz) had no effect on phoneme recognition. The shape of the amplitude mapping functions (ranging from strongly compressive to weakly compressive) had only a minor effect on performance, with the lowest performance obtained for nearly linear mapping functions. Of the five speech processing parameters examined in this study, the pulse rate and the pulse width had the largest (positive) effect on speech recognition.
For a fixed pulse width, higher rates (2100 pps) of stimulation provided a significantly better performance on word recognition than lower rates (<800 pps) of stimulation. High performance was also achieved by jointly varying the pulse rate and pulse width. The above results indicate that audiologists can optimize the implant listener's performance either by increasing the pulse rate or by jointly varying the pulse rate and pulse width.
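The "strongly compressive to weakly compressive" amplitude mapping functions varied above can be illustrated with a simple power-law map from normalized acoustic amplitude to the electric range. This is a stand-in for illustration only (implant processors commonly use logarithmic maps; the exponent and names here are assumptions):

```python
import numpy as np

def amplitude_map(x, p, e_min, e_max):
    """Map a normalized acoustic amplitude x in [0, 1] onto the electric
    range [e_min, e_max] with a power law. Small p is strongly
    compressive; p = 1 is linear (nearly uncompressed)."""
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return e_min + (e_max - e_min) * x ** p
```

A compressive map (p < 1) boosts low-level inputs relative to the linear map, which is one way to think about why nearly linear mappings gave the lowest performance.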


Subject(s)
Cochlear Implantation , Speech Perception/physiology , Adult , Aged , Deafness/surgery , Female , Humans , Male , Middle Aged , Phonetics
9.
Ear Hear ; 21(1): 25-31, 2000 Feb.
Article in English | MEDLINE | ID: mdl-10708071

ABSTRACT

OBJECTIVE: To determine the effect of reduced dynamic range on speech understanding when the speech signals are processed in a manner similar to a 6-channel cochlear implant speech processor. DESIGN: Signals were processed in a manner similar to a 6-channel cochlear implant processor and output as a sum of sine waves with frequencies equal to the center frequencies of the analysis filters. The amplitudes of the sine waves were compressed in a systematic fashion to simulate the effect of reduced dynamic range. The compressed signals were presented to 10 normal-hearing listeners for identification. RESULTS: There was a significant effect of compression for all test materials. The effect of the compression on speech understanding was different for the three test materials (vowels, consonants, and sentences). Vowel recognition was affected the most by the compression, and consonant recognition was affected the least by the compression. Feature analysis indicated that the reception of place information was affected the most. Sentence recognition was moderately affected by the compression. CONCLUSIONS: Dynamic range should affect the speech perception abilities of cochlear implant users. Our results suggest that a relatively wide dynamic range is needed for a high level of vowel recognition and a relatively small dynamic range is sufficient to maintain consonant recognition. We infer from this outcome that, if other factors were held equal, an implant patient with a small dynamic range could achieve moderately high scores on tests of consonant recognition but poor performance on vowel recognition, and that it is more likely for an implant patient with a large dynamic range to obtain high scores on vowel recognition than for an implant patient with a small dynamic range.
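The systematic compression described in the DESIGN section can be sketched as a linear remapping, in dB, of a wide input dynamic range onto a narrower output range with the peak level held fixed. This is an illustrative implementation; the study's exact compression rule is not given here:

```python
import numpy as np

def compress_dynamic_range(amps_db, in_range_db, out_range_db):
    """Compress sine-wave amplitudes (dB) so an input dynamic range of
    in_range_db maps onto the narrower out_range_db, keeping the peak
    level fixed."""
    a = np.asarray(amps_db, dtype=float)
    peak = a.max()
    a = np.clip(a, peak - in_range_db, peak)      # limit to input range
    return peak - (peak - a) * (out_range_db / in_range_db)
```

For example, a 30 dB input range squeezed into 10 dB moves a component 30 dB below the peak up to only 10 dB below it, discarding much of the level variation that carries vowel (place) information.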


Subject(s)
Cochlear Implants , Signal Processing, Computer-Assisted , Speech Perception , Humans
11.
Ann Otol Rhinol Laryngol Suppl ; 185: 67-8, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11141010

ABSTRACT

To assess whether more channels are needed to understand speech in noise than in quiet, we processed speech in a manner similar to that of spectral peak-like cochlear implant processors and presented it at a +2-dB signal-to-noise ratio to normal-hearing listeners for identification. The number of analysis filters varied from 8 to 16, and the number of maximum channel amplitudes selected in each cycle varied from 2 to 16. The results show that more channels are needed to understand speech in noise than in quiet, and that high levels of speech understanding can be achieved with 12 channels. Selecting more than 12 channel amplitudes out of 16 channels did not yield significant improvements in recognition performance.
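The spectral-peak-like ("n-of-m") processing described above selects, in each analysis cycle, the largest channel amplitudes and discards the rest. A minimal sketch of one cycle (names are ours):

```python
import numpy as np

def select_maxima(channel_amps, n_select):
    """n-of-m channel selection: keep the n_select largest channel
    amplitudes in this analysis cycle and zero the rest."""
    a = np.asarray(channel_amps, dtype=float)
    keep = np.argsort(a)[-n_select:]   # indices of the largest channels
    out = np.zeros_like(a)
    out[keep] = a[keep]
    return out
```

The experiment varies both m (8-16 analysis filters) and n (2-16 selected maxima); the finding is that beyond 12 selected channels the extra amplitudes add little.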


Subject(s)
Cochlear Implants , Hearing/physiology , Speech Perception , Adult , Equipment Design , Humans , Middle Aged , Noise , Signal Processing, Computer-Assisted
12.
Ear Hear ; 21(6): 590-6, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11132785

ABSTRACT

OBJECTIVE: The aims of this study were 1) to determine the number of channels of stimulation needed by normal-hearing adults and children to achieve a high level of word recognition and 2) to compare the performance of normal-hearing children and adults listening to speech processed into 6 to 20 channels of stimulation with the performance of children who use the Nucleus 22 cochlear implant. DESIGN: In Experiment 1, the words from the Multisyllabic Lexical Neighborhood Test (MLNT) were processed into 6 to 20 channels and output as the sum of sine waves at the center frequencies of the analysis bands. The signals were presented to normal-hearing adults and children for identification. In Experiment 2, the wideband recordings of the MLNT words were presented to early-implanted and late-implanted children who used the Nucleus 22 cochlear implant. RESULTS: Experiment 1: Normal-hearing children needed more channels of stimulation than adults to recognize words. Ten channels allowed 99% correct word recognition for adults; 12 channels allowed 92% correct word recognition for children. Experiment 2: The average level of intelligibility for both early- and late-implanted children was equivalent to that found for normal-hearing adults listening to four to six channels of stimulation. The best intelligibility for implanted children was equivalent to that found for normal-hearing adults listening to six channels of stimulation. The distribution of scores for early- and late-implanted children differed. Nineteen percent of the late-implanted children achieved scores below that allowed by a 6-channel processor. None of the early-implanted children fell into this category. CONCLUSIONS: The average implanted child must deal with a signal that is significantly degraded. This is likely to prolong the period of language acquisition. The period could be significantly shortened if implants were able to deliver at least eight functional channels of stimulation.
Twelve functional channels of stimulation would provide signals near the intelligibility of wideband signals in quiet.


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Speech Perception/physiology , Adult , Child, Preschool , Deafness/physiopathology , Equipment Design , Humans , Speech Intelligibility
13.
Dev Neuropsychol ; 18(2): 187-212, 2000.
Article in English | MEDLINE | ID: mdl-11280964

ABSTRACT

Elementary and junior high school children (n = 13), who were diagnosed with nonorganic failure to thrive (FTT) as infants and toddlers, were compared with a normal control group (n = 14) on visual event-related potentials (ERPs) elicited during a primed lexical decision task. Positive stimuli were real words that were identical to the priming stimuli; negative stimuli were nonpronounceable letter strings. Although the groups did not differ in word-list reading level, the former FTT group had slower reaction (decision) times and did not show ERP evidence of priming in the N400 epoch. Anterior sites yielded better separation of the real words and letter strings than posterior sites. A late anterior component between 500 and 650 msec post-stimulus onset showed the largest condition effect for both groups. The control group had a larger negative-going late anterior component to words than the FTT group. The combined reaction time and ERP findings point to less automatized word recognition in the FTT group.


Subject(s)
Cognition , Developmental Disabilities/physiopathology , Developmental Disabilities/psychology , Evoked Potentials, Visual , Failure to Thrive , Reading , Adolescent , Age Factors , Brain/physiopathology , Case-Control Studies , Child , Electroencephalography , Event-Related Potentials, P300 , Failure to Thrive/physiopathology , Failure to Thrive/psychology , Female , Follow-Up Studies , Humans , Infant , Male , Reaction Time , Word Association Tests
14.
Integr Physiol Behav Sci ; 35(4): 284-97, 2000.
Article in English | MEDLINE | ID: mdl-11330492

ABSTRACT

Sixty-five subjects, ages 8 to 12, participated in a visual electrophysiological study. Twenty-two of the subjects had received a diagnosis of nonorganic failure-to-thrive (FTT) before the age of three. The remaining 43 subjects had no history of FTT and served as Controls. IQs were obtained with the abbreviated WISC-III, and the Controls were split into two groups, LO IQ and HI IQ, to provide a LO IQ Control group with an average IQ equivalent to the FTT group. Event-related brain potentials (ERPs) were recorded from five scalp locations during a cued continuous performance task (CPT). Subjects had to press a button every time they saw the letter "X" following the letter "A" (50 targets out of 400 stimuli). During the CPT, the FTT subjects made marginally more errors of omission to targets than the LO IQ Control group and significantly more errors of omission than the HI IQ Control subjects. The groups did not differ significantly on errors of commission (false alarms) or reaction times to targets. ERP averages revealed a group difference in amplitude in a late slow wave for the 50 non-X stimuli (false targets) that followed the letter A. This difference was greatest over frontal sites, where the FTT group had a more negative-going slow wave than each control group. Late frontal negativity to No-Go stimuli has been linked with post-decisional processing, notably in young children. Thus, the FTT subjects may have less efficient inhibitory processes, reflected by additional late frontal activation.


Subject(s)
Failure to Thrive/physiopathology , Child , Electroencephalography , Electrophysiology , Event-Related Potentials, P300/physiology , Evoked Potentials/physiology , Female , Humans , Intelligence Tests , Male , Memory/physiology , Neuropsychological Tests , Psychomotor Performance/physiology , Racial Groups , Sex Characteristics , Wechsler Scales
15.
J Acoust Soc Am ; 106(4 Pt 1): 2097-103, 1999 Oct.
Article in English | MEDLINE | ID: mdl-10530032

ABSTRACT

Recent studies have shown that high levels of speech understanding could be achieved when the speech spectrum was divided into four channels and then reconstructed as a sum of four noise bands or sine waves with frequencies equal to the center frequencies of the channels. In these studies speech understanding was assessed using sentences produced by a single male talker. The aim of experiment 1 was to assess the number of channels necessary for a high level of speech understanding when sentences were produced by multiple talkers. In experiment 1, sentences produced by 135 different talkers were processed through n channels (2 ≤ n ≤ 16), synthesized as a sum of n sine waves with frequencies equal to the center frequencies of the filters, and presented to normal-hearing listeners for identification. A minimum of five channels was needed to achieve a high level (90%) of speech understanding. Asymptotic performance was achieved with eight channels, at least for the speech material used in this study. The outcome of experiment 1 demonstrated that the number of channels needed to reach asymptotic performance varies as a function of the recognition task and/or need for listeners to attend to fine phonetic detail. In experiment 2, sentences were processed through 6 and 16 channels and quantized into a small number of steps. The purpose of this experiment was to investigate whether listeners use across-channel differences in amplitude to code frequency information, particularly when speech is processed through a small number of channels. For sentences processed through six channels there was a significant reduction in speech understanding when the spectral amplitudes were quantized into a small number (<8) of steps. High levels (92%) of speech understanding were maintained for sentences processed through 16 channels and quantized into only 2 steps.
The findings of experiment 2 suggest an inverse relationship between the importance of spectral amplitude resolution (number of steps) and spectral resolution (number of channels).
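The synthesis step used throughout these vocoder experiments, one sinusoid per channel at the channel's center frequency, amplitude-modulated by that channel's envelope, can be sketched as follows (a minimal illustration given precomputed envelopes; filterbank design and envelope extraction are omitted):

```python
import numpy as np

def sine_vocoder(envelopes, center_freqs, fs):
    """Sum of sine waves vocoder: envelopes has shape
    (n_channels, n_samples); center_freqs gives one carrier frequency
    (Hz) per channel; fs is the sampling rate (Hz)."""
    env = np.asarray(envelopes, dtype=float)
    t = np.arange(env.shape[1]) / fs
    out = np.zeros(env.shape[1])
    for e, f in zip(env, center_freqs):
        out += e * np.sin(2 * np.pi * f * t)  # modulated carrier
    return out
```

Varying the number of rows in `envelopes` is exactly the channel-count manipulation these studies use; quantizing the envelope values before synthesis reproduces the step-count manipulation.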


Subject(s)
Acoustic Stimulation/instrumentation , Cognition/physiology , Speech Perception/physiology , Adult , Equipment Design , Humans , Male , Models, Biological
16.
J Appl Physiol (1985) ; 87(2): 530-7, 1999 Aug.
Article in English | MEDLINE | ID: mdl-10444609

ABSTRACT

Approximate entropy (ApEn) is a statistic that quantifies regularity in time series data, and this parameter has several features that make it attractive for analyzing physiological systems. In this study, ApEn was used to detect nonlinearities in the heart rate (HR) patterns of 12 low-risk human fetuses between 38 and 40 wk of gestation. The fetal cardiac electrical signal was sampled at a rate of 1,024 Hz by using Ag-AgCl electrodes positioned across the mother's abdomen, and fetal R waves were extracted by using adaptive signal processing techniques. To test for nonlinearity, ApEn for the original HR time series was compared with ApEn for three dynamic models: temporally uncorrelated noise, linearly correlated noise, and linearly correlated noise with nonlinear distortion. Each model had the same mean and SD in HR as the original time series, and one model also preserved the Fourier power spectrum. We estimated that noise accounted for 17.2-44.5% of the total between-fetus variance in ApEn. Nevertheless, ApEn for the original time series data still differed significantly from ApEn for the three dynamic models for both group comparisons and individual fetuses. We concluded that the HR time series, in low-risk human fetuses, could not be modeled as temporally uncorrelated noise, linearly correlated noise, or static filtering of linearly correlated noise.
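The ApEn statistic used above follows a standard construction: count, for each length-m template in the series, the fraction of templates within tolerance r (Chebyshev distance), and compare the averages of the log-counts for m and m+1. A compact sketch (here r is an absolute tolerance; in practice it is often set to a fraction, e.g. 0.2, of the series SD):

```python
import numpy as np

def approx_entropy(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r) of a 1-D time series x."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def phi(m):
        # all overlapping length-m template vectors
        templ = np.array([x[i:i + m] for i in range(n - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-matches included)
        c = np.mean(dist <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)
```

Perfectly regular series give ApEn near zero; irregular (e.g. noisy) series give larger values, which is what makes the statistic useful for testing the dynamic models described above.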


Subject(s)
Fetus/physiology , Heart Rate/physiology , Algorithms , Electrocardiography , Female , Gestational Age , Humans , Linear Models , Nonlinear Dynamics , Pregnancy , Pregnancy Trimester, Third , Risk Factors , Ultrasonography, Prenatal
17.
Dev Psychobiol ; 35(1): 25-34, 1999 Jul.
Article in English | MEDLINE | ID: mdl-10397893

ABSTRACT

Homeostasis is maintained primarily by the parasympathetic nervous system and is thought to provide a physiological substrate for the development of complex behaviors. This investigation was undertaken to test the hypothesis that infants with high parasympathetic tone are more efficient regulators of homeostasis than infants with low parasympathetic tone. Respiratory sinus arrhythmia (RSA) was used as a measure of parasympathetic tone, and the efficiency of homeostatic control was quantified, for each infant, by the slope (SRSA) and correlation coefficient (RRSA) of the regression line relating fluctuations in heart period and fluctuations in RSA. To test our hypothesis, we examined the relationship between RSA and both SRSA and RRSA in 34 low-risk human fetuses between 36 and 40 weeks gestation. We found that fetuses who were parasympathetic-dominated had larger SRSA and RRSA values, and hence were more efficient regulators of homeostasis, than fetuses who were sympathetic-dominated. The results of our analyses are important because they establish, very early in development, a physiological basis for the relationship between vagal tone and the development of complex behaviors.
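The SRSA and RRSA indices above are the slope and correlation coefficient of a regression line relating heart-period fluctuations to RSA fluctuations. A minimal sketch (the abstract does not state which variable is regressed on which; here heart period is arbitrarily treated as the dependent variable):

```python
import numpy as np

def homeostasis_indices(heart_period, rsa):
    """Slope (SRSA) and correlation coefficient (RRSA) of the
    regression of heart-period fluctuations on RSA fluctuations."""
    hp = np.asarray(heart_period, dtype=float)
    rs = np.asarray(rsa, dtype=float)
    slope = np.polyfit(rs, hp, 1)[0]      # least-squares slope
    r = np.corrcoef(rs, hp)[0, 1]         # Pearson correlation
    return slope, r
```

Larger slope and correlation values, under this reading, indicate heart period tracking RSA more tightly, i.e. more efficient homeostatic regulation.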


Subject(s)
Arousal/physiology , Embryonic and Fetal Development/physiology , Homeostasis/physiology , Vagus Nerve/physiology , Female , Heart Rate/physiology , Humans , Infant, Newborn , Male , Parasympathetic Nervous System/physiology , Pregnancy , Pregnancy Trimester, Third , Reference Values
19.
IEEE Eng Med Biol Mag ; 18(1): 32-42, 1999.
Article in English | MEDLINE | ID: mdl-9934598

ABSTRACT

Cochlear implants have been very successful in restoring partial hearing to profoundly deaf people. Many individuals with implants are now able to communicate and understand speech without lip-reading, and some are able to talk over the phone. Children with implants can develop spoken-language skills and attend normal schools (i.e., schools with normal-hearing children). The greatest benefits with cochlear implantation have occurred in patients who (1) acquired speech and language before their hearing loss, and (2) have shorter duration of deafness. Gradual, but steady, improvements in speech production and speech perception have also occurred in prelingually deafened adults or children.


Subject(s)
Cochlear Implants , Deafness/rehabilitation , Hearing Loss/rehabilitation , Adult , Age of Onset , Audiology , Child , Deafness/epidemiology , Electrodes , Hearing Loss/diagnosis , Humans , Prosthesis Design , Signal Processing, Computer-Assisted , Speech Perception
20.
J Acoust Soc Am ; 104(6): 3583-5, 1998 Dec.
Article in English | MEDLINE | ID: mdl-9857516

ABSTRACT

Sentences were processed through simulations of cochlear-implant signal processors with 6, 8, 12, 16, and 20 channels and were presented to normal-hearing listeners at +2 dB S/N and at -2 dB S/N. The signal-processing operations included bandpass filtering, rectification, and smoothing of the signal in each band, estimation of the rms energy of the signal in each band (computed every 4 ms), and generation of sinusoids with frequencies equal to the center frequencies of the bands and amplitudes equal to the rms levels in each band. The sinusoids were summed and presented to listeners for identification. At issue was the number of channels necessary to reach maximum performance on tests of sentence understanding. At +2 dB S/N, the performance maximum was reached with 12 channels of stimulation. At -2 dB S/N, the performance maximum was reached with 20 channels of stimulation. These results, in combination with the outcome that in quiet, asymptotic performance is reached with five channels of stimulation, demonstrate that more channels are needed in noise than in quiet to reach a high level of sentence understanding and that, as the S/N becomes poorer, more channels are needed to achieve a given level of performance.


Subject(s)
Cochlear Implants , Hearing/physiology , Noise/adverse effects , Speech Perception/physiology , Acoustic Stimulation/instrumentation , Adult , Equipment Design , Humans , Middle Aged