1.
Sleep Breath ; 26(1): 215-224, 2022 03.
Article in English | MEDLINE | ID: mdl-33956293

ABSTRACT

PURPOSE: The effect of snoring on the bed partner can be studied by having the bed partner or unspecialized raters evaluate in situ sound recordings as a proxy for real-life snoring perception. The aim was to characterize perceptual snore events through acoustical features in patients with obstructive sleep apnea (OSA) with an advanced mandibular position. METHODS: Thirty-minute sound samples from 29 patients with OSA were retrieved from overnight, in-home recordings of a study to validate the MATRx plus® dynamic mandibular advancement system. Three unspecialized raters identified sound events, classified them as noise, snore, or breathing, and provided ratings for classification certainty and annoyance. Data were analyzed with respect to respiratory phase and annoyance. RESULTS: When perceptual events were subdivided by respiratory phase, the log-transformed Mean Power, Spectral Centroid, and Snore Factor differed significantly between event types, although not substantially for the Spectral Centroid. Variability within event types was high, and the distributions suggested the presence of subpopulations. A general linear model (GLM) showed a significant patient effect. Inspiration segments occurred in 65% of snore events and expiration segments in 54%. Annoyance correlated with the logarithm of Mean Power (r = 0.48) and with the Snore Factor (r = 0.46). CONCLUSION: Perceptual sound events identified by non-experts contain a non-negligible mixture of expiration and inspiration phases, making their characterization through acoustical features complex. The present study reveals that subpopulations may exist and that patient-specific features need to be introduced.
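The abstract does not define its acoustical features; under common formulations, the log mean power and spectral centroid of a sound frame can be sketched as follows (Python with NumPy; the formulas are assumptions, not the authors' exact feature set):

```python
import numpy as np

def log_mean_power(frame):
    """Base-10 log of the mean squared amplitude of an audio frame."""
    return np.log10(np.mean(frame ** 2))

def spectral_centroid(frame, fs):
    """Magnitude-weighted mean frequency of the frame's spectrum, in Hz."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.sum(freqs * mags) / np.sum(mags)

# Sanity check on a pure 100 Hz tone: the centroid sits at 100 Hz, and
# the mean power of a unit sinusoid is 0.5, so its log is about -0.30.
fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 100 * t)
```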


Subject(s)
Acoustics , Sleep Apnea, Obstructive/complications , Sleep Apnea, Obstructive/therapy , Snoring/diagnosis , Snoring/etiology , Sound , Adult , Female , Humans , Male , Middle Aged
2.
Sleep Breath ; 26(1): 75-80, 2022 03.
Article in English | MEDLINE | ID: mdl-33797031

ABSTRACT

PURPOSE: The perceptual burden and social nuisance, mainly for the co-sleeper, can affect the relationship between snorer and bed partner. Mandibular advancement devices (MAD) are commonly recommended to treat sleep-related breathing disorders such as snoring or sleep apnea. There is no consensus on the definition of snoring, particularly with a MAD in place, although such a definition is essential for assessing treatment effectiveness. We aimed to establish a notion of perceptual snoring with a MAD in place. METHODS: Sound samples, each 30 min long, were recorded during in-home, overnight, automatic mandibular repositioning titration studies in 29 patients with obstructive sleep apnea syndrome (OSAS) from a clinical trial carried out to validate the MATRx plus. Three unspecialized, calibrated raters identified sound events, classified them as noise, snore, or breathing, and provided scores for classification certainty and annoyance. Data were analyzed with respect to expiration-inspiration, duration, annoyance, and classification certainty. RESULTS: Interrater agreement was high (Fleiss' kappa > 0.80), as was the correlation of event durations between raters (> 0.90). Among all breath sounds, snores accounted for 55.6% (N = 6398), breathing sounds for 31.7% (N = 3652), and noise for 9.3% (N = 1072). Inspiration occurred in 88.3% of events, and 96.8% contained at least one expiration phase. Snore and breath events had similar durations: 2.58 s (sd 1.43) and 2.41 s (sd 1.22), respectively. On a VAS from zero to ten, annoyance was highest for snore events (8.00, sd 0.98) and lowest for breathing events (4.90, sd 1.92). CONCLUSION: Perceptual sound events can be a basis for analysis in a psychosocial context. Perceived snoring occurs during both expiration and inspiration. A substantial amount of snoring remains despite repositioning of the mandible aimed at reducing the AHI and ODI.
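Fleiss' kappa, used above to quantify interrater agreement, generalizes Cohen's kappa to more than two raters. A minimal sketch of its standard computation (not the study's analysis code):

```python
import numpy as np

def fleiss_kappa(counts):
    """Fleiss' kappa for an (items x categories) matrix of rating counts.

    counts[i, j] = number of raters assigning item i to category j;
    every item must be rated by the same number of raters n.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                    # raters per item
    p_j = counts.sum(axis=0) / counts.sum()      # overall category proportions
    P_i = (np.sum(counts ** 2, axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.sum(p_j ** 2)    # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three raters, perfect agreement on four items across three categories
# (e.g. noise / snore / breathing) gives kappa = 1.
perfect = [[3, 0, 0], [0, 3, 0], [0, 0, 3], [3, 0, 0]]
```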


Subject(s)
Respiratory Sounds , Severity of Illness Index , Sleep Apnea, Obstructive/physiopathology , Snoring/physiopathology , Female , Humans , Male , Polysomnography , Respiration , Sound Spectrography
3.
J Clin Sleep Med ; 18(3): 911-919, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-34747691

ABSTRACT

STUDY OBJECTIVES: Oral appliance therapy is not commonly used to treat obstructive sleep apnea due to inconsistent efficacy and a lack of established configuration procedures. Both problems may be overcome by information gathered while repositioning the mandible during sleep. The purpose of this investigation was to determine whether an unattended sleep study with a mandibular positioner can predict therapeutic success and an efficacious mandibular position, to assess the contribution of artificial intelligence analytics to such a system, and to evaluate symptom resolution using an objective titration approach. METHODS: Fifty-eight individuals with obstructive sleep apnea underwent an unattended sleep study with an auto-adjusting mandibular positioner, followed by fitting of a custom oral appliance. Therapeutic outcome was assessed by the 4% oxygen desaturation index (ODI), with therapeutic success defined as ODI < 10 h⁻¹. Outcome was prospectively predicted by an artificial intelligence system and by a heuristic, rule-based method. An efficacious mandibular position was also prospectively predicted by the test. Data on obstructive sleep apnea symptom resolution were collected 6 months after initiation of oral appliance therapy. RESULTS: The artificial intelligence method had significantly higher predictive accuracy (sensitivity: 0.91, specificity: 1.00) than the heuristic method (P = .016). The predicted efficacious mandibular position was associated with therapeutic success in 83% of responders. Appliances titrated on the basis of the oxygen desaturation index effectively resolved obstructive sleep apnea symptoms. CONCLUSIONS: The MATRx plus device provides an accurate means of predicting the outcome of oral appliance therapy in the home environment and offers a replacement for blind titration of oral appliances.
CLINICAL TRIAL REGISTRATION: Registry: ClinicalTrials.gov; Name: Predictive Accuracy of MATRx plus in Identifying Favorable Candidates for Oral Appliance Therapy; Identifier: NCT03217383; URL: https://clinicaltrials.gov/ct2/show/NCT03217383. CITATION: Mosca EV, Bruehlmann S, Zouboules SM, et al. In-home mandibular repositioning during sleep using MATRx plus predicts outcome and efficacious positioning for oral appliance treatment of obstructive sleep apnea. J Clin Sleep Med. 2022;18(3):911-919.
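The reported sensitivity and specificity follow their standard definitions over the binary outcome (success = ODI below threshold). A minimal sketch with hypothetical toy data, not the trial's results:

```python
def sens_spec(predicted, actual):
    """Sensitivity and specificity of a binary success prediction.

    True = therapeutic success (e.g. ODI below the study's threshold).
    """
    pairs = list(zip(predicted, actual))
    tp = sum(1 for p, a in pairs if p and a)          # predicted and observed success
    fn = sum(1 for p, a in pairs if not p and a)      # missed success
    tn = sum(1 for p, a in pairs if not p and not a)  # correctly predicted failure
    fp = sum(1 for p, a in pairs if p and not a)      # falsely predicted success
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy data (not the trial's): predictions vs. observed outcomes.
pred = [True, True, True, False, False, True]
act  = [True, True, True, False, False, False]
sens, spec = sens_spec(pred, act)
```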


Subject(s)
Mandibular Advancement , Sleep Apnea, Obstructive , Artificial Intelligence , Humans , Mandible , Mandibular Advancement/methods , Sleep , Sleep Apnea, Obstructive/therapy , Treatment Outcome
4.
Brain Lang ; 190: 1-9, 2019 03.
Article in English | MEDLINE | ID: mdl-30616147

ABSTRACT

Attention is crucial to speech comprehension in real-world, noisy environments. Selective phase-tracking between low-frequency brain dynamics and the envelope of target speech is a proposed mechanism for rejecting competing distractors. Studies have supported this theory in the case of a single distractor but have not considered how tracking is systematically affected by varying distractor set size. We recorded electroencephalography (EEG) during selective listening to both natural and vocoded speech as the distractor set size varied from two to six voices. Increasing set size reduced performance and attenuated EEG tracking of target speech. Further, we found that intrusions of distractor speech into perception were not accompanied by sustained tracking of the distractor stream. Our results support the theory that tracking of speech dynamics is a mechanism for selective attention, and that the mechanism of distraction is not simple stimulus-driven capture or sustained entrainment of auditory mechanisms by the acoustics of distracting speech.
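One common way to quantify the envelope tracking described above is to correlate low-frequency neural activity with the speech amplitude envelope. A sketch assuming an FFT-based Hilbert envelope and Pearson correlation as the tracking measure (not the authors' exact pipeline):

```python
import numpy as np

def envelope(x):
    """Amplitude envelope as the magnitude of the analytic signal
    (FFT-based Hilbert transform, like scipy.signal.hilbert)."""
    N = len(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0      # double the positive frequencies
    if N % 2 == 0:
        h[N // 2] = 1.0          # Nyquist bin kept once for even N
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def tracking(neural, env):
    """Pearson correlation between a neural channel and a speech envelope."""
    a = neural - np.mean(neural)
    b = env - np.mean(env)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A slowly amplitude-modulated carrier: the recovered envelope should
# match the true 3 Hz modulation almost exactly.
fs = 2000
t = np.arange(fs) / fs
true_env = 1 + 0.5 * np.sin(2 * np.pi * 3 * t)
sig = true_env * np.sin(2 * np.pi * 200 * t)
```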


Subject(s)
Attention/physiology , Speech Acoustics , Speech Perception/physiology , Brain/physiology , Electroencephalography , Female , Humans , Male , Young Adult
5.
Neuropsychologia ; 121: 58-68, 2018 12.
Article in English | MEDLINE | ID: mdl-30385119

ABSTRACT

Speech is perceived as a continuous stream of words despite consisting of a discontinuous, quasi-periodic signal of interleaved sounds and silences. Speech perception is surprisingly robust to interference by interruption; however, speech in which segments are replaced by gaps of silence is difficult to understand. When those silences are filled with noise, the speech is once again perceived as continuous, even when the underlying speech sounds are removed completely; this phenomenon is known as phonemic restoration. Perception of normal speech is accompanied by robust phase-locking of EEG signals to acoustic and linguistic features of speech. In this study we test the theory that interrupting speech with silence impairs perception by interfering with neural speech tracking, and further, that perception and phase-tracking of the original acoustics can be restored by inserting noise in the interruptions. We find that disruptions of the acoustic envelope reduce the tracking of both acoustic and phonemic features. By inserting amplitude-modulated noise such that the original broadband envelope is restored, we improved perception of the degraded speech and restored the magnitude of the speech-tracking response; however, topographic analysis suggests that the neural response to noise-interrupted speech may recruit systematically different brain areas. The acoustic envelope appears to be an important physical component of speech that facilitates the dynamic neural mechanisms for perception of spoken language, particularly in adverse listening conditions.
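The stimulus manipulation described above, replacing speech segments with silence and then with envelope-modulated noise, can be sketched as follows (an illustrative sketch, not the authors' stimulus code; the signal names are hypothetical):

```python
import numpy as np

def interrupt(speech, gap_mask):
    """Replace the masked samples with silence."""
    return np.where(gap_mask, 0.0, speech)

def fill_with_noise(speech, gap_mask, broadband_env, rng):
    """Fill the silent gaps with noise scaled by the original broadband
    envelope, so the envelope survives even though the speech sounds
    themselves are removed."""
    noise = rng.standard_normal(len(speech)) * broadband_env
    return np.where(gap_mask, noise, speech)

# Toy "speech": a modulated carrier, with the middle fifth interrupted.
fs = 1000
t = np.arange(fs) / fs
env = 1 + 0.5 * np.sin(2 * np.pi * 2 * t)
speech = env * np.sin(2 * np.pi * 150 * t)
mask = (t > 0.4) & (t < 0.6)
filled = fill_with_noise(speech, mask, env, np.random.default_rng(0))
```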


Subject(s)
Brain/physiology , Speech Perception/physiology , Electroencephalography , Female , Humans , Male , Mental Recall/physiology , Periodicity , Phonetics , Psycholinguistics , Young Adult
6.
PLoS One ; 12(10): e0186104, 2017.
Article in English | MEDLINE | ID: mdl-28982139

ABSTRACT

The process of resolving mixtures of several sounds into their separate individual streams is known as auditory scene analysis, and it remains a challenging task for computational systems. It is well known that animals use binaural differences in arrival time and intensity at the two ears to find the arrival angle of sounds in the azimuthal plane, and this localization function has sometimes been considered sufficient to enable the un-mixing of complex scenes. However, the ability of such systems to resolve distinct sound sources in both space and frequency remains limited. The neural computations for detecting interaural time difference (ITD) have been well studied and have served as the inspiration for computational auditory scene analysis systems; however, a crucial limitation of ITD models is that they produce ambiguous or "phantom" images in the scene. This has been thought to limit their usefulness at frequencies above about 1 kHz in humans. We present a simple Bayesian model, and an implementation on a robot, that uses ITD information recursively. The model makes use of head rotations to show that ITD information is sufficient to unambiguously resolve sound sources in both space and frequency. Contrary to commonly held assumptions about sound localization, we show that the ITD cue used with high-frequency sound can provide accurate and unambiguous localization and resolution of competing sounds. Our findings suggest that an "active hearing" approach could be useful in robotic systems that operate in natural, noisy settings. We also suggest that neurophysiological models of sound localization in animals could benefit from revision to include the influence of top-down memory and sensorimotor integration across head rotations.
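The recursive use of ITD across head rotations can be illustrated with a discrete Bayesian filter over candidate azimuths. The head radius, azimuth grid, and noise level below are assumed values, and the ITD model is a textbook simplification, not the paper's implementation:

```python
import numpy as np

HEAD_R = 0.0875                            # assumed head radius (m)
C = 343.0                                  # speed of sound (m/s)
AZ = np.radians(np.arange(-180, 180, 5))   # candidate source azimuths

def itd(az):
    """Simple spherical-head ITD; sin() makes front/back mirror angles
    indistinguishable -- the 'phantom image' ambiguity."""
    return (HEAD_R / C) * np.sin(az)

def bayes_update(prior, observed_itd, head_angle, sigma=2e-5):
    """One recursive step: Gaussian likelihood of the observed ITD under
    each candidate azimuth, evaluated in the rotated head frame."""
    predicted = itd(AZ - head_angle)
    likelihood = np.exp(-0.5 * ((observed_itd - predicted) / sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Source at +60 deg: one observation leaves a front/back pair of peaks
# (60 and 120 deg); a 30 deg head turn plus a second update removes the
# phantom, leaving a single peak at the true azimuth.
true_az = np.radians(60)
p = np.full(len(AZ), 1.0 / len(AZ))
p = bayes_update(p, itd(true_az), head_angle=0.0)
p = bayes_update(p, itd(true_az - np.radians(30)), head_angle=np.radians(30))
best = np.degrees(AZ[np.argmax(p)])
```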


Subject(s)
Attention , Bayes Theorem , Head/physiology , Hearing , Models, Theoretical , Algorithms , Excitatory Postsynaptic Potentials , Humans
7.
Brain Lang ; 135: 52-6, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24911919

ABSTRACT

It is usually easy to understand speech, but when several people are talking at once it becomes difficult. The brain must select one speech stream and ignore distracting streams. We tested a theory about the neural and computational mechanisms of attentional selection. The theory is that oscillating signals in brain networks phase-lock with amplitude fluctuations in speech. By doing this, brain-wide networks acquire information from the selected speech, but ignore other speech signals on the basis of their non-preferred dynamics. Two predictions were supported: first, attentional selection boosted the power of neuroelectric signals that were phase-locked with attended speech, but not ignored speech. Second, this phase selectivity was associated with better recall of the attended speech.


Subject(s)
Attention/physiology , Brain/physiology , Speech Perception/physiology , Speech , Theta Rhythm/physiology , Electroencephalography , Female , Humans , Male , Mental Recall/physiology , Models, Neurological , Young Adult
8.
Hear Res ; 304: 77-90, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23831040

ABSTRACT

Perception of objects in the scene around us is effortless and intuitive, yet entails profound computational challenges. Progress has been made in understanding some mechanisms by which the brain encodes the boundaries and surfaces of visual objects. However, in the auditory domain, these mechanisms are poorly understood. We investigated differences between neural responses to spectrotemporal boundaries in the auditory scene. We used iterated rippled noise to create perceptual boundaries with and without energy transients. In contrast to boundaries marked by energy transients, second-order boundaries were characterized by an absence of early components in the event-related potential. First-order energy boundaries triggered a transient evoked gamma-band response and a well-defined P90 component of the event-related potential, whereas second-order boundaries evoked only the later N1 component. Furthermore, the N1 component was delayed when evoked by second-order boundaries and theta-band electroencephalography activity at this latency exhibited significant phase lag for second-order compared to first-order boundaries. We speculate that boundaries defined by sharp energy transients can be registered by early feed-forward mechanisms. By contrast, boundaries defined only by discontinuities at discrete frequency bands require integration across the tonotopic representation of the frequency spectrum and require time-consuming interaction between auditory areas.
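Iterated rippled noise, the stimulus used above, is typically generated by a delay-and-add loop; a sketch under that standard construction (parameters are illustrative, not the study's):

```python
import numpy as np

def iterated_rippled_noise(n, delay, gain, n_iter, rng):
    """Delay-and-add noise: each pass adds a copy of the signal delayed
    by `delay` samples and scaled by `gain`, producing a pitch at
    fs/delay while keeping a noise-like waveform."""
    x = rng.standard_normal(n)
    for _ in range(n_iter):
        x = x + gain * np.concatenate([np.zeros(delay), x[:-delay]])
    return x

irn = iterated_rippled_noise(20000, delay=100, gain=1.0, n_iter=8,
                             rng=np.random.default_rng(1))
# The temporal regularity shows up as a strong autocorrelation peak at
# the delay, and near-zero correlation at unrelated lags.
r_delay = np.corrcoef(irn[:-100], irn[100:])[0, 1]
r_other = np.corrcoef(irn[:-137], irn[137:])[0, 1]
```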


Subject(s)
Auditory Perception/physiology , Brain/physiology , Acoustic Stimulation , Auditory Cortex/physiology , Brain Mapping , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Male , Young Adult
9.
PLoS One ; 8(1): e53953, 2013.
Article in English | MEDLINE | ID: mdl-23326548

ABSTRACT

Auditory distraction is a failure to maintain focus on a stream of sounds. We investigated the neural correlates of distraction in a selective-listening pitch-discrimination task with high (competing speech) or low (white noise) distraction. High distraction impaired performance and reduced the N1 peak of the auditory event-related potential evoked by probe tones. In a series of simulations, we explored two theories to account for this effect: a disruption of sensory gain and a disruption of inter-trial phase consistency. When compared with these simulations, our data were consistent with both effects of distraction. Distraction reduced the gain of the auditory evoked potential and disrupted the inter-trial phase consistency with which the brain responds to stimulus events. Tones at a non-target, unattended frequency were more susceptible to the effects of distraction than tones within an attended frequency band.
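Inter-trial phase consistency, one of the two simulated mechanisms, is conventionally measured as the length of the mean resultant vector of per-trial phases. A minimal sketch (the trial counts and phase distributions below are illustrative):

```python
import numpy as np

def itpc(phases):
    """Inter-trial phase consistency: length of the mean resultant vector
    of per-trial phase angles (1 = perfectly locked, near 0 = random)."""
    phases = np.asarray(phases)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(2)
locked = np.full(50, 0.3)                # identical phase on every trial
jittered = 0.3 + rng.normal(0, 1.5, 50)  # distraction-style phase jitter
random_ph = rng.uniform(-np.pi, np.pi, 50)
```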


Subject(s)
Attention/physiology , Auditory Perception/physiology , Electroencephalography , Adult , Evoked Potentials, Auditory/physiology , Female , Humans , Noise , Pitch Discrimination/physiology , Reaction Time/physiology , Speech Perception/physiology , Young Adult
10.
Neuroreport ; 23(4): 240-5, 2012 Mar 07.
Article in English | MEDLINE | ID: mdl-22314684

ABSTRACT

Selective attention involves the exclusion of irrelevant information in order to optimize perception of a single source of sensory input; failure to do so often results in the familiar phenomenon of distraction. The term 'distraction' broadly refers to a perceptual phenomenon. In the present study we attempted to find the electrophysiological correlates of distraction using an auditory discrimination task. EEG and event-related potential responses to identical stimuli were compared under two levels of distraction (continuous broad-band noise or continuous speech). Relative to broad-band noise, the presence of a continuous speech signal in the unattended ear impaired task performance and also attenuated the N1 peak evoked by nontarget stimuli in the attended ear. As the magnitude of a peak in the event-related potential waveform can be modulated by differences in intertrial power but also by differences in the stability of EEG phase across trials, we sought to characterize the effect of distraction on intertrial power and intertrial phase locking around the latency of the N1. The presence of continuous speech resulted in a prominent reduction of theta EEG band intertrial phase locking around the latency of the N1. This suggests that distraction may act not only to disrupt a sensory gain mechanism but also to disrupt the temporal fidelity with which the brain responds to stimulus events.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Cerebral Cortex/physiology , Electroencephalography/methods , Evoked Potentials/physiology , Perceptual Masking/physiology , Theta Rhythm/physiology , Acoustic Stimulation/methods , Auditory Threshold/physiology , Discrimination Learning/physiology , Female , Functional Laterality/physiology , Humans , Male , Reaction Time/physiology , Speech Perception/physiology , Time Factors , Young Adult