Results 1 - 20 of 47
1.
JACC Basic Transl Sci ; 9(5): 674-686, 2024 May.
Article in English | MEDLINE | ID: mdl-38984052

ABSTRACT

The adult mammalian heart harbors minute levels of cycling cardiomyocytes (CMs). Large numbers of images are needed to accurately quantify cycling events using microscopy-based methods. CardioCount is a new deep learning-based pipeline to rigorously score nuclei in microscopic images. When applied to a repository of 368,434 human microscopic images, we found evidence of coupled growth between CMs and cardiac endothelial cells in the adult human heart. Additionally, we found that vascular rarefaction and CM hypertrophy are interrelated in end-stage heart failure. CardioCount is available for use via GitHub and via Google Colab for users with minimal machine learning experience.

2.
J Acoust Soc Am ; 155(3): 2151-2168, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38501923

ABSTRACT

Cochlear implant (CI) recipients often struggle to understand speech in reverberant environments. Speech enhancement algorithms could restore speech perception for CI listeners by removing reverberant artifacts from the CI stimulation pattern. Listening studies, either with cochlear-implant recipients or normal-hearing (NH) listeners using a CI acoustic model, provide a benchmark for speech intelligibility improvements conferred by the enhancement algorithm but are costly and time consuming. To reduce the associated costs during algorithm development, speech intelligibility could be estimated offline using objective intelligibility measures. Previous evaluations of objective measures that considered CIs primarily assessed the combined impact of noise and reverberation and employed highly accurate enhancement algorithms. To facilitate the development of enhancement algorithms, we evaluate twelve objective measures in reverberant-only conditions characterized by a gradual reduction of reverberant artifacts, simulating the performance of an enhancement algorithm during development. Measures are validated against the performance of NH listeners using a CI acoustic model. To enhance compatibility with reverberant CI-processed signals, measure performance was assessed after modifying the reference signal and spectral filterbank. Measures leveraging the speech-to-reverberant ratio, cepstral distance and, after modifying the reference or filterbank, envelope correlation are strong predictors of intelligibility for reverberant CI-processed speech.
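For orientation, a minimal Python sketch of one such objective measure, a mel-cepstral distance between a clean reference and a processed signal, is shown below; the framing, filterbank, and exclusion of the zeroth coefficient are illustrative assumptions rather than the measure configurations evaluated in this paper.

    # Illustrative mel-cepstral distance between a reference and a processed signal.
    # A generic sketch, not this paper's exact measure configuration.
    import numpy as np
    import librosa

    def mel_cepstral_distance(reference, processed, sr=16000, n_mfcc=13):
        """Mean Euclidean distance between per-frame MFCC vectors (lower = closer)."""
        # Frame-level MFCCs; the 0th coefficient (overall energy) is commonly excluded.
        ref_mfcc = librosa.feature.mfcc(y=reference, sr=sr, n_mfcc=n_mfcc)[1:]
        proc_mfcc = librosa.feature.mfcc(y=processed, sr=sr, n_mfcc=n_mfcc)[1:]
        n = min(ref_mfcc.shape[1], proc_mfcc.shape[1])  # trim to a common frame count
        return float(np.mean(np.linalg.norm(ref_mfcc[:, :n] - proc_mfcc[:, :n], axis=0)))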


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Intelligibility , Algorithms , Hearing
3.
J Racial Ethn Health Disparities ; 11(2): 1056-1066, 2024 04.
Article in English | MEDLINE | ID: mdl-38315291

ABSTRACT

Given the disproportionate representation of young Black members of the LGBTQIA community among HIV/AIDS cases, it is important to continue to identify both their ability to access the knowledge that can foster healthier sexual outcomes and the dynamics that may foster or undermine their efforts. The goal of this study is to examine whether 236 young Black persons ages 18-30 who are members of the LGBTQIA community know where to go locally to locate healthcare services to combat HIV/AIDS and other sexually transmitted health issues. Quantitative findings show the influence of self-identified sexual identity, age, and place of residence on knowledge about HIV-related services. The implications of these results illustrate the possible effects of place and identity development on knowledge about HIV-related services that can affect life chances and quality of life for certain members of this community.


Subject(s)
Acquired Immunodeficiency Syndrome , HIV Infections , Health Services Accessibility , Sexually Transmitted Diseases , Adolescent , Adult , Humans , Young Adult , Sexual Behavior , Black or African American , Sexual and Gender Minorities , Health Knowledge, Attitudes, Practice
4.
Cochlear Implants Int ; 23(6): 309-316, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35875863

ABSTRACT

Cochlear implant recipients struggle to understand speech in reverberant environments. To restore speech perception, artifacts due to reverberant reflections can be removed from the cochlear implant stimulus by applying a matrix of gain values, a technique referred to as time-frequency masking. In this study, two common time-frequency masking strategies are implemented within cochlear implant processing: a binary mask, which completely retains or deletes stimulus components, and a ratio mask, which continuously attenuates them. Parameters of each masking strategy control the level of attenuation imposed by the gain values. We perceptually tune the parameters of each masking strategy to determine a balance between speech retention and artifact removal. We measure the intelligibility of reverberant signals mitigated by each strategy with speech recognition testing in normal-hearing listeners using vocoding as a simulation of cochlear implant perception. For both masking strategies, we find parameterizations that maximize the intelligibility of the mitigated signals. At the best-performing parameterizations, binary-masked reverberant signals yield larger intelligibility improvements than ratio-masked signals. The results provide a perceptually optimized objective for the removal of reverberant artifacts from cochlear implant stimuli, facilitating improved speech recognition performance for cochlear implant recipients in reverberant environments.
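As a rough illustration of the two masking strategies, the Python sketch below applies a binary mask and a ratio mask given an estimated signal-to-reverberant ratio per time-frequency cell; the threshold and exponent stand in for the tunable attenuation parameters discussed above, and the variable names are illustrative, not the study's.

    import numpy as np

    def binary_mask(srr_db, threshold_db=-5.0):
        """Keep a time-frequency cell only if its signal-to-reverberant ratio (dB) exceeds a threshold."""
        return (srr_db > threshold_db).astype(float)

    def ratio_mask(srr, beta=1.0):
        """Continuous attenuation: gain rises smoothly with the (linear) signal-to-reverberant ratio."""
        return (srr / (srr + 1.0)) ** beta

    # Applying a mask to a CI-style envelope matrix (channels x frames):
    # mitigated_envelope = mask * reverberant_envelope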


Subject(s)
Cochlear Implantation , Cochlear Implants , Speech Perception , Acoustic Stimulation/methods , Algorithms , Artifacts , Humans , Perceptual Masking , Speech Intelligibility
5.
Proc Meet Acoust ; 50(1)2022 Dec.
Article in English | MEDLINE | ID: mdl-38031629

ABSTRACT

Cochlear implant (CI) users experience considerable difficulty in understanding speech in reverberant listening environments. This issue is commonly addressed with time-frequency masking, where a time-frequency decomposed reverberant signal is multiplied by a matrix of gain values to suppress reverberation. However, mask estimation is challenging in reverberant environments due to the large spectro-temporal variations in the speech signal. To overcome this variability, we previously developed a phoneme-based algorithm that selects a different mask estimation model based on the underlying phoneme. In the ideal case where knowledge of the phoneme was assumed, the phoneme-based approach provided larger benefits than a phoneme-independent approach when tested in normal-hearing listeners using an acoustic model of CI processing. The current work investigates the phoneme-based mask estimation algorithm in the real-time feasible case where the prediction from a phoneme classifier is used to select the phoneme-specific mask. To further ensure real-time feasibility, both the phoneme classifier and mask estimation algorithm use causal features extracted from within the CI processing framework. We conducted experiments in normal-hearing listeners using an acoustic model of CI processing, and the results showed that the phoneme-specific algorithm benefitted the majority of subjects.
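The sketch below illustrates the routing step described above: a causal classifier predicts the phonetic class of the current frame, and that prediction selects which mask-estimation model to apply. The class labels, placeholder gain models, and interfaces are assumptions for illustration only, not the authors' implementation.

    import numpy as np

    class ConstantGainModel:
        """Placeholder mask estimator returning a fixed gain per channel (stand-in for a trained model)."""
        def __init__(self, gain):
            self.gain = gain
        def predict(self, features):
            return np.full(features.shape[-1], self.gain)

    # One estimator per (hypothetical) phonetic class; a real system would use trained models.
    mask_models = {
        "vowel": ConstantGainModel(0.9),
        "fricative": ConstantGainModel(0.6),
        "other": ConstantGainModel(0.8),
    }

    def estimate_mask(frame_features, phoneme_classifier):
        """Route the current frame to the mask estimator for the predicted phonetic class."""
        predicted_class = phoneme_classifier.predict(frame_features)  # causal prediction for this frame
        model = mask_models.get(predicted_class, mask_models["other"])
        return model.predict(frame_features)  # per-channel gain values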

6.
Conf Proc IEEE Int Conf Syst Man Cybern ; 2022: 1642-1647, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36776946

ABSTRACT

Brain-computer interfaces (BCIs), such as the P300 speller, can provide a means of communication for individuals with severe neuromuscular limitations. BCIs interpret electroencephalography (EEG) signals in order to translate embedded information about a user's intent into executable commands to control external devices. However, EEG signals are inherently noisy and nonstationary, posing a challenge to extended BCI use. Conventionally, a BCI classifier is trained via supervised learning in an offline calibration session; once trained, the classifier is deployed for online use and is not updated. As the statistics of a user's EEG data change over time, the performance of a static classifier may decline with extended use. It is therefore desirable to automatically adapt the classifier to current data statistics without requiring offline recalibration. In an existing semi-supervised learning approach, the classifier is trained on labeled EEG data and is then updated using incoming unlabeled EEG data and classifier-predicted labels. To reduce the risk of learning from incorrect predictions, a threshold is imposed to exclude unlabeled data with low-confidence label predictions from the expanded training set when retraining the adaptive classifier. In this work, we propose the use of a language model for spelling error correction and disambiguation to provide information about label correctness during semi-supervised learning. Results from simulations with multi-session P300 speller user EEG data demonstrate that our language-guided semi-supervised approach significantly improves spelling accuracy relative to conventional BCI calibration and threshold-based semi-supervised learning.
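The sketch below illustrates the threshold-based semi-supervised update described above: unlabeled trials whose predicted-label confidence exceeds a threshold are added, with their predicted labels, to the training set before retraining. It is a generic scikit-learn sketch under assumed data shapes, not the authors' P300 classifier.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def semi_supervised_update(clf, X_labeled, y_labeled, X_unlabeled, confidence=0.9):
        """Retrain on the labeled data plus high-confidence pseudo-labeled data."""
        proba = clf.predict_proba(X_unlabeled)                    # class probabilities per trial
        keep = proba.max(axis=1) >= confidence                    # confidence threshold on predictions
        pseudo_labels = clf.classes_[proba[keep].argmax(axis=1)]  # predicted labels for retained trials
        X_aug = np.vstack([X_labeled, X_unlabeled[keep]])
        y_aug = np.concatenate([y_labeled, pseudo_labels])
        return LinearDiscriminantAnalysis().fit(X_aug, y_aug)     # retrained adaptive classifier

In the language-guided variant studied in this paper, a language model additionally flags likely spelling errors so that trials tied to questionable characters can be excluded or relabeled; that step is omitted from this sketch.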

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 5796-5799, 2021 11.
Article in English | MEDLINE | ID: mdl-34892437

ABSTRACT

Stimulus-driven brain-computer interfaces (BCIs), such as the P300 speller, rely on using sensory stimuli to elicit specific neural signal components called event-related potentials (ERPs) to control external devices. However, psychophysical factors, such as refractory effects and adjacency distractions, may negatively impact ERP elicitation and BCI performance. Although conventional BCI stimulus presentation paradigms usually design stimulus presentation schedules in a pseudo-random manner, recent studies have shown that controlling the stimulus selection process can enhance ERP elicitation. In prior work, we developed an algorithm to adaptively select BCI stimuli using an objective criterion that maximizes the amount of information about the user's intent that can be elicited with the presented stimuli given current data conditions. Here, we enhance this adaptive BCI stimulus selection algorithm to mitigate adjacency distractions and refractory effects by modeling temporal dependencies of ERP elicitation in the objective function and imposing spatial restrictions in the stimulus search space. Results from simulations using synthetic data and human data from a BCI study show that the enhanced adaptive stimulus selection algorithm can improve spelling speeds relative to conventional BCI stimulus presentation paradigms.Clinical relevance-Increased communication rates with our enhanced adaptive stimulus selection algorithm can potentially facilitate the translation of BCIs as viable communication alternatives for individuals with severe neuromuscular limitations.


Subject(s)
Brain-Computer Interfaces , Algorithms , Electroencephalography , Event-Related Potentials, P300 , Evoked Potentials , Humans
8.
Article in English | MEDLINE | ID: mdl-34512195

ABSTRACT

Speech intelligibility in cochlear implant (CI) users degrades considerably in listening environments with reverberation and noise. Previous research in automatic speech recognition (ASR) has shown that phoneme-based speech enhancement algorithms improve ASR system performance in reverberant environments as compared to a global model. However, phoneme-specific speech processing has not yet been implemented in CIs. In this paper, we propose a causal deep learning framework for classifying phonemes using features extracted at the time-frequency resolution of a CI processor. We trained and tested long short-term memory networks to classify phonemes and manner of articulation in anechoic and reverberant conditions. The results showed that CI-inspired features provide slightly higher levels of performance than traditional ASR features. To the best of our knowledge, this study is the first to provide a classification framework with the potential to categorize phonetic units in real-time in a CI.
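As a schematic of the kind of network described above, the Keras sketch below defines a causal (unidirectional) LSTM that outputs per-frame class posteriors; the feature dimension, number of classes, and layer sizes are placeholders rather than this paper's configuration.

    import tensorflow as tf

    n_features, n_classes = 22, 8  # e.g. per-frame CI channel features; placeholder sizes
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(None, n_features)),                # variable-length frame sequences
        tf.keras.layers.LSTM(64, return_sequences=True),         # unidirectional, hence causal in time
        tf.keras.layers.Dense(n_classes, activation="softmax"),  # per-frame phoneme/manner posteriors
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(train_sequences, train_frame_labels, ...)  # one integer label per frame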

9.
J Am Heart Assoc ; 10(6): e018588, 2021 03 16.
Article in English | MEDLINE | ID: mdl-33660516

ABSTRACT

Background Although technological advances to pump design have improved survival, left ventricular assist device (LVAD) recipients experience variable improvements in quality of life. Methods for optimizing LVAD support to improve quality of life are needed. We investigated whether acoustic signatures obtained from digital stethoscopes can predict patient-centered outcomes in LVAD recipients. Methods and Results We followed precordial sounds over 6 months in 24 LVAD recipients (8 HeartWare HVAD™, 16 HeartMate 3 [HM3]). Subjects recorded their precordial sounds with a digital stethoscope and completed a Kansas City Cardiomyopathy Questionnaire weekly. We developed a novel algorithm to filter LVAD sounds from recordings. Unsupervised clustering of LVAD-mitigated sounds revealed distinct groups of acoustic features. Of 16 HM3 recipients, 6 (38%) had a unique acoustic feature that we have termed the pulse synchronized sound based on its temporal association with the artificial pulse of the HM3. HM3 recipients with the pulse synchronized sound had significantly better Kansas City Cardiomyopathy Questionnaire scores at baseline (median, 89.1 [interquartile range, 86.2-90.4] versus 66.1 [interquartile range, 31.1-73.7]; P=0.03) and over the 6-month study period (marginal mean, 77.6 [95% CI, 66.3-88.9] versus 59.9 [95% CI, 47.9-70.0]; P<0.001). Mechanistically, the pulse synchronized sound shares acoustic features with patient-derived intrinsic sounds. Finally, we developed a machine learning algorithm to automatically detect the pulse synchronized sound within precordial sounds (area under the curve, 0.95, leave-one-subject-out cross-validation). Conclusions We have identified a novel acoustic biomarker associated with better quality of life in HM3 LVAD recipients, which may provide a method for assaying optimized LVAD support.


Subject(s)
Diagnostic Techniques, Cardiovascular , Heart Failure/diagnosis , Heart-Assist Devices , Quality of Life , Acoustics , Aged , Female , Follow-Up Studies , Heart Failure/psychology , Heart Failure/therapy , Humans , Male , Middle Aged
10.
IEEE Trans Biomed Eng ; 68(10): 3009-3018, 2021 10.
Article in English | MEDLINE | ID: mdl-33606625

ABSTRACT

OBJECTIVE: LVADs are surgically implanted mechanical pumps that improve survival rates of individuals with advanced heart failure. LVAD therapy is associated with high morbidity, which can be partially attributed to challenges with detecting LVAD complications before adverse events occur. Current methods used to monitor for complications with LVAD support require frequent clinical assessments at specialized LVAD centers. Analysis of recorded precordial sounds may enable real-time, remote monitoring of device and cardiac function for early detection of LVAD complications. The dominance of LVAD sounds in the precordium limits the utility of routine cardiac auscultation of LVAD recipients. In this work, we develop a signal processing pipeline to mitigate sounds generated by the LVAD. METHODS: We collected in vivo precordial sounds from 17 LVAD recipients, and contemporaneous echocardiograms from 12 of these individuals, to validate heart valve closure timings. RESULTS: We characterized various acoustic signatures of heart sounds extracted from in vivo recordings, and report preliminary findings linking fundamental heart sound characteristics and level of LVAD support. CONCLUSION: Mitigation of LVAD sounds from precordial sound recordings of LVAD recipients enables analysis of intrinsic heart sounds. SIGNIFICANCE: These findings provide proof-of-concept evidence of the clinical utility of heart sound analysis for bedside and remote monitoring of LVAD recipients.
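The abstract does not detail the mitigation pipeline, but one plausible building block, shown purely as an assumed illustration and not as the authors' method, is notch filtering of the pump's operating frequency and its harmonics so that intrinsic heart sounds remain.

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    def suppress_pump_harmonics(audio, fs, pump_rpm, n_harmonics=5, q=30.0):
        """Attenuate narrowband LVAD pump tones at the pump frequency and its harmonics."""
        pump_hz = pump_rpm / 60.0
        filtered = np.asarray(audio, dtype=float).copy()
        for k in range(1, n_harmonics + 1):
            freq = k * pump_hz
            if freq < fs / 2:                    # stay below the Nyquist frequency
                b, a = iirnotch(freq, q, fs=fs)  # narrow notch at the k-th harmonic
                filtered = filtfilt(b, a, filtered)
        return filtered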


Subject(s)
Heart Failure , Heart Sounds , Heart-Assist Devices , Acoustics , Heart Failure/diagnosis , Humans , Sound
11.
Article in English | MEDLINE | ID: mdl-33078056

ABSTRACT

Cochlear implant (CI) users experience substantial difficulties in understanding reverberant speech. A previous study proposed a strategy that leverages automatic speech recognition (ASR) to recognize reverberant speech and speech synthesis to translate the recognized text into anechoic speech. However, the strategy was trained and tested on the same reverberant environment, so it is unknown whether the strategy is robust to unseen environments. Thus, the current study investigated the performance of the previously proposed algorithm in multiple unseen environments. First, an ASR system was trained on anechoic and reverberant speech using different room types. Next, a speech synthesizer was trained to generate speech from the text predicted by the ASR system. Experiments were conducted in normal hearing listeners using vocoded speech, and the results showed that the strategy improved speech intelligibility in previously unseen conditions. These results suggest that the ASR-synthesis strategy can potentially benefit CI users in everyday reverberant environments.

12.
J Eng Sci Med Diagn Ther ; 2(2): 0245011-245014, 2019 May.
Article in English | MEDLINE | ID: mdl-35832210

ABSTRACT

Left ventricular assist devices (LVADs) are life-saving, surgically implanted mechanical heart pumps used to treat patients with advanced heart failure (HF). While life-saving, LVAD support is associated with a high incidence of complications, making early recognition and management of LVAD complications a critical need. Blood clot formation within the LVAD, known as LVAD thrombosis, is a catastrophic complication of LVAD therapy that often requires LVAD exchange due to delayed diagnosis and treatment. Using digital stethoscopes, we identified differences in acoustic spectra from two patients presenting with LVAD thrombosis compared with normally functioning LVAD pumps within the same patient. Importantly, these acoustic changes were present even in the absence of typical signs of HF that are often present in LVAD thrombosis patients. Our work suggests that acoustic spectral analysis of digital stethoscope signals could be used for early detection and mitigation of LVAD complications.
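As an assumed illustration of this kind of spectral comparison, not the study's protocol, the sketch below computes Welch power spectral densities and a band-averaged change relative to a patient's baseline recording; the frequency band and variable names are hypothetical.

    import numpy as np
    from scipy.signal import welch

    def psd_db(audio, fs, nperseg=4096):
        """Welch power spectral density in dB for a precordial recording."""
        freqs, pxx = welch(audio, fs=fs, nperseg=nperseg)
        return freqs, 10 * np.log10(pxx + 1e-12)

    # Example comparison (assumed variables): spectral change relative to a patient's own baseline.
    # f, base_db = psd_db(baseline_recording, fs)
    # _, now_db  = psd_db(current_recording, fs)
    # band = (f >= 100) & (f <= 500)                 # illustrative band of interest
    # delta = np.mean(now_db[band] - base_db[band])  # a large shift may flag a change in pump sound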

13.
Proc Meet Acoust ; 33(1)2018 May 07.
Article in English | MEDLINE | ID: mdl-32582407

ABSTRACT

In listening environments with room reverberation and background noise, cochlear implant (CI) users experience substantial difficulties in understanding speech. Because everyday environments have different combinations of reverberation and noise, there is a need to develop algorithms that can mitigate both effects to improve speech intelligibility. Desmond et al. (2014) developed a machine learning approach to mitigate the adverse effects of late reverberant reflections of speech signals by using a classifier to detect and remove affected segments in CI pulse trains. This study aimed to investigate the robustness of the reverberation mitigation algorithm in environments with both reverberation and noise. Sentence recognition tests were conducted in normal hearing listeners using vocoded speech with unmitigated and mitigated reverberant-only or noisy reverberant speech signals, across different reverberation times and noise types. Improvements in speech intelligibility were observed in mitigated reverberant-only conditions. However, mixed results were obtained in the mitigated noisy reverberant conditions as a reduction in speech intelligibility was observed for noise types whose spectra were similar to that of anechoic speech. Based on these results, the focus of future work is to develop a context-dependent approach that activates different mitigation strategies for different acoustic environments.
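A schematic of the detect-and-remove approach attributed above to Desmond et al. (2014): a trained classifier flags pulse-train frames dominated by late reverberation, and those frames are zeroed. The classifier interface and the channels-by-frames layout are assumptions for illustration.

    import numpy as np

    def remove_late_reverberation(pulse_train, frame_features, classifier):
        """Zero out pulse-train frames that a trained classifier flags as late-reverberant."""
        flags = classifier.predict(frame_features)  # 1 = frame dominated by late reflections
        cleaned = pulse_train.copy()                # pulse_train: channels x frames
        cleaned[:, flags == 1] = 0                  # delete affected segments
        return cleaned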

14.
Proc Int Conf Mach Learn Appl ; 2018: 847-852, 2018 Dec.
Article in English | MEDLINE | ID: mdl-32016173

ABSTRACT

Individuals with cochlear implants (CIs) experience more difficulty understanding speech in reverberant environments than normal hearing listeners. As a result, recent research has targeted mitigating the effects of late reverberant signal reflections in CIs by using a machine learning approach to detect and delete affected segments in the CI stimulus pattern. Previous work has trained electrode-specific classification models to mitigate late reverberant signal reflections based on features extracted from only the acoustic activity within the electrode of interest. Since adjacent CI electrodes tend to be activated concurrently during speech, we hypothesized that incorporating additional information from the other electrode channels, termed cross-channel information, as features could improve classification performance. Cross-channel information extracted in real-world conditions will likely contain errors that will impact classification performance. To simulate extracting cross-channel information in realistic conditions, we developed a graphical model based on the Ising model to systematically introduce errors to specific types of cross-channel information. The Ising-like model allows us to add errors while maintaining the important geometric information contained in cross-channel information, which is due to the spectro-temporal structure of speech. Results suggest the potential utility of leveraging cross-channel information to improve the performance of the reverberation mitigation algorithm from the baseline channel-based features, even when the cross-channel information contains errors.
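The toy sketch below shows one way an Ising-style coupling can corrupt a binary electrode-by-time activity pattern while keeping the injected errors spatially structured, in the spirit of the model described above; the coupling and field values, Gibbs-sampling scheme, and 4-neighborhood are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def ising_corrupt(binary_pattern, coupling=0.8, field=1.0, sweeps=5, rng=None):
        """Resample a binary (electrode x time) pattern with Ising-like neighbor coupling.

        Larger `field` keeps cells close to the original pattern; `coupling` encourages
        agreement with the 4-neighborhood, so injected errors stay spatially structured."""
        rng = np.random.default_rng(rng)
        s = 2 * binary_pattern.astype(int) - 1  # map {0,1} -> {-1,+1}
        target = s.copy()                       # original pattern acts as an external field
        rows, cols = s.shape
        for _ in range(sweeps):
            for i in range(rows):
                for j in range(cols):
                    nbr = 0
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < rows and 0 <= j + dj < cols:
                            nbr += s[i + di, j + dj]
                    # Gibbs update: probability this cell is +1 given neighbors and original value
                    p_up = 1.0 / (1.0 + np.exp(-2 * (coupling * nbr + field * target[i, j])))
                    s[i, j] = 1 if rng.random() < p_up else -1
        return (s > 0).astype(binary_pattern.dtype)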

15.
Clin EEG Neurosci ; 49(2): 114-121, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29076357

ABSTRACT

The objective of this study was to investigate the performance of 3 brain-computer interface (BCI) paradigms in an amyotrophic lateral sclerosis (ALS) population (n = 11). Using a repeated-measures design, participants completed 3 BCI conditions: row/column (RCW), checkerboard (CBW), and gray-to-color (CBC). Based on previous studies, it was hypothesized that the CBC and CBW conditions would result in higher accuracy, information transfer rate, waveform amplitude, and user preference than the RCW condition, and that an offline dynamic stopping simulation would increase information transfer rate. Higher mean accuracy was observed in the CBC condition (89.7%), followed by the CBW condition (84.3%), and lowest in the RCW condition (78.7%); however, these differences did not reach statistical significance (P = .062). Eight of the eleven participants preferred the CBC condition and the remaining three preferred the CBW condition. The offline dynamic stopping simulation significantly increased information transfer rate (P = .005) and decreased accuracy (P < .001). The findings of this study suggest that color stimuli provide a modest improvement in performance and that participants prefer color stimuli over monochromatic stimuli. Given these findings, BCI paradigms that use color stimuli should be considered for individuals who have ALS.


Subject(s)
Amyotrophic Lateral Sclerosis/physiopathology , Brain-Computer Interfaces , Event-Related Potentials, P300/physiology , User-Computer Interface , Adult , Electroencephalography/methods , Female , Humans , Male , Middle Aged , Photic Stimulation/methods
16.
J Neural Eng ; 14(5): 056010, 2017 10.
Article in English | MEDLINE | ID: mdl-28585523

ABSTRACT

OBJECTIVE: Various augmentative and alternative communication (AAC) devices have been developed in order to aid communication for individuals with communication disorders. Recently, there has been interest in combining EEG data and eye-gaze data with the goal of developing a hybrid (or 'fused') BCI (hBCI) AAC system. This work explores the effectiveness of a speller that fuses data from an eye-tracker and the P300 speller in order to create a hybrid P300 speller. APPROACH: This hybrid speller collects both eye-tracking and EEG data in parallel, and the user spells characters on the screen in the same way that they would if they were only using the P300 speller. Online and offline experiments were performed. The online experiments measured the performance of the speller for sixteen non-disabled participants, while the offline simulations were used to assess the robustness of the hybrid system. MAIN RESULTS: Online results showed that for fifteen non-disabled participants, using eye-gaze in a Bayesian framework with EEG data from the P300 speller improved accuracy ([Formula: see text], [Formula: see text], [Formula: see text] for estimated, medium and high variance configurations) and reduced the average number of flashes required to spell a character compared to the standard P300 speller that relies solely on EEG data ([Formula: see text], [Formula: see text], [Formula: see text] for estimated, medium and high variance configurations). Offline simulations indicate that the system provides more robust performance than a standalone eye-gaze system. SIGNIFICANCE: The results of this work on non-disabled participants show the potential efficacy of the hybrid P300 and eye-tracker speller. Further validation in the amyotrophic lateral sclerosis population is needed to assess the benefit of this hybrid system.
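A schematic of the kind of Bayesian fusion described above: gaze position defines a prior over candidate characters, EEG classifier scores define a likelihood, and the two combine into a posterior. The Gaussian gaze model and the softmax score-to-likelihood mapping are illustrative assumptions, not the paper's formulation.

    import numpy as np

    def fuse_gaze_and_eeg(char_positions, gaze_xy, eeg_scores, gaze_var=1.0):
        """Posterior over characters from a Gaussian gaze prior and an EEG-score likelihood."""
        # Prior: characters nearer the gaze point are a priori more likely.
        d2 = np.sum((char_positions - gaze_xy) ** 2, axis=1)
        prior = np.exp(-d2 / (2 * gaze_var))
        prior /= prior.sum()
        # Likelihood: softmax over EEG classifier scores (an assumed mapping).
        likelihood = np.exp(eeg_scores - eeg_scores.max())
        likelihood /= likelihood.sum()
        posterior = prior * likelihood
        return posterior / posterior.sum()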


Subject(s)
Electroencephalography/methods , Event-Related Potentials, P300/physiology , Fixation, Ocular/physiology , Photic Stimulation/methods , Eye Movements/physiology , Humans , Statistics as Topic/methods
17.
Sci Data ; 3: 160106, 2016 12 06.
Article in English | MEDLINE | ID: mdl-27922592

ABSTRACT

Earth-observing remote sensing data, including aerial photography and satellite imagery, offer a snapshot of the world from which we can learn about the state of natural resources and the built environment. The components of energy systems that are visible from above can be automatically assessed with these remote sensing data when processed with machine learning methods. Here, we focus on the information gap around distributed solar photovoltaic (PV) arrays, for which there is limited public data on deployments at small geographic scales. We created a dataset of solar PV arrays to initiate and develop the process of automatically identifying solar PV locations using remote sensing imagery. This dataset contains the geospatial coordinates and border vertices for over 19,000 solar panels across 601 high-resolution images from four cities in California. Dataset applications include training object detection and other machine learning algorithms that use remote sensing imagery, developing specific algorithms for predictive detection of distributed PV systems, estimating installed PV capacity, and analysis of the socioeconomic correlates of PV deployment.

18.
Science ; 352(6283): aaf1015, 2016 Apr 15.
Article in English | MEDLINE | ID: mdl-27081075

ABSTRACT

The nuclear pore complex (NPC) controls the transport of macromolecules between the nucleus and cytoplasm, but its molecular architecture has thus far remained poorly defined. We biochemically reconstituted NPC core protomers and elucidated the underlying protein-protein interaction network. Flexible linker sequences, rather than interactions between the structured core scaffold nucleoporins, mediate the assembly of the inner ring complex and its attachment to the NPC coat. X-ray crystallographic analysis of these scaffold nucleoporins revealed the molecular details of their interactions with the flexible linker sequences and enabled construction of full-length atomic structures. By docking these structures into the cryoelectron tomographic reconstruction of the intact human NPC and validating their placement with our nucleoporin interactome, we built a composite structure of the NPC symmetric core that contains ~320,000 residues and accounts for ~56 megadaltons of the NPC's structured mass. Our approach provides a paradigm for the structure determination of similarly complex macromolecular assemblies.


Subject(s)
Nuclear Pore Complex Proteins/metabolism , Nuclear Pore/metabolism , Nuclear Pore/ultrastructure , Protein Interaction Maps , Active Transport, Cell Nucleus , Amino Acid Sequence , Cryoelectron Microscopy , Crystallography, X-Ray , Cytoplasm/metabolism , Electron Microscope Tomography , Fungal Proteins/chemistry , Fungal Proteins/genetics , Fungal Proteins/metabolism , Humans , Molecular Sequence Data , Nuclear Pore/chemistry , Nuclear Pore Complex Proteins/chemistry , Nuclear Pore Complex Proteins/genetics , Protein Structure, Tertiary , Protein Subunits/chemistry , Protein Subunits/genetics , Protein Subunits/metabolism
19.
Science ; 350(6256): 56-64, 2015 Oct 02.
Article in English | MEDLINE | ID: mdl-26316600

ABSTRACT

The nuclear pore complex (NPC) constitutes the sole gateway for bidirectional nucleocytoplasmic transport. We present the reconstitution and interdisciplinary analyses of the ~425-kilodalton inner ring complex (IRC), which forms the central transport channel and diffusion barrier of the NPC, revealing its interaction network and equimolar stoichiometry. The Nsp1•Nup49•Nup57 channel nucleoporin heterotrimer (CNT) attaches to the IRC solely through the adaptor nucleoporin Nic96. The CNT•Nic96 structure reveals that Nic96 functions as an assembly sensor that recognizes the three-dimensional architecture of the CNT, thereby mediating the incorporation of a defined CNT state into the NPC. We propose that the IRC adopts a relatively rigid scaffold that recruits the CNT to primarily form the diffusion barrier of the NPC, rather than enabling channel dilation.


Subject(s)
Chaetomium/ultrastructure , Fungal Proteins/ultrastructure , Nuclear Pore Complex Proteins/ultrastructure , Nuclear Pore/ultrastructure , Nuclear Proteins/ultrastructure , Amino Acid Sequence , Chaetomium/metabolism , Fungal Proteins/chemistry , Molecular Sequence Data , Nuclear Pore/metabolism , Nuclear Pore Complex Proteins/chemistry , Nuclear Proteins/chemistry , Protein Binding , Protein Multimerization , Protein Structure, Secondary , Protein Structure, Tertiary
20.
Epilepsy Behav ; 48: 79-82, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26074344

ABSTRACT

We demonstrate evidence that high discriminability between preictal and interictal intracranial electroencephalogram (iEEG) recordings [1,2] of the Freiburg database (FSPEEG) may be due to the amount of time that occurred between recordings, as opposed to the underlying seizure state, i.e., preictal or interictal. After replicating published classification methods and results, we performed two experiments. In the first experiment, almost perfect discriminability between discontinuous interictal recordings and almost perfect discriminability between discontinuous preictal recordings were observed as the amount of time between recordings increased. Further, a second experiment demonstrated that the classification performance for patients with large time gaps between preictal and interictal recordings was noticeably higher than the classification performance for patients with contiguous preictal and interictal files. These results provide evidence that time likely plays a major role in the discriminability of the iEEG features considered in this study, regardless of the underlying seizure state. Feature nonstationarity is present and may, under certain conditions, lead to overestimation or underestimation of the probability of seizure occurrence.


Subject(s)
Electrocorticography/methods , Seizures/diagnosis , Seizures/physiopathology , Brain/physiopathology , Databases, Factual , Electroencephalography/methods , Humans , Male , Monitoring, Physiologic/methods , Predictive Value of Tests , Sensitivity and Specificity , Time Factors