1.
Artif Intell Med ; 154: 102921, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38991399

ABSTRACT

High-resolution cervical auscultation (HRCA) is an emerging noninvasive and accessible option for assessing swallowing that relies on accelerometry and sound sensors. HRCA has shown tremendous promise in identifying and predicting swallowing physiology and biomechanics, with accuracies equivalent to those of trained human judges. These insights have historically been available only through instrumental swallowing evaluation methods such as videofluoroscopy and endoscopy. HRCA uses supervised learning techniques to interpret swallowing physiology from signals collected during radiographic assessment of swallowing using barium contrast. Conversely, bedside swallowing screening is typically conducted in non-radiographic settings using only water. This poses a challenge to translating and generalizing HRCA algorithms to bedside screening because of the rheological differences between barium and water. To address this gap, we proposed a cross-domain transformation framework that uses cycle generative adversarial networks to convert HRCA signals of water swallows into a domain compatible with barium-trained HRCA algorithms. The proposed framework achieved a cross-domain transformation accuracy above 90%. The authenticity of the generated signals was verified with a binary classifier, confirming the framework's capability to produce indistinguishable signals. The framework was also assessed for retention of swallow physiological and biomechanical properties by applying an existing model from the literature that identifies the opening and closure of the upper esophageal sphincter; that model produced nearly identical results on the generated and original signals. These findings suggest that the proposed transformation framework is a feasible avenue to advance HRCA toward clinical deployment for water-based swallowing screening.
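The key idea behind the cycle-GAN framework above is the cycle-consistency constraint: a water-domain signal mapped to the barium domain and back should recover the original. The abstract does not publish the network architecture, so the sketch below illustrates only the loss with toy affine stand-ins for the two generators; the function names and signal shapes are hypothetical.

```python
import numpy as np

def cycle_consistency_loss(x, g, f):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should reconstruct x."""
    return np.mean(np.abs(f(g(x)) - x))

# Toy "generators": affine maps standing in for trained neural networks.
g = lambda x: 2.0 * x + 1.0     # hypothetical water -> barium mapping
f = lambda y: (y - 1.0) / 2.0   # its inverse, barium -> water

rng = np.random.default_rng(0)
water_signal = rng.standard_normal(1000)  # stand-in HRCA accelerometry window
loss = cycle_consistency_loss(water_signal, g, f)
```

In an actual cycle GAN, `g` and `f` are learned networks, and this reconstruction penalty is added to the adversarial losses so the transformation preserves signal content (here, swallow physiology) while changing only domain style.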

2.
J Stud Alcohol Drugs ; 84(6): 808-813, 2023 11.
Article in English | MEDLINE | ID: mdl-37306378

ABSTRACT

OBJECTIVE: Devices such as mobile phones and smart speakers could be useful to remotely identify voice alterations associated with alcohol intoxication that could be used to deliver just-in-time interventions, but data to support such approaches for the English language are lacking. In this controlled laboratory study, we compare how well English spectrographic voice features identify alcohol intoxication. METHOD: A total of 18 participants (72% male, ages 21-62 years) read a randomly assigned tongue twister before drinking and each hour for up to 7 hours after drinking a weight-based dose of alcohol. Vocal segments were cleaned and split into 1-second windows. We built support vector machine models for detecting alcohol intoxication, defined as breath alcohol concentration > .08%, comparing the baseline voice spectrographic signature to each subsequent timepoint and examined accuracy with 95% confidence intervals (CIs). RESULTS: Alcohol intoxication was predicted with an accuracy of 98% (95% CI [97.1, 98.6]); mean sensitivity = .98; specificity = .97; positive predictive value = .97; and negative predictive value = .98. CONCLUSIONS: In this small, controlled laboratory study, voice spectrographic signatures collected from brief recorded English segments were useful in identifying alcohol intoxication. Larger studies using varied voice samples are needed to validate and expand models.


Subject(s)
Alcoholic Intoxication , Female , Humans , Male , Alcohol Drinking , Alcoholic Intoxication/diagnosis , Breath Tests , Ethanol
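The study above trains a support vector machine on per-window spectrographic features and reports sensitivity, specificity, PPV, and NPV. A minimal sketch of that evaluation pipeline, using synthetic feature vectors in place of the (unpublished) voice features — the feature dimensionality and class means are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(42)
n = 400  # synthetic 1-second voice windows per class (hypothetical)
X_sober = rng.normal(0.0, 1.0, size=(n, 12))
X_drunk = rng.normal(1.5, 1.0, size=(n, 12))
X = np.vstack([X_sober, X_drunk])
y = np.array([0] * n + [1] * n)  # 1 = breath alcohol concentration > .08%

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# Derive the four reported metrics from the confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
```

The real study compared each post-drinking timepoint against the participant's own baseline voice signature; the sketch collapses that to a single binary split to show how the metrics are computed.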
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 434-437, 2020 07.
Article in English | MEDLINE | ID: mdl-33018021

ABSTRACT

EEG signal classification is an important task in building an accurate brain-computer interface (BCI) system. Many machine learning and deep learning approaches have been used to classify EEG signals, and many studies have relied on time- and frequency-domain features. In contrast, very few studies combine the spatial and temporal dimensions of the EEG signal. Brain dynamics are highly complex across different mental tasks, so it is difficult to design efficient algorithms with features based on prior knowledge. Therefore, in this study, we used the 2D AlexNet convolutional neural network (CNN) to learn EEG features across different mental tasks without prior knowledge. First, the spatial and temporal dimensions of the EEG signals were encoded into 2D EEG topographic maps. Second, topographic maps at different time indices were cascaded to populate a 2D image for a given time window. Finally, these topographic maps enabled AlexNet to learn features from the spatial and temporal dimensions of the brain signals. Classification performance was evaluated on the multiclass BCI Competition IV dataset 2a. The proposed system achieved an average classification accuracy of 81.09%, outperforming previous state-of-the-art methods on the same dataset by a margin of 4%. The results show that converting EEG classification from a 1D time-series problem to a 2D image classification problem improves classification accuracy for BCI systems. Moreover, the EEG topographic maps enabled the CNN to learn subtle spatial-temporal features that represent mental tasks better than time- or frequency-domain features alone.


Subject(s)
Brain-Computer Interfaces , Algorithms , Electroencephalography , Machine Learning , Neural Networks, Computer
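The two-step image construction described above (per-timepoint topographic maps, then cascading across a time window) can be sketched with plain NumPy. The electrode grid positions and grid size below are illustrative assumptions, not the paper's actual montage or interpolation scheme:

```python
import numpy as np

# Hypothetical (row, col) scalp-grid positions for a few 10-20 electrodes.
ELECTRODE_POS = {"Fz": (0, 2), "C3": (2, 0), "Cz": (2, 2),
                 "C4": (2, 4), "Pz": (4, 2)}

def topographic_map(sample, grid=(5, 5)):
    """Place one time sample's channel amplitudes onto a 2D scalp grid."""
    img = np.zeros(grid)
    for ch, (r, c) in ELECTRODE_POS.items():
        img[r, c] = sample[ch]
    return img

def cascade_window(window):
    """Cascade per-timepoint maps side by side into one 2D image."""
    return np.hstack([topographic_map(s) for s in window])

rng = np.random.default_rng(1)
# One 10-timepoint EEG window: channel name -> amplitude at each timepoint.
window = [{ch: rng.standard_normal() for ch in ELECTRODE_POS}
          for _ in range(10)]
image = cascade_window(window)  # 2D image a CNN such as AlexNet can consume
```

A production pipeline would additionally interpolate between electrode sites (so the map is a smooth scalp image rather than sparse pixels) before feeding the cascaded image to the CNN.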