1.
Sensors (Basel) ; 23(16)2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37631693

ABSTRACT

Every one of us has a unique manner of communicating with the world, and such communication helps us interpret life. Sign language is the primary language of communication for people with hearing and speech disabilities. When a sign language user interacts with a non-signer, it becomes difficult for the signer to express themselves. A sign language recognition system can help bridge this gap by interpreting a signer's signs for non-signers. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. To achieve this, two datasets were considered: (1) the raw dataset and (2) a face-hand region-based segmented dataset produced from the raw dataset. Moreover, an operational-layer-based multi-layer perceptron, "SelfMLP", is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2- and ResNet18-based CNN backbones and three SelfMLPs were used to construct six different models of the CNN-LSTM-SelfMLP architecture for performance comparison on Arabic Sign Language recognition. This study examined the signer-independent mode to reflect real-time application circumstances. As a result, MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face-hand region-based segmentation and the SelfMLP-infused MobileNetV2-LSTM-SelfMLP model surpassed previous findings on Arabic Sign Language recognition by 10.97% in accuracy.
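The evaluation metrics reported in this abstract (accuracy, precision, recall, F1 score, specificity) are all derived from confusion-matrix counts. A minimal sketch of those derivations, using made-up counts rather than the paper's data:

```python
# Sketch: deriving the abstract's metrics from confusion-matrix counts.
# The counts below are illustrative placeholders, not the paper's results.

def metrics(tp, fp, fn, tn):
    """Return (accuracy, precision, recall, specificity, F1) from counts."""
    accuracy    = (tp + tn) / (tp + fp + fn + tn)
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)          # also called sensitivity
    specificity = tn / (tn + fp)
    f1          = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1

# Hypothetical counts for one class in a one-vs-rest evaluation:
acc, prec, rec, spec, f1 = metrics(tp=80, fp=10, fn=10, tn=900)
```

For a multi-class task such as sign recognition, these per-class values would typically be macro-averaged across classes, which is presumably how the single quoted figures were obtained.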


Subject(s)
Deep Learning , Humans , Language , Sign Language , Communication , Recognition, Psychology
2.
Sensors (Basel) ; 22(9)2022 May 05.
Article in English | MEDLINE | ID: mdl-35591196

ABSTRACT

Diabetic neuropathy (DN) is one of the prevalent forms of neuropathy and involves biomechanical alterations in human gait. Diabetic foot ulceration (DFU) is one of the pervasive complications that arise due to DN. For the last 50 years, researchers have tried to observe the biomechanical changes due to DN and DFU by studying muscle electromyography (EMG) and ground reaction forces (GRF); however, the literature is contradictory. In this scenario, we propose using machine learning (ML) techniques to identify DN and DFU patients from EMG and GRF data. We compiled a dataset from the literature involving three patient groups: control (n = 6), DN (n = 6), and previous history of DFU (n = 9), with EMG from three lower-limb muscles (tibialis anterior (TA), vastus lateralis (VL), and gastrocnemius lateralis (GL)) and three GRF components (GRFx, GRFy, and GRFz). Raw EMG and GRF signals were preprocessed, and different feature extraction techniques were applied to extract the best features from the signals. The extracted feature list was ranked using four different feature ranking techniques, and highly correlated features were removed. In this study, we considered different combinations of muscles and GRF components to find the best-performing feature list for the identification of DN and DFU. We trained eight conventional ML models: discriminant analysis classifier (DAC), ensemble classification model (ECM), kernel classification model (KCM), k-nearest neighbor model (KNN), linear classification model (LCM), naive Bayes classifier (NBC), support vector machine classifier (SVM), and binary decision classification tree (BDC), to find the best-performing algorithm, and then optimized that model. The optimized ML algorithm was retrained for the different combinations of muscle and GRF component features, and the performance matrix was evaluated.
Our study found that the KNN algorithm performed well in identifying DN and DFU, and we optimized it before training. We found the best accuracy of 96.18% for EMG analysis using the top 22 features from the chi-square feature ranking technique on the combined GL and VL muscles. In the GRF analysis, the model showed 98.68% accuracy using the top 7 features from feature selection via neighborhood component analysis on the combined GRFx-GRFz signals. In conclusion, our study demonstrates a potential ML-based solution for DN and DFU patient identification using EMG and GRF parameters. With careful signal preprocessing and strategic feature extraction from the biomechanical parameters, an optimized ML model can aid the diagnosis and stratification of DN and DFU patients from EMG and GRF signals.
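The classifier that performed best here, k-nearest neighbors, classifies a sample by majority vote among its k closest training samples in feature space. A minimal plain-Python sketch of that idea, using toy 2-D points rather than the paper's ranked EMG/GRF features:

```python
# Minimal k-NN sketch mirroring the classification step described above.
# Feature vectors here are toy 2-D points, not the paper's EMG/GRF features.
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Sort training points by Euclidean distance to the query.
    ranked = sorted((math.dist(x, query), y) for x, y in zip(train, labels))
    votes = Counter(y for _, y in ranked[:k])
    return votes.most_common(1)[0][0]

# Two hypothetical clusters standing in for two patient groups:
train  = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["control", "control", "control", "DFU", "DFU", "DFU"]
print(knn_predict(train, labels, (0.5, 0.5)))  # -> control
```

In practice the distance would be computed over the top-ranked feature dimensions (e.g., the 22 chi-square-selected features), and k would be tuned during the optimization step the abstract mentions.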


Subject(s)
Diabetes Mellitus , Diabetic Foot , Diabetic Neuropathies , Algorithms , Bayes Theorem , Diabetic Foot/diagnosis , Diabetic Neuropathies/diagnosis , Electromyography/methods , Gait/physiology , Humans , Machine Learning , Support Vector Machine
3.
Diagnostics (Basel) ; 12(4)2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35453968

ABSTRACT

Problem-Since the outbreak of the COVID-19 pandemic, mass testing has become essential to reduce the spread of the virus. Several recent studies suggest that a significant number of COVID-19 patients display no physical symptoms whatsoever. These patients are unlikely to undergo COVID-19 testing, which increases their chances of unintentionally spreading the virus. Currently, the primary diagnostic tool for COVID-19 is the reverse-transcription polymerase chain reaction (RT-PCR) test on respiratory specimens from the suspected patient, an invasive and resource-dependent technique. Recent research shows that asymptomatic COVID-19 patients cough and breathe differently from healthy people. Aim-This paper aims to use a novel machine learning approach to detect COVID-19 (symptomatic and asymptomatic) patients from the convenience of their homes, so that they neither overburden the healthcare system nor spread the virus unknowingly, by continuously monitoring themselves. Method-A Cambridge University research group shared a dataset of cough and breath sound samples from 582 healthy and 141 COVID-19 patients. Among the COVID-19 patients, 87 were asymptomatic while 54 were symptomatic (had a dry or wet cough). In addition to this dataset, the proposed work deployed a real-time deep learning-based backend server with a web application to crowdsource cough and breath recordings and also screen for COVID-19 infection from the comfort of the user's home. The collected dataset includes data from 245 healthy individuals and 78 asymptomatic and 18 symptomatic COVID-19 patients. Users can use the application from any web browser without installation, enter their symptoms, record audio clips of their cough and breath sounds, and upload the data anonymously. Two screening pipelines were developed based on the symptoms reported by the users: asymptomatic and symptomatic.
A novel stacking CNN model was developed using three base learners selected from eight state-of-the-art deep learning CNN algorithms. The base CNNs take as input spectrograms generated from the breath and cough sounds of symptomatic and asymptomatic patients in the combined (Cambridge and collected) dataset, and a logistic regression classifier serves as the meta-learner. Results-The stacking model outperformed the eight individual CNN networks, giving the best binary classification performance with cough sound spectrogram images. The accuracy, sensitivity, and specificity for symptomatic and asymptomatic patients were 96.5%, 96.42%, and 95.47%, and 98.85%, 97.01%, and 99.6%, respectively. For breath sound spectrogram images, the corresponding binary classification metrics for symptomatic and asymptomatic patients were 91.03%, 88.9%, and 91.5%, and 80.01%, 72.04%, and 82.67%, respectively. Conclusion-The web application QUCoughScope records coughing and breathing sounds, converts them to spectrograms, and applies the best-performing machine learning model to distinguish COVID-19 patients from healthy subjects. The result is then reported back to the user in the application interface. This novel system can therefore be used by patients at home as a pre-screening method to aid COVID-19 diagnosis by prioritizing patients for RT-PCR testing, thereby reducing the risk of spreading the disease.
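The stacking idea described here has a simple core: each base CNN emits a probability for one spectrogram, and the logistic-regression meta-learner combines those probabilities into a final score. A minimal sketch of that combination step, where the weights and bias are made-up placeholders (in the paper they are learned from the combined dataset):

```python
# Sketch of the stacking meta-learner step: a logistic regression over the
# outputs of three base CNNs. Weights, bias, and base probabilities below
# are hypothetical placeholders, not the paper's fitted values.
import math

def meta_predict(base_probs, weights, bias):
    """Logistic-regression meta-learner over base-learner probabilities."""
    z = bias + sum(w * p for w, p in zip(weights, base_probs))
    return 1.0 / (1.0 + math.exp(-z))  # stacked COVID-19 probability

# Hypothetical base-CNN outputs for one cough-sound spectrogram:
p = meta_predict(base_probs=[0.9, 0.8, 0.7], weights=[2.0, 1.5, 1.0], bias=-2.0)
label = "COVID-19" if p >= 0.5 else "healthy"
```

The design choice worth noting is that the meta-learner sees only the base learners' predictions, not the raw spectrograms, which is what lets a simple linear model improve on eight much larger CNNs.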

4.
Comput Biol Med ; 139: 105002, 2021 12.
Article in English | MEDLINE | ID: mdl-34749094

ABSTRACT

The immense spread of coronavirus disease 2019 (COVID-19) has left healthcare systems unable to diagnose and test patients at the required rate. Given the effects of COVID-19 on pulmonary tissues, chest radiographic imaging has become a necessity for screening and monitoring the disease. Numerous studies have proposed deep learning approaches for the automatic diagnosis of COVID-19. Although these methods achieved outstanding detection performance, they were evaluated on limited chest X-ray (CXR) repositories, usually with only a few hundred COVID-19 CXR images. Such data scarcity prevents reliable evaluation of deep learning models and carries the risk of overfitting. In addition, most studies showed no or limited capability in infection localization and severity grading of COVID-19 pneumonia. In this study, we address this urgent need by proposing a systematic and unified approach for lung segmentation and COVID-19 localization with infection quantification from CXR images. To accomplish this, we constructed the largest benchmark dataset to date, with 33,920 CXR images including 11,956 COVID-19 samples, where ground-truth lung segmentation masks were annotated on the CXRs through an elegant human-machine collaborative approach. An extensive set of experiments was performed using state-of-the-art segmentation networks: U-Net, U-Net++, and Feature Pyramid Network (FPN). After an iterative process, the developed network reached superior performance for lung region segmentation, with an Intersection over Union (IoU) of 96.11% and a Dice Similarity Coefficient (DSC) of 97.99%. Furthermore, COVID-19 infections of various shapes and types were reliably localized with 83.05% IoU and 88.21% DSC. Finally, the proposed approach achieved outstanding COVID-19 detection performance, with both sensitivity and specificity above 99%.
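The two segmentation metrics quoted here, IoU and Dice (DSC), are both overlap ratios between a predicted mask and a ground-truth mask. A minimal sketch over flat binary masks (toy values, not the paper's data):

```python
# Sketch of the IoU and Dice (DSC) metrics used to score the segmentation
# networks above, computed over flattened binary masks. Toy masks only.

def iou_dice(pred, truth):
    """Return (IoU, Dice) for two equal-length binary masks (0/1 values)."""
    inter = sum(p & t for p, t in zip(pred, truth))   # overlapping pixels
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    iou  = inter / union                 # intersection over union
    dice = 2 * inter / (p_sum + t_sum)   # Dice similarity coefficient
    return iou, dice

iou, dice = iou_dice([1, 1, 1, 0, 0], [1, 1, 0, 0, 0])
# inter = 2, union = 3 -> IoU = 2/3; Dice = 2*2/5 = 0.8
```

Dice is always at least as large as IoU for the same masks (Dice = 2·IoU/(1+IoU)), which is why the paper's DSC figures sit slightly above its IoU figures.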


Subject(s)
COVID-19 , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Thorax , X-Rays