Results 1 - 7 of 7
1.
Cluster Comput ; : 1-16, 2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36712413

ABSTRACT

The primary evaluation tool for coronavirus disease (COVID-19) still has serious flaws, and all facilities and tools available in this field should be used to combat the pandemic. Reverse transcription polymerase chain reaction can establish whether or not a person has the virus, but it cannot establish the severity of the illness. In this paper, we propose a simple, reliable, and automatic system that diagnoses the severity of COVID-19 from CT scans, grading each scan as normal, mild, moderate, or severe. The system is based on a simple segmentation method and three types of features extracted from the CT images: the ratio of infection, statistical texture features (mean, standard deviation, skewness, and kurtosis), and GLCM and GLRLM texture features. The 1801 scans are divided into the four stages based on the CT findings and the description file accompanying the datasets. The proposed model consists of four steps: preprocessing, feature extraction, classification, and performance evaluation. Four machine learning algorithms are used in the classification step: decision trees (DT), K-nearest neighbors (KNN), support vector machines (SVM), and Naïve Bayes. With the SVM, the proposed model achieves 99.12%, 98.24%, 98.73%, and 99.9% accuracy for COVID-19 infection segmentation at the normal, mild, moderate, and severe stages, respectively, and the area under the curve of the model is 0.99. The proposed model achieves better performance than state-of-the-art models. This will help doctors determine the stage of the infection, shortening the time to treatment and allowing the dose to be matched to that stage.
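The statistical texture features named above (infection ratio, mean, standard deviation, skewness, kurtosis) can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the toy 4x4 slice and segmentation mask are invented for the example:

```python
import numpy as np

def statistical_texture_features(ct_slice, infection_mask):
    """Infection ratio plus the four intensity statistics named in the
    abstract: mean, standard deviation, skewness, kurtosis."""
    pixels = ct_slice.astype(float).ravel()
    mu = pixels.mean()
    sigma = pixels.std()
    z = (pixels - mu) / sigma
    skewness = (z ** 3).mean()
    kurtosis = (z ** 4).mean()          # non-excess form (normal data -> 3.0)
    infection_ratio = infection_mask.sum() / infection_mask.size
    return np.array([infection_ratio, mu, sigma, skewness, kurtosis])

# toy 4x4 "CT slice" and a mask marking the pixels the segmenter
# would label as infected (both invented for illustration)
img = np.array([[0, 1, 2, 3]] * 4)
mask = np.array([[0, 0, 1, 1]] * 4)
feats = statistical_texture_features(img, mask)
```

In the real system these statistics would be computed per scan and concatenated with the GLCM and GLRLM features before being passed to the classifiers.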

2.
Assist Technol ; 34(2): 129-139, 2022 03 04.
Article in English | MEDLINE | ID: mdl-31910146

ABSTRACT

There are over 466 million people in the world with disabling hearing loss. People with severe-to-profound hearing impairment need to lipread or use sign language, even with hearing aids. Assistive technologies play a vital role in helping these people interact efficiently with their environment. Deaf drivers are not currently able to take full advantage of voice-based navigation applications. In this paper, we describe research aimed at developing an assistive device that (1) recognizes voice-stream navigation instructions from GPS-based navigation applications, and (2) maps each voiced navigation instruction to a vibrotactile stimulus that can be perceived and understood by deaf drivers. A 13-element feature vector is extracted from each voice stream and classified into one of six categories, where each category represents a unique navigation instruction. The classification of the feature vectors is done using a K-Nearest-Neighbor classifier (with an accuracy of 99.05%), which was found to outperform five other classifiers. Each category is then mapped to a unique vibration pattern, which drives vibration motors in real time. A usability study was conducted with ten participants. Three different alternatives were tested to find the best body locations for mounting the vibration motors. The solution ultimately chosen was two sets of five vibration motors, where each set was mounted on a bracelet. Ten drivers were asked to rate the proposed device (based on eight different factors) after they used the assistive device on eight driving routes. The overall mean rating across all eight factors was 4.67 (out of 5). This indicates that the proposed assistive device was seen as useful and effective.
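The classification step, a K-Nearest-Neighbor vote over 13-element feature vectors with one class per navigation instruction, can be sketched in pure NumPy. The six synthetic clusters below stand in for real voice-stream features, which are not given in the abstract:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify a 13-element feature vector by majority vote among its
    k nearest training vectors (Euclidean distance)."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())

rng = np.random.default_rng(0)
# six instruction categories, each clustered around a distinct 13-D centroid
centroids = rng.normal(size=(6, 13)) * 10
train_X = np.vstack([c + rng.normal(scale=0.5, size=(20, 13)) for c in centroids])
train_y = np.repeat(np.arange(6), 20)

# a new voice-stream feature vector drawn near category 4's centroid
query = centroids[4] + rng.normal(scale=0.5, size=13)
pred = knn_predict(train_X, train_y, query, k=5)
```

The predicted category index would then be looked up in a table of vibration patterns and sent to the motor driver.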


Subject(s)
Automobile Driving , Hearing Aids , Hearing Loss , Self-Help Devices , Humans , Sign Language
3.
Artif Intell Med ; 112: 102018, 2021 02.
Article in English | MEDLINE | ID: mdl-33581830

ABSTRACT

BACKGROUND AND OBJECTIVE: The novel coronavirus disease 2019 (COVID-19) is considered a pandemic by the World Health Organization (WHO). As of April 3, 2020, there were 1,009,625 reported confirmed cases, and 51,737 reported deaths. Doctors have been faced with a myriad of patients who present with many different symptoms. This raises two important questions. What are the common symptoms, and what is their relative importance? METHODS: A non-structured and incomplete COVID-19 dataset of 14,251 confirmed cases was preprocessed. This produced a complete and organized COVID-19 dataset of 738 confirmed cases. Six different feature selection algorithms were then applied to this new dataset. Five of these algorithms have been proposed earlier in the literature. The sixth is a novel algorithm proposed by the authors, called Variance Based Feature Weighting (VBFW), which not only ranks the symptoms (based on their importance) but also assigns a quantitative importance measure to each symptom. RESULTS: For our COVID-19 dataset, the five different feature selection algorithms provided different rankings for the most important top-five symptoms. They even selected different symptoms for inclusion within the top five. This is because each of the five algorithms ranks the symptoms based on different data characteristics, and each has its own advantages and disadvantages. However, when all five rankings were aggregated (using two different aggregating methods), they produced two identical rankings of the five most important COVID-19 symptoms. From most important to least important, they were: Fever/Cough, Fatigue, Sore Throat, and Shortness of Breath. (Fever and cough were ranked equally in both aggregations.)
Meanwhile, the novel Variance Based Feature Weighting algorithm chose the same top five symptoms but ranked fever much higher than cough, based on its quantitative importance measure for each symptom (Fever - 75 %, Cough - 39.8 %, Fatigue - 16.5 %, Sore Throat - 10.8 %, and Shortness of Breath - 6.6 %). Moreover, the proposed VBFW method achieved an accuracy of 92.1 % when used to build a one-class SVM model, and an NDCG@5 of 100 %. CONCLUSIONS: Based on the dataset and the feature selection algorithms employed here, Fever, Cough, Fatigue, Sore Throat, and Shortness of Breath are important symptoms of COVID-19. The VBFW algorithm also indicates that Fever and Cough were especially indicative of COVID-19 for the confirmed cases documented in our database.
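The abstract does not specify which two aggregation methods were used. As a generic illustration of how five per-algorithm rankings can be fused into one consensus ranking, here is a Borda-style mean-rank aggregation; the individual rankings below are invented, since the real per-algorithm rankings are not given in the abstract:

```python
from collections import defaultdict

# Five hypothetical symptom rankings (best first), one per feature
# selection algorithm -- illustrative only, not the paper's actual output.
rankings = [
    ["fever", "cough", "fatigue", "sore_throat", "dyspnea"],
    ["cough", "fever", "sore_throat", "fatigue", "dyspnea"],
    ["fever", "fatigue", "cough", "dyspnea", "sore_throat"],
    ["cough", "fever", "fatigue", "sore_throat", "dyspnea"],
    ["fever", "cough", "dyspnea", "fatigue", "sore_throat"],
]

def borda_aggregate(rankings):
    """Order items by their mean position across rankings (lower = better)."""
    positions = defaultdict(list)
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            positions[item].append(pos)
    return sorted(positions, key=lambda s: sum(positions[s]) / len(positions[s]))

consensus = borda_aggregate(rankings)
```

With these example inputs the disagreeing rankings still collapse to a single consensus order, which mirrors how the paper's five divergent rankings produced identical aggregated top-five lists.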


Subject(s)
COVID-19/physiopathology , Computational Biology/methods , Algorithms , COVID-19/epidemiology , COVID-19/virology , Cough/physiopathology , Dyspnea/physiopathology , Fatigue/physiopathology , Fever/physiopathology , Humans , Pandemics , Pharyngitis/physiopathology , SARS-CoV-2/isolation & purification
4.
Biomed Signal Process Control ; 62: 102149, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32834831

ABSTRACT

The world has been facing the challenge of COVID-19 since the end of 2019. It is expected that the world will need to battle the COVID-19 pandemic with precautionary measures until an effective vaccine is developed. This paper proposes a real-time COVID-19 detection and monitoring system. The proposed system would employ an Internet of Things (IoT) framework to collect real-time symptom data from users to identify suspected coronavirus cases early, to monitor the treatment response of those who have already recovered from the virus, and to understand the nature of the virus by collecting and analyzing relevant data. The framework consists of five main components: Symptom Data Collection and Uploading (using wearable sensors), Quarantine/Isolation Center, Data Analysis Center (which uses machine learning algorithms), Health Physicians, and Cloud Infrastructure. To quickly identify potential coronavirus cases from this real-time symptom data, this work evaluates eight machine learning algorithms, namely Support Vector Machine (SVM), Neural Network, Naïve Bayes, K-Nearest Neighbor (K-NN), Decision Table, Decision Stump, OneR, and ZeroR. An experiment was conducted to test these eight algorithms on a real COVID-19 symptom dataset, after selecting the relevant symptoms. The results show that five of the eight algorithms achieved an accuracy of more than 90 %. Based on these results, we believe that real-time symptom data would allow these five algorithms to provide effective and accurate identification of potential cases of COVID-19, and the framework would then document the treatment response for each patient who has contracted the virus.
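Among the eight algorithms, OneR is simple enough to sketch completely: it keeps the single symptom whose value-to-majority-class rule best fits the training data, which also makes it a useful baseline against the stronger classifiers. The toy symptom matrix below is invented for illustration:

```python
import numpy as np

def one_r(X, y):
    """OneR: for each feature, build a value -> majority-class rule,
    then return the (feature index, rule, accuracy) of the best rule."""
    best = (None, None, -1.0)
    for f in range(X.shape[1]):
        rule = {}
        for v in np.unique(X[:, f]):
            classes, counts = np.unique(y[X[:, f] == v], return_counts=True)
            rule[v] = classes[counts.argmax()]
        acc = np.mean([rule[v] == c for v, c in zip(X[:, f], y)])
        if acc > best[2]:
            best = (f, rule, acc)
    return best

# toy binary symptom matrix: columns = [fever, cough, fatigue]
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1],
              [0, 1, 0], [1, 0, 1], [0, 0, 0]])
y = np.array([1, 1, 0, 0, 1, 0])   # 1 = suspected COVID-19 case
feat, rule, acc = one_r(X, y)
```

In this contrived example the fever column alone separates the classes, so OneR selects it with perfect training accuracy; on real symptom data OneR typically serves as the floor that SVM or a neural network must beat.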

5.
Comput Methods Programs Biomed ; 188: 105301, 2020 May.
Article in English | MEDLINE | ID: mdl-31911333

ABSTRACT

BACKGROUND AND OBJECTIVE: Osteoporosis is a disease characterized by a decrease in bone density. It is often associated with fractures and severe pain. Previous studies have shown a high correlation between the density of the bone in the hip and in the mandibular bone in the jaw. This suggests that dental radiographs might be useful for detecting osteoporosis. Use of dental radiographs for this purpose would simplify early detection of osteoporosis. However, dental radiographs are not normally examined by radiologists. This paper explores the use of 13 different feature extractors for detection of reduced bone density in dental radiographs. METHODS: The computed feature vectors are processed with a Self-Organizing Map and Learning Vector Quantization, as well as Support Vector Machines, to produce a set of 26 predictive models. RESULTS: The results show that the models based on the Self-Organizing Map and Learning Vector Quantization using the Gabor Filter, Edge Orientation Histogram, Haar Wavelet, and Steerable Filter feature extractors outperform the other 22 models in detecting osteoporosis. The proposed Gabor-based algorithm achieved an accuracy of 92.6%, a sensitivity of 97.1%, and a specificity of 86.4%. CONCLUSIONS: The oriented edges and textures in the upper and lower jaw regions are useful for distinguishing normal patients from patients with osteoporosis.
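The Edge Orientation Histogram extractor mentioned above can be approximated with NumPy gradients: bin the gradient orientations of the radiograph, weighted by gradient magnitude. This is a simplified sketch, not the paper's exact descriptor, and the striped test image is synthetic:

```python
import numpy as np

def edge_orientation_histogram(img, bins=8):
    """Magnitude-weighted histogram of gradient orientations in [0, pi) --
    a simplified Edge Orientation Histogram descriptor."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold orientations into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    total = hist.sum()
    return hist / total if total else hist

# synthetic image of vertical stripes: every edge is vertical, so all
# gradient energy should land in the first (near-zero orientation) bin
img = np.tile([0.0, 1.0], (8, 4))
h = edge_orientation_histogram(img)
```

For the osteoporosis task, such a histogram computed over the jaw regions would form (part of) the feature vector fed to the SOM/LVQ or SVM models.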


Subject(s)
Bone Density , Diagnosis, Computer-Assisted , Mandibular Fractures/diagnostic imaging , Osteoporosis/classification , Osteoporosis/diagnostic imaging , Pattern Recognition, Automated , Radiography, Panoramic , Absorptiometry, Photon , Algorithms , False Positive Reactions , Humans , Image Processing, Computer-Assisted , Machine Learning , Osteoporotic Fractures/diagnostic imaging , Radiographic Image Interpretation, Computer-Assisted/methods , Sensitivity and Specificity , Support Vector Machine , Wavelet Analysis
6.
Assist Technol ; 30(3): 119-132, 2018.
Article in English | MEDLINE | ID: mdl-28152342

ABSTRACT

Sign language can be used to facilitate communication with and between deaf or hard of hearing (Deaf/HH) people. With the advent of video streaming applications in smart TVs and mobile devices, it is now possible to use sign language to communicate over worldwide networks. In this article, we develop a prototype assistive device for real-time speech-to-sign translation. The proposed device aims at enabling Deaf/HH people to access and understand materials delivered in mobile streaming videos, through pipelined and parallel processing for real-time translation, and through eye-tracking-based user-satisfaction detection that supports dynamic learning to improve speech-to-sign translation. We conducted two experiments to evaluate the performance and usability of the proposed assistive device, with nine deaf participants. Our real-time performance evaluation shows that the addition of viewer attention-based feedback reduced translation error rates by 16% (per the sign error rate [SER] metric) and increased translation accuracy by 5.4% (per the bilingual evaluation understudy [BLEU] metric) when compared to a non-real-time baseline system without these features. The usability study results indicate that our assistive device was also pleasant and satisfying to deaf users, and it may contribute to greater engagement of deaf people in day-to-day activities.
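The pipelined/parallel processing the device relies on can be illustrated with a minimal two-stage pipeline, in which recognition and sign rendering run concurrently on successive items. The stage functions here are trivial placeholders, not the actual recognizer or renderer:

```python
import queue
import threading

def pipeline(items, stage1, stage2):
    """Run stage1 (e.g. speech recognition) in a worker thread while
    stage2 (e.g. sign rendering) consumes its output concurrently."""
    q = queue.Queue(maxsize=4)   # bounded queue gives backpressure
    out = []

    def producer():
        for item in items:
            q.put(stage1(item))
        q.put(None)              # sentinel: no more items

    t = threading.Thread(target=producer)
    t.start()
    while (item := q.get()) is not None:
        out.append(stage2(item))
    t.join()
    return out

words = ["turn", "left", "ahead"]
signs = pipeline(words, str.upper, lambda w: f"<sign:{w}>")
```

The point of the structure is that stage2 can render sign N while stage1 is already recognizing word N+1, which is what keeps end-to-end latency low enough for live streaming video.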


Subject(s)
Artificial Intelligence , Communication Aids for Disabled , Persons With Hearing Impairments/rehabilitation , Sign Language , Smartphone , Adolescent , Adult , Algorithms , Female , Humans , Internet , Male , Natural Language Processing , Patient Satisfaction , Video Recording , Young Adult
7.
Comput Biol Med ; 70: 139-147, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26829706

ABSTRACT

The production and distribution of videos and animations on gaming and self-authoring websites are booming. However, given this rise in self-authoring, there is increased concern for the health and safety of people who suffer from a neurological disorder called photosensitivity or photosensitive epilepsy. These people can suffer seizures from viewing video with hazardous content. This paper presents a spatiotemporal pattern detection algorithm that can detect hazardous content in streaming video in real time. A tool is developed for producing test videos with hazardous content, and then those test videos are used to evaluate the proposed algorithm, as well as an existing post-processing tool that is currently being used for detecting such patterns. To perform the detection in real time, the proposed algorithm was implemented on a dual core processor, using a pipelined/parallel software architecture. Results indicate that the proposed method provides better detection performance, allowing for the masking of seizure inducing patterns in real time.
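A crude stand-in for such a detector is to count large luminance reversals across a window of frames, in the spirit of the common "three flashes per second" accessibility guideline. The per-frame mean-luminance representation and the 0.1 threshold below are assumptions for illustration, not the paper's spatiotemporal algorithm:

```python
import numpy as np

def flash_count(frame_means, delta=0.1):
    """Count luminance reversals (direction changes between consecutive
    frame-to-frame transitions larger than `delta`) in a clip."""
    diffs = np.diff(frame_means)
    big = diffs[np.abs(diffs) >= delta]          # keep only large transitions
    # a "flash" is a sign change between consecutive large transitions
    return int(np.sum(np.signbit(big[1:]) != np.signbit(big[:-1])))

# strobing clip: mean luminance alternates 0.1 <-> 0.9 over 8 frames
strobe = np.array([0.1, 0.9] * 4)
# steady fade: same overall brightness change, but no reversals
steady = np.linspace(0.1, 0.9, 8)
n_strobe, n_steady = flash_count(strobe), flash_count(steady)
```

A real-time detector would apply this kind of test per spatial region rather than to the whole frame, and mask the offending region once the count exceeds a safety threshold, as the proposed algorithm does.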


Subject(s)
Algorithms , Epilepsy, Reflex/diagnosis , Epilepsy, Reflex/physiopathology , Diagnosis, Differential , Female , Humans , Male