Results 1 - 20 of 579

1.
Emerging Science Journal ; 7(Special Issue):1-16, 2023.
Article in English | Scopus | ID: covidwho-2091528

ABSTRACT

The COVID-19 pandemic has created a worldwide healthcare crisis. Convolutional Neural Networks (CNNs) have recently been used with encouraging results to help detect COVID-19 from chest X-ray images. However, to generalize well to unseen data, CNNs require large labeled datasets. Due to the lack of publicly available COVID-19 datasets, most CNNs apply various data augmentation techniques during training. However, there has not been a thorough statistical analysis of how data augmentation operations affect classification performance for COVID-19 detection. In this study, a fractional factorial experimental design is used to examine the impact of basic augmentation methods on COVID-19 detection. This design makes it possible to identify which particular data augmentation techniques and interactions have a statistically significant impact on classification performance, whether positive or negative. Using the CoroNet architecture and two publicly available COVID-19 datasets, the most common basic augmentation methods in the literature are evaluated. The results of the experiments demonstrate that the zoom range and height shift methods positively impact the model's accuracy on dataset 1. The performance on dataset 2 is unaffected by any of the data augmentation operations. Additionally, a new state-of-the-art performance is achieved on both datasets by training CoroNet with the best data augmentation values found using the experimental design. Specifically, on dataset 1, 97% accuracy, 93% precision, and 97.7% recall were attained, while on dataset 2, 97% accuracy, 97% precision, and 97.6% recall were achieved. These results indicate that analyzing the effects of data augmentation on a particular task and dataset is essential for the best performance. © 2023 by the authors. Licensee ESJ, Italy.
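
For orientation, a minimal sketch of a half-fraction screening design over basic augmentation operations follows; the factor names (zoom, height shift, horizontal flip) and the torchvision parameter values are illustrative assumptions, not the paper's exact CoroNet setup.

```python
# Illustrative sketch only: a 2^(3-1) half-fraction screening design over three
# augmentation factors. Factor names and ranges are assumptions, not the paper's values.
from itertools import product
from torchvision import transforms

FACTORS = ["zoom", "height_shift", "horizontal_flip"]

def build_pipeline(zoom, height_shift, horizontal_flip):
    """Compose a torchvision pipeline for one factor combination."""
    ops = []
    if zoom:
        ops.append(transforms.RandomAffine(degrees=0, scale=(0.9, 1.1)))
    if height_shift:
        ops.append(transforms.RandomAffine(degrees=0, translate=(0.0, 0.1)))
    if horizontal_flip:
        ops.append(transforms.RandomHorizontalFlip(p=0.5))
    ops.append(transforms.ToTensor())
    return transforms.Compose(ops)

# Half-fraction with defining relation I = ABC: keep the 4 of 8 runs whose
# +/-1 level product is +1 (an odd number of factors switched on).
runs = [levels for levels in product([False, True], repeat=3)
        if sum(levels) % 2 == 1]

for levels in runs:
    pipeline = build_pipeline(*levels)
    print(dict(zip(FACTORS, levels)), "->", len(pipeline.transforms), "ops")
    # Train the CNN with `pipeline`, record accuracy, then fit an effects/ANOVA
    # model on the recorded responses to find statistically significant factors.
```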

2.
Chemometr Intell Lab Syst ; 231: 104695, 2022 Dec 15.
Article in English | MEDLINE | ID: covidwho-2082386

ABSTRACT

This paper aims to diagnose COVID-19 from chest X-ray (CXR) scan images in a deep learning-based system. First, the COVID-19 Chest X-Ray Dataset is used to semantically segment the lung regions in CXR images. The DeepLabV3+ architecture is trained using the lung masks in this dataset. The trained architecture is then fed with images from the COVID-19 Radiography Database. Several image preprocessing steps are applied to improve the output images. As a result, lung regions are successfully segmented from CXR images. The next step is feature extraction and classification. Features are extracted with a modified AlexNet (mAlexNet), and a Support Vector Machine (SVM) is used for classification. As a result, the three classes Normal, Viral Pneumonia, and COVID-19 are classified with 99.8% success. The classification results show that the proposed method is superior to previous state-of-the-art methods.
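
A minimal sketch of the deep-features-plus-SVM stage follows, assuming feature vectors have already been extracted by a modified AlexNet; random arrays stand in for the real features and labels.

```python
# Minimal sketch of the "deep features + SVM" stage, assuming the modified AlexNet
# has already produced a fixed-length feature vector per (segmented) CXR image.
# Random arrays stand in for those features; 0/1/2 = Normal / Viral Pneumonia / COVID-19.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 4096))   # stand-in for mAlexNet feature vectors
y = rng.integers(0, 3, size=600)   # stand-in for the three class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```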

3.
Sensors (Basel) ; 22(20)2022 Oct 19.
Article in English | MEDLINE | ID: covidwho-2082155

ABSTRACT

COVID-19 has infected millions of people worldwide over the past few years. The main technique used for COVID-19 detection is reverse transcription polymerase chain reaction (RT-PCR), which is expensive, sensitive, and requires medical expertise. X-ray imaging is an alternative and more accessible technique. This study aimed to improve detection accuracy in order to create a computer-aided diagnostic tool. Combining artificial intelligence techniques with radiological imaging can help detect different diseases. This study proposes a technique for the automatic detection of COVID-19 and other chest-related diseases from digital chest X-ray images of suspected patients by applying transfer learning (TL) algorithms. For this purpose, two balanced datasets, Dataset-1 and Dataset-2, were created by combining four public databases and collecting images from recently published articles. Dataset-1 consisted of 6000 chest X-ray images with 1500 per class. Dataset-2 consisted of 7200 images with 1200 per class. To train and test the model, TL with nine pretrained convolutional neural networks (CNNs) was used, with augmentation as a preprocessing step. The network was trained for five classification settings: a two-class classifier (normal and COVID-19); a three-class classifier (normal, COVID-19, and viral pneumonia); a four-class classifier (normal, viral pneumonia, COVID-19, and tuberculosis (Tb)); a five-class classifier (normal, bacterial pneumonia, COVID-19, Tb, and pneumothorax); and a six-class classifier (normal, bacterial pneumonia, COVID-19, viral pneumonia, Tb, and pneumothorax). For two, three, four, five, and six classes, our model achieved maximum accuracies of 99.83%, 98.11%, 97.00%, 94.66%, and 87.29%, respectively.
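
A hedged sketch of the transfer-learning setup follows; ResNet50 and the layer choices here are illustrative stand-ins for the nine pretrained CNNs evaluated in the paper.

```python
# Hedged sketch of transfer learning: a pretrained backbone with its classification
# head replaced for N classes. ResNet50 is used purely as an example backbone.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(n_classes: int, freeze_backbone: bool = True) -> nn.Module:
    # weights=None keeps this sketch offline; in practice one would pass
    # weights=models.ResNet50_Weights.IMAGENET1K_V2 to start from ImageNet features.
    net = models.resnet50(weights=None)
    if freeze_backbone:
        for p in net.parameters():
            p.requires_grad = False
    net.fc = nn.Linear(net.fc.in_features, n_classes)  # new head stays trainable
    return net

# e.g. the six-class setting: normal, bacterial pneumonia, COVID-19,
# viral pneumonia, tuberculosis, pneumothorax
model = build_classifier(n_classes=6)
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 6])
```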


Subject(s)
COVID-19 , Deep Learning , Pneumonia, Bacterial , Pneumonia, Viral , Pneumothorax , Humans , COVID-19/diagnosis , SARS-CoV-2 , Artificial Intelligence
4.
Bioengineering (Basel) ; 9(10)2022 Oct 16.
Article in English | MEDLINE | ID: covidwho-2071199

ABSTRACT

Respiratory ailments are a very serious health issue and can be life-threatening, especially for patients with COVID-19. Respiration rate (RR) is a vital health indicator for patients, and any abnormality in this metric indicates a deterioration in health. Hence, continuous monitoring of RR can act as an early warning. Despite this, RR monitoring equipment is generally provided only to intensive care unit (ICU) patients. Recent studies have established the feasibility of using photoplethysmogram (PPG) signals to estimate RR. This paper proposes a deep-learning-based end-to-end solution for estimating RR directly from the PPG signal. The system was evaluated on two popular public datasets: VORTAL and BIDMC. A lightweight model, ConvMixer, outperformed all of the other deep neural networks. The model provided a root mean squared error (RMSE), mean absolute error (MAE), and correlation coefficient (R) of 1.75 breaths per minute (bpm), 1.27 bpm, and 0.92, respectively, for VORTAL, while these metrics were 1.20 bpm, 0.77 bpm, and 0.92, respectively, for BIDMC. The authors also showed how fine-tuning on a small subset could increase the performance of the model on an out-of-distribution dataset. In the fine-tuning experiments, the models produced an average R of 0.81. Hence, this lightweight model can be deployed to mobile devices for real-time monitoring of patients.
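
A rough 1D ConvMixer-style regressor for estimating RR from a PPG window is sketched below; the depth, width, and patch size are assumptions rather than the authors' configuration.

```python
# Illustrative 1D ConvMixer-style regressor for RR-from-PPG, loosely following the
# ConvMixer recipe (patch embedding, depthwise + pointwise conv blocks with residuals).
# Dimensions, depth, and patch size are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class Residual(nn.Module):
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def conv_mixer_1d(dim=64, depth=6, kernel_size=9, patch_size=8):
    return nn.Sequential(
        nn.Conv1d(1, dim, kernel_size=patch_size, stride=patch_size),  # patch embedding
        nn.GELU(),
        nn.BatchNorm1d(dim),
        *[nn.Sequential(
            Residual(nn.Sequential(                       # depthwise (token mixing)
                nn.Conv1d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm1d(dim))),
            nn.Conv1d(dim, dim, kernel_size=1),           # pointwise (channel mixing)
            nn.GELU(),
            nn.BatchNorm1d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Linear(dim, 1),                                # respiration rate in bpm
    )

model = conv_mixer_1d()
ppg = torch.randn(4, 1, 1024)    # batch of 4 PPG windows, 1024 samples each
print(model(ppg).shape)          # torch.Size([4, 1])
```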

5.
International Journal of Noncommunicable Diseases ; 6(5):69-75, 2021.
Article in English | Web of Science | ID: covidwho-2071983

ABSTRACT

Context: Efficiently diagnosing COVID-19-related pneumonia is of high clinical relevance. Point-of-care ultrasound allows detecting lung conditions via patterns of artifacts, such as clustered B-lines. Aims: The aim is to classify lung ultrasound videos into three categories, normal (containing A-lines), interstitial abnormalities (B-lines), and confluent abnormalities (pleural effusion/consolidations), using a semi-automated approach. Settings and Design: This was a prospective observational study using 1530 videos from 300 patients presenting with clinical suspicion of COVID-19 pneumonia, in which labels assigned by human experts were compared against machine learning predictions. Subjects and Methods: Experts labeled each video into one of the three categories. The labels were used to train a neural network to perform the same classification automatically. The proposed neural network uses a unique two-stream approach, one stream based on raw red-green-blue (RGB) input and the other consisting of velocity information. In this manner, both spatial and temporal ultrasound features can be captured. Statistical Analysis Used: A 5-fold cross-validation approach was used for evaluation. Cohen's kappa and Gwet's AC1 metrics were calculated to measure agreement with the human rater for the three categories. Cases were also divided into interstitial abnormalities (B-lines) and other (A-lines and confluent abnormalities), and precision-recall and receiver operating characteristic (ROC) curves were created. Results: This study demonstrated robustness in determining interstitial abnormalities, with a high F1 score of 0.86. For human rater agreement on interstitial abnormalities versus the rest, the proposed method obtained a Gwet's AC1 of 0.88. Conclusions: The study demonstrates the use of a deep learning approach to robustly classify artifacts contained in lung ultrasound videos.

6.
Ieee Access ; 10:98724-98736, 2022.
Article in English | Web of Science | ID: covidwho-2070263

ABSTRACT

During the COVID-19 pandemic, the wide use of social media platforms made the spread of fake news easy. Considering the problematic consequences of fake news, efforts have been made toward timely detection of fake news using machine learning and deep learning models. Such works focus on model optimization, while the feature engineering and extraction part remains under-explored. Therefore, the primary objective of this study is to investigate the impact of features on obtaining high performance. For this purpose, this study analyzes the impact of different subset feature selection techniques on the performance of models for fake news detection. Principal component analysis and Chi-square are investigated for feature selection with machine learning and pre-trained deep learning models. Additionally, the influence of different preprocessing steps on fake news detection is also analyzed. Results obtained from comprehensive experiments reveal that the extra trees classifier performs best, with 0.9474 accuracy, when trained on the combination of term frequency-inverse document frequency (TF-IDF) and bag-of-words features. Models tend to yield poor results if no preprocessing or only partial preprocessing is carried out. A convolutional neural network, a long short-term memory network, a residual neural network (ResNet), and InceptionV3 show marginally lower performance than the extra trees classifier. The results also reveal that using subset features helps machine learning models achieve robustness.
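
A minimal sklearn sketch of the best-performing classical pipeline described above (TF-IDF plus bag-of-words features, chi-square subset selection, extra trees classifier) follows; the toy texts and labels are placeholders.

```python
# Sketch of a TF-IDF + bag-of-words feature union with chi-square feature selection
# and an extra trees classifier. The toy texts/labels below are placeholders.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.ensemble import ExtraTreesClassifier

texts = ["vaccine rumor spreads online", "official guidance released today",
         "miracle cure claim debunked", "case counts updated by ministry"]
labels = [1, 0, 1, 0]   # 1 = fake, 0 = real (placeholder data)

pipeline = Pipeline([
    ("features", FeatureUnion([
        ("tfidf", TfidfVectorizer()),
        ("bow", CountVectorizer()),
    ])),
    ("select", SelectKBest(chi2, k=10)),        # chi-square subset selection
    ("clf", ExtraTreesClassifier(n_estimators=100, random_state=0)),
])

pipeline.fit(texts, labels)
print(pipeline.predict(["new miracle cure spreads online"]))
```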

7.
23rd IEEE International Conference on Information Reuse and Integration for Data Science, IRI 2022 ; : 303-308, 2022.
Article in English | Scopus | ID: covidwho-2063272

ABSTRACT

COVID-19 is a lethal viral disease that attacks the respiratory system. This contagious disease started spreading around the world in December 2019. A computed tomography (CT) scan of the chest is a trusted and recommended imaging tool to detect COVID-19. Although manual CT image examination is an option, it takes significant time for a technician to analyze the images. This process can be automated with deep Convolutional Neural Networks (CNNs), and applying these networks to CT image analysis can be very successful. Several works have focused on detecting COVID-19 by applying CNNs with different algorithms to distinguish COVID-19 patients from normal or pneumonia patients. The models proposed in these works mostly use limited, small datasets, which can lead to generalization issues or biased predictions. In this paper, we explore three methods for training a classifier for the COVID-19 detection task using a large-scale public dataset. In the first method, we rely on CT images alone to train the CNN models. In the second method, we use pre-trained CNN models for image feature extraction and use those features to train classical machine learning models. In the last method, we propose an end-to-end model that takes both the image and metadata such as age and gender, to examine the impact of metadata on the COVID-19 detection task. We conclude that adding metadata improves accuracy. © 2022 IEEE.
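
A hedged sketch of the third method, fusing CT image features with patient metadata, follows; the ResNet18 backbone and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch of an end-to-end model that fuses CT image features with patient
# metadata (age, sex). Backbone and layer sizes are illustrative choices only.
import torch
import torch.nn as nn
from torchvision import models

class ImageMetadataNet(nn.Module):
    def __init__(self, n_classes: int = 2, n_meta: int = 2):
        super().__init__()
        backbone = models.resnet18(weights=None)      # any CNN backbone works here
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                   # expose pooled image features
        self.backbone = backbone
        self.meta = nn.Sequential(nn.Linear(n_meta, 16), nn.ReLU())
        self.head = nn.Linear(feat_dim + 16, n_classes)

    def forward(self, image, metadata):
        img_feat = self.backbone(image)               # (B, feat_dim)
        meta_feat = self.meta(metadata)               # (B, 16)
        return self.head(torch.cat([img_feat, meta_feat], dim=1))

model = ImageMetadataNet()
images = torch.randn(4, 3, 224, 224)
meta = torch.tensor([[63.0, 1.0], [35.0, 0.0], [50.0, 1.0], [28.0, 0.0]])  # age, sex
print(model(images, meta).shape)   # torch.Size([4, 2])
```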

8.
2nd International Conference on Computing and Machine Intelligence, ICMI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2063260

ABSTRACT

COVID-19 is a contagious disease caused by a virus that first emerged in China in December 2019. It mainly affects the lungs and the respiratory system. The virus has severely impacted daily life and the economy, posing a management challenge for governments worldwide. Early diagnosis of COVID-19 could help with treatment planning and disease prevention strategies. In this study, we use CT-scanned images of the lungs to show how COVID-19 may be identified using transfer learning models and investigate which model achieves the best and fastest results. Our primary focus was to detect structural anomalies to distinguish among COVID-19 positive, negative, and normal cases with deep learning methods. Every model was trained with and without transfer learning, and results were compared across various versions of DenseNet and EfficientNet. Optimal results were obtained using DenseNet201 (99.75%). When transfer learning was applied, all models produced almost similar results. © 2022 IEEE.

9.
10th IEEE International Conference on Healthcare Informatics, ICHI 2022 ; : 143-150, 2022.
Article in English | Scopus | ID: covidwho-2063248

ABSTRACT

Coughing is a cardinal symptom of pulmonary and respiratory diseases, such as asthma, tuberculosis, chronic obstructive pulmonary disease, as well as coronavirus disease (COVID-19). Cough type, strength, and frequency are indicators of disease progression. Thus, several studies have focused on quantitative reporting of coughs, recorded with a smartphone and classified with a sound classifier, to provide a cough diary for a patient. However, those approaches report any cough, even coughs not produced by the patient. Thus, in this study, we aim not only to detect cough episodes but also cough events, and to count only coughs produced by the particular patient. Accordingly, we report on an end-to-end solution for a patient cough diary consisting of three convolutional neural networks. The first recognizes respiratory sounds, including coughing, by multi-class classification. The second validates whether the cough was produced by the patient; it is based on a Siamese network trained with a triplet loss. Finally, individual cough events are detected by a cough onset classifier. For these three recognition models, we achieved accuracies of 94%, 74%, and 94%, respectively. Furthermore, we explored the human-level performance of cough source validation through a field experiment involving 10 subjects. Our source validation model slightly outperformed the human cohort in the cough memorization task. © 2022 IEEE.
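
A minimal sketch of the triplet-loss cough-source validation idea follows; the small embedding network and log-mel-style input shape are assumptions, not the authors' architecture.

```python
# Minimal sketch of cough-source validation: an embedding network trained with
# triplet loss so that coughs from the same person sit close in embedding space.
# The embedding network and input shape (e.g. log-mel patches) are assumptions.
import torch
import torch.nn as nn

embedder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 64),                     # 64-dim cough embedding
)

triplet = nn.TripletMarginLoss(margin=1.0)
opt = torch.optim.Adam(embedder.parameters(), lr=1e-3)

# anchor/positive: coughs from the enrolled patient; negative: someone else's cough
anchor = torch.randn(8, 1, 64, 64)
positive = torch.randn(8, 1, 64, 64)
negative = torch.randn(8, 1, 64, 64)

loss = triplet(embedder(anchor), embedder(positive), embedder(negative))
loss.backward()
opt.step()
print("triplet loss:", float(loss))

# At test time, a new cough would be attributed to the patient if its embedding
# lies within a distance threshold of the patient's enrolled cough embeddings.
```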

10.
2022 IEEE International Conference on Electrical, Computer, and Energy Technologies, ICECET 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2063238

ABSTRACT

The novel coronavirus began spreading around the world in December 2019, and millions of people have been infected. The viral illness has affected daily life routines and the economy in many countries. According to clinical studies, the disease directly attacks the lungs and disturbs the respiratory system. X-ray and CT scans are the main imaging techniques used to detect the disease; however, X-ray scans cost less than CT scans. In resource-limited settings, deep learning plays a key role in diagnosing COVID-19 with the help of X-ray scans. This study proposes a new transfer learning approach based on a convolutional neural network (CNN). We used four classes during the experimental process: COVID-19, pneumonia, lung opacity, and viral pneumonia. We also compared our proposed model with other transfer learning-based techniques. Our proposed COVID-TL model attained the best classification results. The proposed model is a beneficial tool for radiologists to obtain early diagnosis results and help patients in the early stages of the disease. © 2022 IEEE.

11.
2022 IEEE International Conference on Electrical, Computer, and Energy Technologies, ICECET 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2063237

ABSTRACT

At present, COVID-19 poses a serious threat to students, doctors, scientists, and governments all around the world. It is caused by a single-stranded RNA virus with one of the largest RNA genomes, and the virus is constantly changing through mutation; sometimes a mutation results in a new variant. Research has shown that people infected with this virus mostly suffer from lung illness, so recognizing COVID-19 from a chest X-ray is one of the best imaging-based approaches. However, other diseases such as viral pneumonia and lung opacity share common symptoms with COVID-19 and can also be detected from chest X-ray images. In this research, we propose a deep learning approach combining a Modified Convolutional Neural Network (M-CNN) and Bidirectional LSTM (BiLSTM) with a Multi-Support Vector Machine (M-SVM) classifier for detecting COVID-19, viral pneumonia, lung opacity, and normal chests. We used the COVID-19-Radiography-Dataset to assess our proposed system and compared the results with several existing systems, showing that our proposed system performs better. The classification accuracy of the proposed method is 98.67%. © 2022 IEEE.

12.
3rd International Workshop of Advances in Simplifying Medical Ultrasound, ASMUS 2022, held in Conjunction with 25th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2022 ; 13565 LNCS:23-33, 2022.
Article in English | Scopus | ID: covidwho-2059734

ABSTRACT

The need for summarizing long medical scan videos for automatic triage in Emergency Departments, and for transmitting the summarized videos for telemedicine, has gained significance during the COVID-19 pandemic. However, supervised learning schemes for summarizing videos are infeasible because manual labeling of scans for large datasets is impractical for frontline clinicians. This work presents a methodology to summarize ultrasound videos using completely unsupervised learning schemes and is validated on lung ultrasound videos. A Convolutional Autoencoder and a Transformer decoder are trained in an unsupervised reinforcement learning setup, i.e., without supervised labels anywhere in the workflow. A novel precision and recall computation for ultrasound videos is also presented; using it, precision and F1 scores of 64.36% and 35.87%, respectively, with an average video compression rate of 78%, are obtained when validated against clinically annotated cases. Although demonstrated on lung ultrasound videos, our approach can be readily extended to other imaging modalities. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
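
A short sketch of frame-level precision, recall, F1, and compression rate for a video summary follows; the frame index sets are made-up placeholders, and the paper's exact metric definition may differ.

```python
# Sketch of frame-level metrics for a summarized ultrasound video: the summary keeps
# a subset of frame indices, which is scored against clinician-marked key frames.
def summary_metrics(selected, annotated, n_frames):
    selected, annotated = set(selected), set(annotated)
    tp = len(selected & annotated)
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(annotated) if annotated else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    compression = 1.0 - len(selected) / n_frames
    return precision, recall, f1, compression

selected = [3, 10, 11, 40, 41, 42, 77]       # frames kept by the summarizer (placeholder)
annotated = [10, 11, 12, 40, 41, 90]         # frames marked relevant by a clinician (placeholder)
print(summary_metrics(selected, annotated, n_frames=300))
```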

13.
25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 ; 13431 LNCS:560-570, 2022.
Article in English | EuropePMC | ID: covidwho-2059726

ABSTRACT

The COVID-19 pandemic has threatened global health. Many studies have applied deep convolutional neural networks (CNNs) to recognize COVID-19 from chest 3D computed tomography (CT). Recent works show that no model generalizes well across CT datasets from different countries, and manually designing models for specific datasets requires expertise; thus, neural architecture search (NAS), which aims to search for models automatically, has become an attractive solution. To reduce the search cost on large 3D CT datasets, most NAS-based works use the weight-sharing (WS) strategy to make all models share weights within a supernet; however, WS inevitably incurs search instability, leading to inaccurate model estimation. In this work, we propose an efficient Evolutionary Multi-objective ARchitecture Search (EMARS) framework. We propose a new objective, namely potential, which helps exploit promising models to indirectly reduce the number of models involved in weight training, thus alleviating search instability. We demonstrate that under the objectives of accuracy and potential, EMARS can balance exploitation and exploration, i.e., reduce search time and find better models. Our searched models are small and perform better than prior works on three public COVID-19 3D CT datasets. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
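
As a toy illustration of an evolutionary multi-objective search loop with accuracy and a "potential"-style second objective, the sketch below uses random stand-in scores and a simple non-dominated selection; it is not the EMARS method itself.

```python
# Toy evolutionary multi-objective search loop: candidates are scored on two
# objectives and parents are drawn from the non-dominated set. The architecture
# encoding and scoring are stand-ins, not the EMARS method.
import random

random.seed(0)

def random_arch():
    return {"depth": random.choice([4, 8, 12]),
            "width": random.choice([16, 32, 64]),
            "kernel": random.choice([3, 5, 7])}

def evaluate(arch):
    accuracy = random.random()            # placeholder for a supernet-based estimate
    potential = random.random()           # placeholder for the "potential" objective
    return accuracy, potential

def dominates(a, b):
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def mutate(arch):
    child = dict(arch)
    key = random.choice(list(child))
    child[key] = random_arch()[key]
    return child

population = [(a, evaluate(a)) for a in (random_arch() for _ in range(8))]
for _ in range(10):                       # a few generations
    front = [p for p in population
             if not any(dominates(q[1], p[1]) for q in population)]
    child = mutate(random.choice(front)[0])
    population.append((child, evaluate(child)))
    # drop the most-dominated individual to keep the population size fixed
    population.sort(key=lambda p: sum(dominates(q[1], p[1]) for q in population))
    population.pop()

print("non-dominated architectures:",
      [p[0] for p in population
       if not any(dominates(q[1], p[1]) for q in population)])
```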

14.
25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022 ; 13431 LNCS:506-516, 2022.
Article in English | Scopus | ID: covidwho-2059725

ABSTRACT

Detailed modeling of the airway tree from CT scans is important for the 3D navigation involved in endobronchial intervention, including for patients infected with the novel coronavirus. Deep learning methods have the potential for automatic airway segmentation but require large annotated datasets for training, which is difficult to obtain for a small patient population and rare cases. Due to the unique attributes of noisy COVID-19 CTs (e.g., ground-glass opacity and consolidation), vanilla 3D Convolutional Neural Networks (CNNs) trained on clean CTs are difficult to generalize to noisy CTs. In this work, a Collaborative Feature Disentanglement and Augmentation framework (CFDA) is proposed to harness the intrinsic topological knowledge of the airway tree from clean CTs, incorporated with unique bias features extracted from the noisy CTs. First, we utilize the clean CT scans and a small amount of labeled noisy CT scans to jointly acquire a bias-discriminative encoder. Feature-level augmentation is then designed to perform feature sharing and augmentation, which diversifies the training samples and increases generalization ability. Detailed evaluation on patient datasets demonstrated considerable improvements from the CFDA network. The proposed method achieves superior airway segmentation performance on COVID-19 CTs compared with other state-of-the-art transfer learning methods. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

15.
7th Workshop on Noisy User-Generated Text, W-NUT 2021 ; : 11-19, 2021.
Article in English | Scopus | ID: covidwho-2058276

ABSTRACT

Finding informative COVID-19 posts in a stream of tweets is very useful for monitoring health-related updates. Prior work has focused on a balanced data setup and on English, but informative tweets are rare, and English is only one of the many languages spoken in the world. In this work, we introduce a new dataset of 5,000 tweets for finding informative COVID-19 tweets in Danish. In contrast to prior work, which balances the label distribution, we model the problem while keeping its natural distribution. We examine how well a simple probabilistic model and a convolutional neural network (CNN) perform on this task. We find that a weighted CNN works well but is sensitive to embedding and hyperparameter choices. We hope the contributed dataset is a starting point for further work in this direction. © 2021 Association for Computational Linguistics.
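
A brief sketch of handling the natural (imbalanced) label distribution with inverse-frequency class weights follows; the counts and the tiny stand-in model are assumptions, not the paper's CNN.

```python
# Sketch of class weighting for an imbalanced "natural distribution" setup:
# losses for the rare informative class are up-weighted by inverse frequency.
# The counts and the tiny model below are placeholders.
import torch
import torch.nn as nn

counts = torch.tensor([4500.0, 500.0])           # e.g. OTHER vs INFORMATIVE tweets
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights
criterion = nn.CrossEntropyLoss(weight=weights)

model = nn.Sequential(nn.Linear(300, 64), nn.ReLU(), nn.Linear(64, 2))  # stand-in for the CNN
x = torch.randn(16, 300)                         # e.g. averaged word embeddings
y = torch.randint(0, 2, (16,))
loss = criterion(model(x), y)
loss.backward()
print("class weights:", weights.tolist(), "loss:", float(loss))
```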

16.
2022 IEEE Region 10 Symposium, TENSYMP 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2052092

ABSTRACT

Deep learning, especially Convolutional Neural Networks (CNNs), has performed very well in medical image classification over the last decade. CNNs have already shown great promise in detecting COVID-19 from chest X-ray images. Because of their three-dimensional data, chest CT scan images can provide a better understanding of the affected area through segmentation than chest X-ray images. However, chest CT scan images have not been explored enough to achieve results comparable to those obtained with X-ray images; with proper image preprocessing, fine-tuning, and optimization of the models, better results can be achieved. This work aims to contribute to filling this gap in the literature. To this end, this work explores and designs both a custom CNN model and three other models based on transfer learning: InceptionV3, ResNet50, and VGG19. The best-performing model is VGG19, with an accuracy of 98.39% and an F1 score of 98.52%. The main contributions of this work include: (i) modeling a custom CNN model and three pre-trained models based on InceptionV3, ResNet50, and VGG19; (ii) training and validating the models with a comparatively larger dataset of 1252 COVID-19 and 1230 non-COVID CT images; (iii) fine-tuning and optimizing the designed models with respect to parameters such as the number of dense layers, optimizer, learning rate, batch size, decay rate, and activation functions to achieve better results than most of the state-of-the-art literature; (iv) making the designed models public in [1] for reproducibility by the research community and further development and improvement. © 2022 IEEE.

17.
2022 IEEE Region 10 Symposium, TENSYMP 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2052085

ABSTRACT

The healthcare sector plays a significant role in industry, where a client looks for the highest level of care and services regardless of cost. However, this sector has not met society's expectations, even though it consumes a considerable percentage of the national budget. Medical experts have long been looking for smart medical solutions. This work focuses on accurate and early detection of illness from various medical images. Early detection not only aids in the development of better medications but can also save lives in the long run. Deep learning provides an excellent solution for early medical imaging in healthcare. This paper proposes a stacked BiLSTM with ResNet50 model using an AdaSwarm optimizer to classify and analyze medical illnesses from different medical image datasets. For this study, four medical datasets were used as benchmarks: Covid19, Pneumonia, Ma, and Lung Cancer. Accuracy, AUC, ROC, and F1 score performance metrics are used to evaluate the proposed model against other models. The proposed model achieves mean accuracy, AUC, ROC, and F1 scores of 98%, 99%, 97%, and 98%, respectively, on these four datasets. © 2022 IEEE.

18.
30th Signal Processing and Communications Applications Conference, SIU 2022 ; 2022.
Article in Turkish | Scopus | ID: covidwho-2052078

ABSTRACT

During the COVID-19 pandemic, the number of online learning environments, which were previously used but not widespread, has increased. Machine learning methods and studies on estimating and classifying student success from learning analytics data in these environments have gained importance in recent years. In this study, a method based on a one-hot encoding (OHE) representation of course activities, feature selection, and a convolutional neural network is proposed for the classification of student success. To demonstrate the effectiveness of the proposed method, comparative evaluations were presented against established machine learning algorithms (RF, MLP, k-NN) and the literature. Experiments on the publicly available UK Open University online learning dataset show that the proposed method improves on the classification success reported in the literature. © 2022 IEEE.
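
A short sketch of the preprocessing stage (one-hot encoding of course activity types followed by chi-square feature selection) follows; the activity names, labels, and k value are placeholders.

```python
# Sketch of the preprocessing: one-hot encoding of course activity types followed by
# feature selection; the resulting matrix would then be reshaped and fed to a CNN.
# Activity names and pass/fail labels are placeholders.
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.feature_selection import SelectKBest, chi2

activities = np.array([["forum", "quiz"], ["resource", "quiz"],
                       ["forum", "assignment"], ["resource", "assignment"]])
passed = np.array([1, 1, 0, 0])                        # pass/fail label per student

encoder = OneHotEncoder()
X = encoder.fit_transform(activities).toarray()        # OHE representation
selector = SelectKBest(chi2, k=3)
X_selected = selector.fit_transform(X, passed)

print(encoder.get_feature_names_out())
print(X_selected.shape)   # (4, 3): would be reshaped into the CNN input
```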

19.
23rd International Seminar on Intelligent Technology and Its Applications, ISITIA 2022 ; : 86-91, 2022.
Article in English | Scopus | ID: covidwho-2052044

ABSTRACT

Globally, the coronavirus disease (COVID-19) pandemic is spreading quickly. Inadequate handling of contaminated garbage and poor waste management can unintentionally transmit the virus within the company. The complete spectrum from waste generation to treatment must be re-evaluated to reduce the socio-economic and environmental impacts of waste and help achieve a sustainable society. In the area of computer vision, deep learning has begun to demonstrate high efficiency with minimal complexity. However, an open question is how various CNN architectures with transfer learning compare on the classification of medical waste images. Data augmentation and preprocessing are applied before performing a two-stage classification of medical waste. The research obtained an accuracy of 99.40%, a sensitivity of 98.18%, and a specificity of 100% without overfitting. © 2022 IEEE.

20.
4th IEEE International Conference on Power, Intelligent Computing and Systems, ICPICS 2022 ; : 84-89, 2022.
Article in English | Scopus | ID: covidwho-2052018

ABSTRACT

Facial age estimation is one of the most important tasks in the fields of face recognition and recommendation systems. Since the COVID-19 pandemic, people have been required to wear masks, which poses a challenge for traditional recognition methods. In this paper, an improved convolutional neural network architecture based on MobileNet is proposed to perform age estimation. To address the challenge of masked faces, an innovative mask generation method using facial keypoint detection is adopted: the key points of the faces are extracted in order to add synthetic masks that simulate real situations. We then compare the estimation results on the original images and the synthetic images. Our method is applied to the WIKI Face dataset, which contains more than 150,000 images, and achieves an MAE of 3.79 and 6.54 on unmasked and masked faces, respectively, demonstrating the effectiveness of the proposed model. © 2022 IEEE.
