Results 1 - 20 of 687
1.
Scand J Trauma Resusc Emerg Med ; 29(1): 145, 2021 Oct 03.
Article in English | MEDLINE | ID: covidwho-2098399

ABSTRACT

BACKGROUND: Sepsis is a life-threatening organ dysfunction and a major healthcare burden worldwide. Although sepsis is a medical emergency that requires immediate management, screening for the occurrence of sepsis is difficult. Herein, we propose a deep learning-based model (DLM) for screening sepsis using electrocardiography (ECG). METHODS: This retrospective cohort study included 46,017 patients who were admitted to two hospitals. A total of 1,548 and 639 patients had sepsis and septic shock, respectively. The DLM was developed using 73,727 ECGs from 18,142 patients, and internal validation was conducted using 7,774 ECGs from 7,774 patients. Furthermore, we conducted an external validation with 20,101 ECGs from 20,101 patients from another hospital to verify the applicability of the DLM across centers. RESULTS: During the internal and external validations, the area under the receiver operating characteristic curve (AUC) of the DLM using 12-lead ECG was 0.901 (95% confidence interval (CI), 0.882-0.920) and 0.863 (95% CI, 0.846-0.879), respectively, for screening sepsis, and 0.906 (95% CI, 0.877-0.936) and 0.899 (95% CI, 0.872-0.925), respectively, for detecting septic shock. The AUC of the DLM for detecting sepsis using 6-lead and single-lead ECGs was 0.845-0.882. A sensitivity map revealed that the QRS complex and T waves were associated with sepsis. Subgroup analysis was conducted using ECGs from 4,609 patients who were admitted with an infectious disease, and the AUC of the DLM for predicting in-hospital mortality was 0.817 (95% CI, 0.793-0.840). There was a significant difference in the prediction score of the DLM according to the presence of infection in the validation dataset (0.277 vs. 0.574, p < 0.001), including severe acute respiratory syndrome coronavirus 2 infection (0.260 vs. 0.725, p = 0.018). CONCLUSIONS: The DLM delivered reasonable performance for sepsis screening using 12-, 6-, and single-lead ECGs.
The results suggest that, with the DLM, sepsis can be screened not only with conventional ECG devices but also with a variety of portable and everyday-life ECG machines, thereby helping to prevent irreversible disease progression and mortality.
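The screening performance above is reported as AUC. As a minimal illustration of what that metric measures, the rank-based (Mann-Whitney) formulation can be computed directly from model scores; the function below is a generic, library-free sketch, not code from the study:

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    `scores` are model outputs (higher = more likely septic);
    `labels` are 1 for sepsis, 0 for control.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need examples of both classes")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0       # positive ranked above negative
            elif p == n:
                wins += 0.5       # ties count as half
    return wins / (len(pos) * len(neg))
```

A perfectly ranked set of scores yields an AUC of 1.0; random ranking tends toward 0.5.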


Subject(s)
COVID-19 , Deep Learning , Sepsis , Electrocardiography , Humans , Retrospective Studies , SARS-CoV-2 , Sepsis/diagnosis
2.
BMC Med Inform Decis Mak ; 22(1): 284, 2022 11 02.
Article in English | MEDLINE | ID: covidwho-2098335

ABSTRACT

BACKGROUND: The sensitivity of RT-PCR in diagnosing COVID-19 is only 60-70%, and chest CT plays an indispensable role in the auxiliary diagnosis of COVID-19 pneumonia, but the results of CT imaging are highly dependent on professional radiologists. AIMS: This study aimed to develop a deep learning model to assist radiologists in detecting COVID-19 pneumonia. METHODS: The total study population comprised 437 patients. The training dataset contained 26,477, 2,468, and 8,104 CT images of normal, community-acquired pneumonia (CAP), and COVID-19 patients, respectively. The validation dataset contained 14,076, 1,028, and 3,376 CT images of normal, CAP, and COVID-19 patients, respectively. The test set included 51 normal cases, 28 CAP patients, and 51 COVID-19 patients. We designed and trained a deep learning model based on U-Net and ResNet-50 to recognize normal cases, CAP, and COVID-19 patients. Moreover, the diagnoses of the deep learning model were compared with those of radiologists of different experience levels. RESULTS: In the test set, the sensitivity of the deep learning model in diagnosing normal cases, CAP, and COVID-19 patients was 98.03%, 89.28%, and 92.15%, respectively. The diagnostic accuracy of the deep learning model was 93.84%. In the validation set, the accuracy was 92.86%, which was better than that of two novice doctors (86.73% and 87.75%) and almost equal to that of two experts (94.90% and 93.88%). The AI model performed significantly better than all four radiologists in terms of time consumption (35 min vs. 75 min, 93 min, 79 min, and 82 min). CONCLUSION: The AI model we obtained had strong decision-making ability, which could potentially assist doctors in detecting COVID-19 pneumonia.
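The per-class sensitivities and overall accuracy reported above can be computed directly from lists of true and predicted labels. A minimal sketch (class names are illustrative, not the study's data):

```python
def per_class_sensitivity(y_true, y_pred, classes):
    """Sensitivity (recall) for each class: correctly predicted
    members of the class divided by all actual members."""
    sens = {}
    for c in classes:
        actual = [p for t, p in zip(y_true, y_pred) if t == c]
        sens[c] = sum(1 for p in actual if p == c) / len(actual)
    return sens

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```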


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , SARS-CoV-2 , Tomography, X-Ray Computed/methods , Research Design
3.
Korean J Radiol ; 21(10): 1150-1160, 2020 10.
Article in English | MEDLINE | ID: covidwho-2089785

ABSTRACT

OBJECTIVE: To describe the experience of implementing a deep learning-based computer-aided detection (CAD) system for the interpretation of chest X-ray radiographs (CXR) of suspected coronavirus disease (COVID-19) patients and to investigate the diagnostic performance of CXR interpretation with CAD assistance. MATERIALS AND METHODS: In this single-center retrospective study, initial CXRs of patients with suspected or confirmed COVID-19 were investigated. A commercialized deep learning-based CAD system that can identify various abnormalities on CXR was implemented for the interpretation of CXRs in daily practice. The diagnostic performance of radiologists with CAD assistance was evaluated based on two different reference standards: 1) real-time reverse transcriptase-polymerase chain reaction (rRT-PCR) results for COVID-19 and 2) pulmonary abnormality suggesting pneumonia on chest CT. The turnaround times (TATs) of radiology reports for CXR and rRT-PCR results were also evaluated. RESULTS: Among 332 patients (male:female, 173:159; mean age, 57 years) with available rRT-PCR results, 16 patients (4.8%) were diagnosed with COVID-19. Using CXR, radiologists with CAD assistance identified rRT-PCR-positive COVID-19 patients with a sensitivity of 68.8% and a specificity of 66.7%. Among 119 patients (male:female, 75:44; mean age, 69 years) with available chest CTs, radiologists assisted by CAD reported pneumonia on CXR with a sensitivity of 81.5% and a specificity of 72.3%. The TATs of CXR reports were significantly shorter than those of rRT-PCR results (median 51 vs. 507 minutes; p < 0.001). CONCLUSION: Radiologists with CAD assistance could identify patients with rRT-PCR-positive COVID-19 or pneumonia on CXR with reasonably acceptable performance. In patients with suspected COVID-19, CXR had much faster TATs than rRT-PCR.


Subject(s)
Betacoronavirus , Coronavirus Infections/diagnostic imaging , Deep Learning , Pneumonia, Viral/diagnostic imaging , Radiography, Thoracic , Adult , Aged , COVID-19 , Female , Humans , Male , Middle Aged , Pandemics , Radiography, Thoracic/methods , Retrospective Studies , SARS-CoV-2 , Tomography, X-Ray Computed/methods
4.
Crit Rev Biomed Eng ; 50(3): 1-17, 2022.
Article in English | MEDLINE | ID: covidwho-2089528

ABSTRACT

Coronavirus is an RNA virus that causes various respiratory infections in both humans and animals; in humans, it can also cause pneumonia. The number of coronavirus-affected patients has been increasing day by day owing to the wide spread of the disease. As this count grows, many regions are facing a shortage of test kits. To help resolve this issue, deep learning offers a way to detect COVID-19 disease automatically. In this research, an optimized deep learning approach, named the Henry gas water wave optimization-based deep generative adversarial network (HGWWO-Deep GAN), is developed. Here, the HGWWO algorithm is designed by hybridizing the Henry gas solubility optimization (HGSO) and water wave optimization (WWO) algorithms. Pre-processing is carried out using region-of-interest (RoI) extraction and median filtering to remove noise from the images. Lung lobe segmentation is performed using a U-Net architecture, and lung region extraction is done using convolutional neural network (CNN) features. Moreover, COVID-19 detection is performed using a Deep GAN trained by the HGWWO algorithm. The experimental results demonstrate that the developed model attained optimal performance, with a testing accuracy of 0.9169, sensitivity of 0.9328, and specificity of 0.9032.
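The pre-processing step mentions median filtering for noise removal. As a hedged illustration of the idea (not the authors' implementation), a k x k median filter over a grayscale image, with borders using the available neighbourhood, can be written in plain Python:

```python
def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood.
    `img` is a list of rows of integer intensities."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [img[a][b]
                      for a in range(max(0, i - r), min(h, i + r + 1))
                      for b in range(max(0, j - r), min(w, j + r + 1))]
            window.sort()
            out[i][j] = window[len(window) // 2]
    return out
```

A single salt-noise pixel surrounded by background is suppressed, which is the usual motivation for this filter.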


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , X-Rays , Neural Networks, Computer , Water
5.
Contrast Media Mol Imaging ; 2022: 1306664, 2022.
Article in English | MEDLINE | ID: covidwho-2088963

ABSTRACT

Artificial intelligence (AI) has been applied successfully in many real-life domains to solve complex problems. With the invention of machine learning (ML) paradigms, it has become convenient for researchers to predict outcomes based on past data. Nowadays, ML is acting as one of the biggest weapons against the COVID-19 pandemic by detecting symptomatic cases at an early stage and warning people about the disease's future effects. COVID-19 spread globally in such a short period partly because of shortages of testing facilities and delays in test reports. To address this challenge, AI can be applied effectively to produce fast as well as cost-effective solutions. Many researchers have proposed AI-based solutions for preliminary diagnosis using chest CT images, respiratory sound analysis, comparison of voice recordings from symptomatic and asymptomatic persons, and so forth. Some AI-based applications claim good accuracy in predicting the chances of being COVID-19-positive. Within a short period, a large body of research has been published on the identification of COVID-19. This paper carefully examines and presents a comprehensive survey of more than 110 papers from various reputed sources, that is, Springer, IEEE, Elsevier, MDPI, arXiv, and medRxiv. Most of the papers selected for this survey present work to detect and classify COVID-19 using deep-learning-based models applied to chest X-rays and CT scan images. We hope that this survey covers most of the relevant work and provides insights to the research community for proposing efficient as well as accurate solutions for fighting the pandemic.


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , Pandemics , Artificial Intelligence , SARS-CoV-2
6.
EBioMedicine ; 85: 104315, 2022 Nov.
Article in English | MEDLINE | ID: covidwho-2086128

ABSTRACT

BACKGROUND: Hepatic steatosis (HS) identified on CT may provide an integrated cardiometabolic and COVID-19 risk assessment. This study presents a deep-learning-based hepatic fat assessment (DeHFt) pipeline for (a) more standardised measurements and (b) investigating the association between HS (liver-to-spleen attenuation ratio <1 on CT) and COVID-19 infection severity, wherein severity is defined as requiring invasive mechanical ventilation, extracorporeal membrane oxygenation, or death. METHODS: DeHFt comprises two steps. First, a deep-learning-based segmentation model (3D residual-UNet) is trained (N = 80) to segment the liver and spleen. Second, CT attenuation is estimated using slice-based and volumetric-based methods. DeHFt-based mean liver and liver-to-spleen attenuation are compared with an expert's ROI-based measurements. We further obtained the liver-to-spleen attenuation ratio in a large multi-site cohort of patients with COVID-19 infections (D1, N = 805; D2, N = 1917; D3, N = 169) using the DeHFt pipeline and investigated the association between HS and COVID-19 infection severity. FINDINGS: The DeHFt pipeline achieved a Dice coefficient of 0.95, 95% CI [0.93-0.96], on the independent validation cohort (N = 49). The automated slice-based and volumetric-based liver and liver-to-spleen attenuation estimations correlated strongly with the expert's measurements. In the COVID-19 cohorts, severe infections had a higher proportion of patients with HS than non-severe infections (pooled OR = 1.50, 95% CI [1.20-1.88], p < .001). INTERPRETATION: The DeHFt pipeline enabled accurate segmentation of the liver and spleen on non-contrast CTs and automated estimation of the liver and liver-to-spleen attenuation ratio. In three cohorts of patients with COVID-19 infections (N = 2891), HS was associated with disease severity. Pending validation, DeHFt provides an automated CT-based metabolic risk assessment.
FUNDING: For a full list of funding bodies, please see the Acknowledgements.
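The pipeline is evaluated with the Dice coefficient and produces a liver-to-spleen attenuation ratio. A minimal sketch of both quantities, over flattened binary masks and lists of mean attenuation values (inputs are illustrative, not the study's data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks
    given as flat lists of 0/1 values."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    if size == 0:
        return 1.0  # both masks empty: treat as perfect overlap
    return 2.0 * inter / size

def liver_spleen_ratio(liver_hu, spleen_hu):
    """Ratio of mean liver attenuation to mean spleen attenuation;
    a ratio < 1 flags hepatic steatosis per the definition above."""
    return (sum(liver_hu) / len(liver_hu)) / (sum(spleen_hu) / len(spleen_hu))
```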


Subject(s)
COVID-19 , Deep Learning , Fatty Liver , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods , Fatty Liver/diagnostic imaging , Severity of Illness Index
7.
Sensors (Basel) ; 22(20)2022 Oct 21.
Article in English | MEDLINE | ID: covidwho-2082200

ABSTRACT

Human ideas and sentiments are mirrored in facial expressions. They give the spectator a plethora of social cues, such as the viewer's focus of attention, intention, motivation, and mood, which can help develop better interactive solutions for online platforms. This could be helpful while teaching children, cultivating a better interactive connection between teachers and students, given the increasing shift toward online education platforms due to the COVID-19 pandemic. To this end, the authors propose kids' emotion recognition based on visual cues, with a justified reasoning model of explainable AI. The authors used two datasets: the first is the LIRIS Children Spontaneous Facial Expression Video Database, and the second is a novel author-created dataset of emotions displayed by children aged 7 to 10. Prior work on the LIRIS dataset had achieved only 75% accuracy, and no study had taken this dataset further; the authors achieved a highest accuracy of 89.31% on it and an accuracy of 90.98% on their own dataset. The authors also observed that the facial structure of children differs from that of adults, and that children do not always express a given emotion with the same facial expression as adults. Hence, the authors used 468 3D landmark points and created two additional versions of the selected datasets, LIRIS-Mesh and Authors-Mesh. In total, four dataset variants were used, namely LIRIS, the authors' dataset, LIRIS-Mesh, and Authors-Mesh, and a comparative analysis was performed using seven different CNN models.
The authors not only compared all dataset variants on the different CNN models but also, for every CNN and dataset combination, examined how test images are perceived by the deep-learning models using explainable artificial intelligence (XAI), which helps localize the features contributing to particular emotions. The authors used three XAI methods, namely Grad-CAM, Grad-CAM++, and SoftGrad, which help users establish the reason for an emotion prediction by revealing the contribution of individual features.


Subject(s)
COVID-19 , Deep Learning , Adult , Child , Animals , Humans , Artificial Intelligence , Pandemics , Emotions
8.
Sensors (Basel) ; 22(20)2022 Oct 19.
Article in English | MEDLINE | ID: covidwho-2082155

ABSTRACT

COVID-19 has infected millions of people worldwide over the past few years. The main technique used for COVID-19 detection is reverse transcription-polymerase chain reaction, which is expensive, sensitive, and requires medical expertise. X-ray imaging is an alternative and more accessible technique. This study aimed to improve detection accuracy to create a computer-aided diagnostic tool. Combining artificial intelligence techniques with radiological imaging can help detect different diseases. This study proposes a technique for the automatic detection of COVID-19 and other chest-related diseases from digital chest X-ray images of suspected patients by applying transfer learning (TL) algorithms. For this purpose, two balanced datasets, Dataset-1 and Dataset-2, were created by combining four public databases and collecting images from recently published articles. Dataset-1 consisted of 6000 chest X-ray images with 1500 for each class. Dataset-2 consisted of 7200 images with 1200 for each class. To train and test the model, TL with nine pretrained convolutional neural networks (CNNs) was used, with augmentation as a preprocessing method. The network was trained for five classification tasks: a two-class classifier (normal and COVID-19); a three-class classifier (normal, COVID-19, and viral pneumonia); a four-class classifier (normal, viral pneumonia, COVID-19, and tuberculosis (Tb)); a five-class classifier (normal, bacterial pneumonia, COVID-19, Tb, and pneumothorax); and a six-class classifier (normal, bacterial pneumonia, COVID-19, viral pneumonia, Tb, and pneumothorax). For two, three, four, five, and six classes, our model achieved maximum accuracies of 99.83%, 98.11%, 97.00%, 94.66%, and 87.29%, respectively.


Subject(s)
COVID-19 , Deep Learning , Pneumonia, Bacterial , Pneumonia, Viral , Pneumothorax , Humans , COVID-19/diagnosis , SARS-CoV-2 , Artificial Intelligence
9.
Biomed Eng Online ; 21(1): 77, 2022 Oct 14.
Article in English | MEDLINE | ID: covidwho-2079424

ABSTRACT

OBJECTIVES: To use deep learning of serial portable chest X-ray (pCXR) and clinical variables to predict mortality and duration on invasive mechanical ventilation (IMV) for Coronavirus disease 2019 (COVID-19) patients. METHODS: This is a retrospective study. Serial pCXR and serial clinical variables were analyzed for data from day 1, day 5, day 1-3, day 3-5, or day 1-5 on IMV (110 IMV survivors and 76 IMV non-survivors). The outcome variables were duration on IMV and mortality. With fivefold cross-validation, the performance of the proposed deep learning system was evaluated by receiver operating characteristic (ROC) analysis and correlation analysis. RESULTS: Predictive models using 5-consecutive-day data outperformed those using 3-consecutive-day and 1-day data. Prediction using data closer to the outcome was generally better (i.e., day 5 data performed better than day 1 data, and day 3-5 data performed better than day 1-3 data). Prediction performance was generally better for the combined pCXR and non-imaging clinical data than for either alone. The combined pCXR and non-imaging data of 5 consecutive days predicted mortality with an accuracy of 85 ± 3.5% (95% confidence interval (CI)) and an area under the curve (AUC) of 0.87 ± 0.05 (95% CI) and predicted the duration needed on IMV to within 2.56 ± 0.21 (95% CI) days on the validation dataset. CONCLUSIONS: Deep learning of longitudinal pCXR and clinical data has the potential to accurately predict mortality and duration on IMV in COVID-19 patients. Longitudinal pCXR could have prognostic value if these findings can be validated in a large, multi-institutional cohort.
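The study evaluates its models with fivefold cross-validation. A minimal sketch of how contiguous k-fold train/validation index splits can be generated (illustrative helper, not the authors' code; shuffling and stratification are omitted):

```python
def kfold_indices(n, k=5):
    """Split indices 0..n-1 into k contiguous folds of near-equal size,
    yielding (train, validation) index lists, one pair per fold."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val
```

Every sample appears in exactly one validation fold, so the k validation sets together cover the whole dataset.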


Subject(s)
COVID-19 , Deep Learning , Respiration Disorders , COVID-19/diagnostic imaging , COVID-19/therapy , Humans , Retrospective Studies , Ventilators, Mechanical , X-Rays
10.
BMC Med Imaging ; 22(1): 178, 2022 10 15.
Article in English | MEDLINE | ID: covidwho-2079397

ABSTRACT

BACKGROUND: Nowadays, doctors and radiologists are overwhelmed with a huge amount of work. This has led to efforts to design Computer-Aided Diagnosis (CAD) systems with the aim of achieving a faster and more accurate diagnosis. The current development of deep learning is a big opportunity for the development of new CADs. In this paper, we propose a novel architecture for a convolutional neural network (CNN) ensemble for classifying chest X-ray (CRX) images into four classes: viral pneumonia, tuberculosis, COVID-19, and healthy. Although computed tomography (CT) is the best way to detect and diagnose pulmonary issues, CT is more expensive than CRX. Furthermore, CRX is commonly the first step in the diagnosis, so it is very important to be accurate in the early stages of diagnosis and treatment. RESULTS: We applied the transfer learning technique and data augmentation to all CNNs to obtain better performance. We designed and evaluated two different CNN ensembles: Stacking and Voting. This system is ready to be applied in a CAD system to provide automated diagnosis as a second or preliminary opinion before the doctor's or radiologist's. Our results show a great improvement: 99% accuracy for the Stacking ensemble and 98% accuracy for the Voting ensemble. CONCLUSIONS: To minimize misclassifications, we included six different base CNN models in our architecture (VGG16, VGG19, InceptionV3, ResNet101V2, DenseNet121, and CheXnet); the architecture could be extended to any number of base models, and we expect to extend the number of diseases detected. The proposed method has been validated using a large dataset created by mixing several public datasets with different image sizes and quality. As we demonstrate in the evaluation carried out, we reach better results and generalization compared with previous works.
In addition, we make a first approach to explainable deep learning with the objective of providing professionals with more information that may be valuable when evaluating CRXs.
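The two ensembling schemes evaluated, Voting and Stacking, can be illustrated with a minimal sketch: hard voting takes the majority class across base CNN predictions, while stacking concatenates the base models' class probabilities into a feature vector for a meta-classifier (meta-model training is omitted here; names are illustrative, not the paper's code):

```python
from collections import Counter

def vote(predictions):
    """Hard-voting ensemble: the majority class across base-model
    predictions wins."""
    return Counter(predictions).most_common(1)[0][0]

def stack_features(prob_vectors):
    """Stacking ensemble, feature-construction step: concatenate each
    base model's class-probability vector into one input for the
    meta-classifier."""
    return [p for vec in prob_vectors for p in vec]
```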


Subject(s)
COVID-19 , Deep Learning , COVID-19/diagnostic imaging , COVID-19 Testing , Computers , Humans , Neural Networks, Computer , X-Rays
11.
Sci Rep ; 12(1): 17581, 2022 Oct 20.
Article in English | MEDLINE | ID: covidwho-2077106

ABSTRACT

Our automated deep learning-based approach identifies consolidation/collapse in LUS images to aid in the identification of late stages of COVID-19-induced pneumonia, where consolidation/collapse is one of the possible associated pathologies. A common challenge in training such models is that annotating each frame of an ultrasound video requires high labelling effort. In practice, this effort becomes prohibitive for large ultrasound datasets. To understand the impact of various degrees of labelling precision, we compare labelling strategies to train fully supervised models (frame-based method, higher labelling effort) and inaccurately supervised models (video-based methods, lower labelling effort), both of which yield binary predictions for LUS videos on a frame-by-frame level. We moreover introduce a novel sampled quaternary method, which randomly samples only 10% of the LUS video frames and subsequently assigns (ordinal) categorical labels to all frames in the video based on the fraction of positively annotated samples. Despite being a form of inaccurate learning, this method outperformed the inaccurately supervised video-based method and, more surprisingly, the supervised frame-based approach with respect to metrics such as precision-recall area under the curve (PR-AUC) and F1 score. We argue that our video-based method is more robust with respect to label noise and mitigates overfitting in a manner similar to label smoothing. The algorithm was trained using ten-fold cross-validation, which resulted in a PR-AUC score of 73% and an accuracy of 89%. While the classifier using the sampled quaternary method must still be verified on a larger consolidation/collapse dataset, it significantly lowers the labelling effort and is clinically comparable with trained experts' performance.
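The sampled quaternary labelling idea can be sketched as follows: annotate a random ~10% of the frames, then assign every frame in the video the same ordinal category derived from the fraction of positives among the sampled frames. The bin edges below are illustrative assumptions; the paper's exact thresholds may differ:

```python
import random

def sampled_quaternary_labels(frame_labels, sample_rate=0.1, seed=0):
    """Annotate only ~sample_rate of the frames, then propagate one
    ordinal category (0..3) to every frame based on the fraction of
    sampled frames that were positive."""
    rng = random.Random(seed)
    n = max(1, round(len(frame_labels) * sample_rate))
    sampled = rng.sample(frame_labels, n)  # simulate partial annotation
    frac = sum(sampled) / n
    if frac == 0:
        cat = 0      # no consolidation seen
    elif frac <= 1 / 3:
        cat = 1      # few positive frames
    elif frac <= 2 / 3:
        cat = 2      # many positive frames
    else:
        cat = 3      # mostly positive frames
    return [cat] * len(frame_labels)
```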


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , Ultrasonography/methods , Algorithms , Lung/diagnostic imaging
12.
Sci Rep ; 12(1): 17417, 2022 Oct 18.
Article in English | MEDLINE | ID: covidwho-2077093

ABSTRACT

The objectives of our proposed study were as follows: the first objective was to segment the CT images using a k-means clustering algorithm to extract the region of interest and to extract textural features using the gray-level co-occurrence matrix (GLCM). The second objective was to implement machine learning classifiers such as Naïve Bayes, bagging, and REPTree to classify the images into two classes, namely COVID and non-COVID, and to compare the performance of three pre-trained CNN models (AlexNet, ResNet50, and SqueezeNet) with that of the proposed machine learning classifiers. Our dataset consists of 100 COVID and non-COVID images, which were pre-processed and segmented with our proposed algorithm. Following the feature extraction process, the three machine learning classifiers were used to classify the normal and COVID-19 patients. We implemented the three pre-trained CNN models to compare their performance with that of the machine learning classifiers. Among the machine learning classifiers, Naïve Bayes achieved the highest accuracy of 97%, whereas the ResNet50 CNN model attained the highest accuracy of 99%. Hence, the deep learning networks outperformed the machine learning techniques in the classification of COVID-19 images.
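The GLCM used for textural features tallies how often pairs of gray levels co-occur at a fixed pixel offset. A minimal sketch of the matrix and one derived texture feature (contrast), on a tiny illustrative image (not the study's implementation):

```python
def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one offset (dx, dy),
    normalised so its entries sum to 1."""
    m = [[0.0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    count = 0
    for i in range(h):
        for j in range(w):
            a, b = i + dy, j + dx
            if 0 <= a < h and 0 <= b < w:
                m[img[i][j]][img[a][b]] += 1
                count += 1
    return [[v / count for v in row] for row in m]

def glcm_contrast(m):
    """Contrast texture feature: sum of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(m)
               for j, p in enumerate(row))
```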


Subject(s)
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , Bayes Theorem , Machine Learning , Tomography, X-Ray Computed , Lung/diagnostic imaging
13.
Int J Environ Res Public Health ; 19(20)2022 Oct 15.
Article in English | MEDLINE | ID: covidwho-2071461

ABSTRACT

Emotional responses are significant for understanding public perceptions of urban green space (UGS) and can be used to inform proposals for optimal urban design strategies to enhance public emotional health in the time of COVID-19. However, most empirical studies fail to consider emotion-oriented landscape assessments under dynamic perspectives, despite the fact that the scenery an individual observes changes with viewing angle. To close this gap, a real-time sentiment-based landscape assessment framework is developed, integrating facial expression recognition with semantic segmentation of changing landscapes. Furthermore, a case study using panoramic videos converted from Google Street View images to simulate changing scenes was used to test the viability of this framework, resulting in five million data points. The results of this study show that, through the collaboration of deep learning algorithms, finer visual variables were classified, subtle emotional responses were tracked, and better regression results for valence and arousal were obtained. Among all the predictors, the proportion of grass was the most significant predictor of emotional perception. The proposed framework is adaptable and human-centric, and it enables the instantaneous emotional perception of the built environment by the general public, serving as a feedback survey tool to aid urban planners in creating UGS that promote emotional well-being.


Subject(s)
COVID-19 , Deep Learning , Facial Recognition , Humans , Semantics , Emotions/physiology
14.
Future Med Chem ; 14(21): 1541-1559, 2022 11.
Article in English | MEDLINE | ID: covidwho-2055773

ABSTRACT

Background: In the recent COVID-19 pandemic, SARS-CoV-2 infection spread worldwide. The 3C-like protease (3CLpro) is a promising drug target for SARS-CoV-2. Results: We constructed a deep learning-based convolutional neural network-quantitative structure-activity relationship (CNN-QSAR) model and deployed it on various databases to predict the biological activity of 3CLpro inhibitors. Subsequently, molecular docking analysis, molecular dynamics simulations and binding free energy calculations were performed to validate the predicted inhibitory activity against 3CLpro of SARS-CoV-2. The model showed mean squared error = 0.114, mean absolute error = 0.24 and predicted R2 = 0.84 for the test dataset. Diosmin showed good binding affinity and stability over the course of the simulations. Conclusion: The results suggest that the proposed CNN-QSAR model can be an efficient method for hit prediction and a new way to identify hit compounds against 3CLpro of SARS-CoV-2.
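The reported MSE, MAE, and predicted R² are standard regression metrics and can be computed as follows; a minimal, library-free sketch with illustrative data (not the study's model outputs):

```python
def regression_metrics(y_true, y_pred):
    """Return (MSE, MAE, R^2) for predicted vs. observed activities."""
    n = len(y_true)
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total variance
    r2 = 1.0 - (mse * n) / ss_tot                    # 1 - SS_res / SS_tot
    return mse, mae, r2
```

R² of 1.0 means the predictions explain all variance; 0 means they do no better than predicting the mean.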


Subject(s)
COVID-19 , Deep Learning , Humans , SARS-CoV-2 , Quantitative Structure-Activity Relationship , Coronavirus 3C Proteases , Pandemics , Molecular Docking Simulation , Peptide Hydrolases , Protease Inhibitors/chemistry , Molecular Dynamics Simulation , Antiviral Agents/pharmacology
15.
Contrast Media Mol Imaging ; 2022: 5297709, 2022.
Article in English | MEDLINE | ID: covidwho-2053415

ABSTRACT

Coronavirus disease 2019 (COVID-19) has become a pandemic, and its seriousness can be seen in the large numbers of infections and deaths worldwide. This paper presents an efficient deep semantic segmentation network (DeepLabv3Plus). Initially, dynamic adaptive histogram equalization is utilized to enhance the images. Data augmentation techniques are then used to augment the enhanced images. The second stage builds a custom convolutional neural network model using several pretrained ImageNet models and compares them, repeatedly trimming the best-performing models to reduce complexity and improve memory efficiency. Several experiments were done using different techniques and parameters. The proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 in COVID-19 detection. This paper also discusses how to train a customized smart convolutional neural network using various parameters on a set of chest X-rays with an accuracy of 99.6%.
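The enhancement stage uses dynamic adaptive histogram equalization. As a hedged illustration of the underlying idea only, plain global histogram equalization can be written as below; the paper's dynamic adaptive variant differs in detail:

```python
def equalize(img, levels=256):
    """Global histogram equalisation of a grayscale image given as a
    list of rows of integer intensities in [0, levels)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    cdf, running = [0] * levels, 0
    for g in range(levels):
        running += hist[g]
        cdf[g] = running
    cdf_min = next(c for c in cdf if c > 0)  # first non-zero CDF value

    def remap(p):
        if n == cdf_min:           # uniform image: nothing to stretch
            return p
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in img]
```

Low-contrast intensities are stretched across the full range, which is the intended enhancement effect.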


Subject(s)
COVID-19 , Deep Learning , Pneumonia , Artificial Intelligence , COVID-19/diagnostic imaging , Humans , SARS-CoV-2 , Semantics
16.
PLoS One ; 17(10): e0274098, 2022.
Article in English | MEDLINE | ID: covidwho-2054336

ABSTRACT

In response to the COVID-19 global pandemic, recent research has proposed creating deep learning-based models that use chest radiographs (CXRs) in a variety of clinical tasks to help manage the crisis. However, the size of existing datasets of CXRs from COVID-19+ patients is relatively small, and researchers often pool CXR data from multiple sources, for example, using different X-ray machines in various patient populations under different clinical scenarios. Deep learning models trained on such datasets have been shown to overfit to erroneous features instead of learning pulmonary characteristics, in a phenomenon known as shortcut learning. We propose adding feature disentanglement to the training process. This technique forces the models to identify pulmonary features from the images and penalizes them for learning features that can discriminate between the original datasets that the images come from. We find that models trained in this way indeed have better generalization performance on unseen data; in the best case, we found that it improved AUC by 0.13 on held-out data. We further find that this outperforms masking out non-lung parts of the CXRs and performing histogram equalization, both of which are recently proposed methods for removing biases in CXR datasets.
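The disentanglement objective described here, reward the pathology prediction while penalizing features that let a head identify the source dataset, can be sketched as a combined loss. The form below (task cross-entropy minus a weighted domain cross-entropy) and the weight `lam` are assumptions for illustration, not the paper's implementation:

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label."""
    return -math.log(probs[label])

def disentangled_loss(task_probs, task_label,
                      domain_probs, domain_label, lam=1.0):
    """Sketch of a feature-disentanglement objective: minimising this
    loss improves the task prediction while making the domain head's
    job HARDER (its loss enters with a minus sign), discouraging
    dataset-identifying shortcut features."""
    return (cross_entropy(task_probs, task_label)
            - lam * cross_entropy(domain_probs, domain_label))
```

A confused domain head (near-uniform domain probabilities) lowers this loss relative to a confident one, which is the intended pressure on the shared features.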


Subject(s)
COVID-19 , Deep Learning , COVID-19/diagnostic imaging , Humans , Lung/diagnostic imaging , Radiography, Thoracic/methods , X-Rays
17.
J Med Syst ; 46(11): 78, 2022 Oct 06.
Article in English | MEDLINE | ID: covidwho-2048414

ABSTRACT

The monkeypox virus is emerging slowly as COVID-19 infections decline around the world. People fear that it could spread to become a pandemic like COVID-19. As such, it is crucial to detect cases early, before widespread community transmission, and AI-based detection could help identify them at an early stage. In this paper, we compare 13 different pre-trained deep learning (DL) models for monkeypox virus detection. We first fine-tune all of them with the addition of universal custom layers and analyse the results using four well-established measures: precision, recall, F1-score, and accuracy. After identifying the best-performing DL models, we ensemble them to improve the overall performance using majority voting over their probabilistic outputs. We perform our experiments on a publicly available dataset, obtaining an average precision, recall, F1-score, and accuracy of 85.44%, 85.47%, 85.40%, and 87.13%, respectively, with our proposed ensemble approach. These encouraging results, which outperform state-of-the-art methods, suggest that the proposed approach could be applicable for mass screening by health practitioners.
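The ensemble step, majority voting over the models' probabilistic outputs, can be sketched minimally: reduce each probability vector to its argmax class, then take the most frequent class. The tie-breaking rule below (first class in the list wins) is an assumption for the sketch:

```python
from collections import Counter

def majority_vote(prob_outputs, classes):
    """Each base model contributes its argmax class; the most frequent
    class across models is returned. Ties go to the class listed first
    in `classes` (an illustrative convention)."""
    picks = []
    for probs in prob_outputs:
        best = max(range(len(classes)), key=lambda i: probs[i])
        picks.append(classes[best])
    counts = Counter(picks)
    top = max(counts.values())
    for c in classes:
        if counts.get(c, 0) == top:
            return c
```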


Subject(s)
COVID-19 , Deep Learning , COVID-19/diagnosis , Humans , Monkeypox virus , Pandemics
18.
Front Public Health ; 10: 948205, 2022.
Article in English | MEDLINE | ID: covidwho-2039752

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a highly contagious disease that has claimed the lives of millions of people worldwide in the last 2 years. Because of the disease's rapid spread, it is critical to diagnose it at an early stage in order to reduce the rate of spread. Images of the lungs are used to diagnose this infection. In the last 2 years, many studies have been introduced to help diagnose COVID-19 from chest X-ray images. Because all researchers are looking for a quick method to diagnose this virus, deep learning-based computer-controlled techniques are well suited as a second opinion for radiologists. In this article, we look at the issues of multisource fusion and redundant features. To address these issues, we propose a CNN-LSTM and improved max-value feature optimization framework for COVID-19 classification. In the proposed architecture, the original images are acquired and their contrast is increased using a combination of filtering algorithms. The dataset is then augmented to increase its size, and used to train two deep learning networks, a Modified EfficientNet B0 and a CNN-LSTM. Both networks are built from scratch and extract information from the deep layers. Following feature extraction, a serial-based maximum-value fusion technique is proposed to combine the best information from both deep models. However, some redundant information is also noted; therefore, an improved max-value-based moth flame optimization algorithm is proposed. Through this algorithm, the best features are selected and finally classified using machine learning classifiers. The experimental process was conducted on three publicly available datasets and achieved higher accuracy than existing techniques. Moreover, a comparison of classifiers is also conducted, and the cubic support vector machine gives the best accuracy.
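The abstract does not spell out the serial-based maximum-value fusion step, but one plausible reading is: concatenate (serially fuse) the two networks' feature matrices, then keep the columns with the largest maximum activation. The sketch below follows that reading; the function name, the `keep_ratio` parameter, and the selection rule are all illustrative assumptions, not the authors' formulation.

```python
import numpy as np

def serial_max_fusion(feat_a, feat_b, keep_ratio=0.5):
    """Hypothetical serial fusion + max-value selection.

    feat_a, feat_b: (n_samples, n_features) deep-feature matrices from
    the two networks. Concatenates them along the feature axis, then
    keeps the top fraction of columns ranked by maximum activation."""
    fused = np.concatenate([feat_a, feat_b], axis=1)  # serial (concatenation) fusion
    col_max = fused.max(axis=0)                       # max value per feature column
    k = max(1, int(keep_ratio * fused.shape[1]))
    keep = np.argsort(col_max)[::-1][:k]              # top-k columns by max activation
    return fused[:, np.sort(keep)]                    # preserve original column order
```

In the paper, the retained features are then pruned further by the moth flame optimization step before classification.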


Subject(s)
COVID-19 , Deep Learning , Moths , Animals , Humans , Neural Networks, Computer , X-Rays
19.
Comput Intell Neurosci ; 2022: 5386737, 2022.
Article in English | MEDLINE | ID: covidwho-2038374

ABSTRACT

This work aims to solve the problem that daily necessities could not be delivered to urban residents during coronavirus disease 2019 (COVID-19), and to reduce the risk of delivery personnel contracting COVID-19 when transporting medicines to hospitals during the epidemic. Firstly, this work studies the application and communication-optimization technology of unmanned delivery cars based on deep learning (DL) under COVID-19. Secondly, a DL-based route planning method for unmanned delivery cars is proposed that accounts for factors such as maximum flight time, load, and road conditions. Combining the actual operation of unmanned delivery cars with the related literature, this work analyses them from four aspects: their characteristics, delivery mode, economy, and limitations. The unmanned delivery car is still in the promotion stage. A basic AVRPTW model is established that minimizes the total delivery cost without considering charging behaviour, under restrictions on routes, delivery time, load, and other factors. The route optimization problem of unmanned delivery cars in various situations is considered. A multiobjective optimization model of the unmanned delivery car in charging/swap mode is established, with the goals of minimizing total delivery cost and maximizing customer satisfaction while meeting driving requirements. An improved genetic algorithm is designed to solve the established model. Finally, the model is tested and its results are analysed. Case analysis demonstrates the effectiveness of this route planning method: customer satisfaction, delivery time, and cost input have all improved substantially through optimization of the unmanned delivery routes, which have been applied well in practice. In addition, unmanned delivery cars are affected by many factors such as load, and delivery requires a long service time; this work therefore chooses an unmanned delivery car with strong endurance to improve delivery efficiency. The contactless hospital delivery mode discussed here will play an important role in future development.
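The model described above combines delivery cost with customer satisfaction under load and time-window constraints (the AVRPTW setting). A toy fitness function in that spirit is sketched below, as the kind of objective an improved genetic algorithm would minimize; all names and the lateness-penalty form are illustrative assumptions, not the paper's exact model.

```python
def route_fitness(route, dist, demand, capacity, due, travel_time, penalty=10.0):
    """Toy fitness for a single delivery route (lower is better).

    route: sequence of customer node indices, starting and ending at
    depot node 0. dist/travel_time: square matrices. demand/due: per-
    customer dicts. Overloaded routes are infeasible (infinite cost);
    late arrivals add a satisfaction penalty proportional to lateness."""
    if sum(demand[c] for c in route) > capacity:
        return float("inf")               # load constraint violated
    cost, t, prev = 0.0, 0.0, 0           # start at the depot
    for c in route:
        cost += dist[prev][c]
        t += travel_time[prev][c]
        if t > due[c]:                    # late: customer-satisfaction penalty
            cost += penalty * (t - due[c])
        prev = c
    return cost + dist[prev][0]           # return leg to the depot
```

A genetic algorithm would then evolve permutations of customers (crossover plus mutation), ranking candidate routes by this fitness.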


Subject(s)
COVID-19 , Deep Learning , Automobiles , Computer Communication Networks , Humans , Technology
20.
Analyst ; 147(20): 4616-4628, 2022 Oct 10.
Article in English | MEDLINE | ID: covidwho-2036936

ABSTRACT

Apart from other severe consequences, the COVID-19 pandemic has inflicted a surge in personal protective equipment usage, some of which, such as medical masks, have a short effective protection time. Their improper disposal and subsequent natural degradation make them huge sources of micro- and nanoplastic particles. To better understand the direct influence of microplastic pollution on biota, there is an urgent need for a reliable and high-throughput analytical tool for sub-micrometre plastic identification and visualisation in environmental and biological samples. This study evaluated a combined technique based on dark-field enhanced microscopy and hyperspectral imaging, augmented with deep learning data analysis, for the visualisation, detection and identification of microplastic particles released from commercially available medical masks after 192 hours of UV-C irradiation. The analysis was performed on the separated blue-coloured spunbond outer layer and white-coloured meltblown interlayer, which allowed us to assess the influence of the structure and pigmentation of intact and UV-exposed samples on classification performance. Microscopy revealed strong fragmentation of both layers and the formation of microparticles and fibres of various shapes after UV exposure. Based on the spectral signatures of both layers, it was possible to successfully identify intact materials using a convolutional neural network (CNN). However, further classification of UV-exposed samples demonstrated that the spectral characteristics of the samples in the visible to near-infrared range are disrupted, reducing the performance of the CNN. Despite this, the deep learning approach to hyperspectral analysis outperformed the conventional spectral angle mapper technique in classifying both intact and UV-exposed samples, confirming the potential of the proposed approach in secondary microplastic analysis.
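The spectral angle mapper (SAM) baseline that the CNN is compared against is a standard technique: each pixel spectrum is assigned to the reference material whose spectrum makes the smallest angle with it in spectral space. A minimal sketch, with assumed array shapes:

```python
import numpy as np

def spectral_angle(s, r):
    """Spectral angle (radians) between a pixel spectrum s and a reference r."""
    cos = np.dot(s, r) / (np.linalg.norm(s) * np.linalg.norm(r))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def sam_classify(pixels, references):
    """Assign each pixel spectrum to the reference with the smallest
    spectral angle - the conventional SAM classification rule."""
    return [min(range(len(references)),
                key=lambda k: spectral_angle(p, references[k])) for p in pixels]
```

Because the angle depends only on the direction of the spectrum, SAM is insensitive to overall illumination scaling — but, as the study notes, it degrades when UV exposure reshapes the spectra themselves, which is where the learned CNN features retain an edge.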


Subject(s)
COVID-19 , Deep Learning , COVID-19/diagnosis , Humans , Hyperspectral Imaging , Masks , Microplastics , Pandemics , Plastics