ABSTRACT
BACKGROUND: The novel coronavirus pneumonia that began to spread in 2019 is still raging and has placed a burden on medical systems and governments in various countries. For policymaking and medical resource decisions, a good prediction model is necessary to monitor and evaluate epidemic trends. We used a long short-term memory (LSTM) model and an improved hybrid model based on ensemble empirical mode decomposition (EEMD) to predict COVID-19 trends; Methods: The data were collected from the Harvard Dataverse. Epidemic data from 21 January 2020 to 25 April 2021 for California, the most severely affected state in the United States, were used to develop an LSTM model and an EEMD-LSTM hybrid model, i.e., an LSTM model combined with ensemble empirical mode decomposition. In this study, ninety percent of the data were adopted to fit the models as a training set, while the subsequent 10% were used to test the models' predictions. The mean absolute percentage error, mean absolute error, and root mean square error were used to evaluate the prediction performances of the models; Results: The results indicated that the number of confirmed cases in California was still increasing as of 25 April 2021, with no obvious evidence of a sharp decline. On 25 April 2021, the LSTM model predicted 3,666,418 confirmed cases, whereas the EEMD-LSTM model predicted 3,681,150. The mean absolute percentage errors for the LSTM and EEMD-LSTM models were 0.0151 and 0.0051, respectively. The mean absolute and root mean square errors were 5.58 × 10⁴ and 5.63 × 10⁴ for the LSTM model and 1.9 × 10⁴ and 2.43 × 10⁴ for the EEMD-LSTM model, respectively; Conclusions: The results showed the advantage of the EEMD-LSTM model over a single LSTM model, and the established EEMD-LSTM model may be suitable for monitoring and evaluating the epidemic situation and providing quantitative evidence for epidemic prevention and control.
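The three evaluation metrics named above can be sketched in plain Python; the arrays below are illustrative values only, not the study's case counts:

```python
import math

def mape(actual, predicted):
    # mean absolute percentage error, as a fraction (so 0.0151 == 1.51%)
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

def mae(actual, predicted):
    # mean absolute error
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    # root mean square error
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Illustrative numbers only:
actual = [100.0, 110.0, 120.0]
predicted = [98.0, 112.0, 117.0]
```

All three penalize deviation from the observed series, but MAPE is scale-free, which is why the abstract can compare 0.0151 vs. 0.0051 directly while MAE/RMSE are reported in case counts.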
Subject(s)
COVID-19 , Deep Learning , Epidemics , Humans , Neural Networks, Computer , COVID-19/epidemiology , Forecasting
ABSTRACT
Due to the prevalence of COVID-19, providing safe environments and reducing the risk of virus exposure play pivotal roles in our daily lives. Contact tracing is a well-established and widely used approach to track and suppress the spread of viruses. Most digital contact tracing systems can detect direct face-to-face contact based on estimated proximity, without quantifying the exposed virus concentration. In particular, they rarely allow for quantitative analysis of indirect environmental exposure due to virus survival time in the air and continued airborne transmission. In this work, we propose an indoor spatiotemporal contact awareness framework (iSTCA), a self-contained quantitative contact analytics approach that uses spatiotemporal information to provide accurate awareness of the virus quanta concentration from different origins at various times. Smartphone-based pedestrian dead reckoning (PDR) is employed to precisely detect locations and trajectories for distance estimation and time assessment, without the need to deploy extra infrastructure. The PDR technique we employ calibrates the accumulated error by identifying spatial landmarks automatically. We utilized a custom deep learning model composed of bidirectional long short-term memory (Bi-LSTM) and multi-head convolutional neural networks (CNNs) to extract the local correlations and long-term dependencies needed to recognize landmarks. By considering spatial distance and time difference in an integrated manner, we can quantify the virus quanta concentration of the entire indoor environment at any time, accounting for all contributing virus particles.
We conducted an extensive experiment based on practical scenarios to evaluate the performance of the proposed system, showing that the average positioning error is reduced to less than 0.7 m with high confidence and demonstrating the validity of our system for the virus quanta concentration quantification involving virus movement in a complex indoor environment.
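As a heavily simplified, hypothetical sketch of the kind of spatiotemporal exposure accounting described above (the decay rate, the inverse-square distance falloff, and the function shape are our assumptions, not the paper's published model):

```python
import math

def quanta_concentration(emissions, t_now, decay_rate=0.63, d0=1.0):
    # Sum contributions of all past emission events at the receiver's position.
    # Each event is (emission time in h, distance in m, quanta emitted);
    # the per-hour decay_rate and the inverse-square falloff are assumptions.
    total = 0.0
    for t_emit, distance, quanta in emissions:
        dt = t_now - t_emit
        if dt < 0:
            continue  # event has not happened yet
        total += quanta * math.exp(-decay_rate * dt) / max(distance / d0, 1.0) ** 2
    return total
```

The key point the framework makes is that both terms matter: exposure from an event decays over time (virus survival) and falls off with distance, so indirect exposure after a contagious person has left is still non-zero.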
Subject(s)
COVID-19 , Pedestrians , Humans , Algorithms , Smartphone , Neural Networks, Computer
ABSTRACT
Convolutional neural networks (CNNs) have been widely applied to medical image analysis tasks, especially image segmentation. In recent years, encoder-decoder structures such as U-Net have been introduced. However, these structures do not sufficiently consider multi-scale information transmission or effective modeling of long-range feature dependencies. To improve on existing methods, we propose a novel hybrid dual dilated attention network (HD2A-Net) for lesion region segmentation. In the proposed network, we present a comprehensive hybrid dilated convolution (CHDC) module, which facilitates the transmission of multi-scale information. Based on the CHDC module and attention mechanisms, we design a novel dual dilated gated attention (DDGA) block to enhance the saliency of related regions from a multi-scale perspective. In addition, a dilated dense (DD) block is designed to expand the receptive fields. Ablation studies were performed to verify the proposed blocks, and the interpretability of HD2A-Net was analyzed by visualizing the attention weight maps of the key blocks. Compared to state-of-the-art methods including CA-Net, DeepLabV3+, and Attention U-Net, HD2A-Net performs significantly better on three publicly available medical image datasets: Dice, Average Symmetric Surface Distance (ASSD), and mean Intersection-over-Union (mIoU) reach 93.16%, 0.36 pix, and 88.03% on MAEDE-MAFTOUNI (COVID-19 CT); 93.63%, 0.69 pix, and 88.67% on ISIC-2018 (Melanoma Dermoscopy); and 94.72%, 0.52 pix, and 90.33% on Kvasir-SEG (Gastrointestinal Disease Polyp), respectively.
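Dice and IoU, two of the reported overlap metrics, can be computed for flat binary masks as follows (a generic sketch, not the paper's evaluation code):

```python
def dice(pred, truth):
    # Dice coefficient for flat binary masks (lists of 0/1 ints)
    inter = sum(p & t for p, t in zip(pred, truth))
    return 2 * inter / (sum(pred) + sum(truth))

def iou(pred, truth):
    # Intersection-over-Union (Jaccard index); mIoU averages this over classes
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same masks, which is consistent with the paper's Dice scores exceeding its mIoU scores.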
Subject(s)
COVID-19 , Melanoma , Humans , Benchmarking , Neural Networks, Computer , Image Processing, Computer-Assisted
ABSTRACT
This study proposed an infrared image-based method for screening febrile and subfebrile people, meeting society's need for alternative, quick-response, and effective methods of screening for COVID-19 contagious people. The methodology consisted of: (i) developing a method based on facial infrared imaging for possible early COVID-19 detection in people with and without fever (subfebrile state); (ii) using 1206 emergency room (ER) patients to develop an algorithm for general application of the method; and (iii) testing the method and algorithm effectiveness in 2558 cases (RT-qPCR tested for COVID-19) drawn from 227,261 worker evaluations in five different countries. Artificial intelligence was used through a convolutional neural network (CNN) to develop the algorithm, which took facial infrared images as input and classified the tested individuals into three groups: fever (high risk), subfebrile (medium risk), and no fever (low risk). The results showed that suspected and confirmed COVID-19 (+) cases with temperatures below the 37.5 °C fever threshold were identified. Moreover, simple thresholds on average forehead and eye temperatures above 37.5 °C were not sufficient to detect fever as reliably as the proposed CNN algorithm. Most RT-qPCR-confirmed COVID-19 (+) cases found in the 2558-case sample (17 cases/89.5%) belonged to the CNN-selected subfebrile group. Membership in the subfebrile group was the main COVID-19 (+) risk factor, ahead of age, diabetes, high blood pressure, smoking, and others. In sum, the proposed method was shown to be a potentially important new tool for screening COVID-19 (+) people for air travel and public places in general.
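The three-group output stage can be illustrated with a toy post-processing step (the logits here are made up; the real classifier operates on facial infrared images, not on hand-picked numbers):

```python
import math

def classify_risk(logits):
    # softmax over three hypothetical CNN outputs, then pick the top group
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    groups = ["no fever (low risk)", "subfebrile (medium risk)", "fever (high risk)"]
    return groups[probs.index(max(probs))], probs
```

The point of the CNN is precisely that this decision is learned from whole-face thermal patterns rather than from a single temperature threshold, which the study found insufficient.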
Subject(s)
Air Travel , COVID-19 , Humans , Artificial Intelligence , COVID-19/diagnosis , Algorithms , Neural Networks, Computer , Fever
ABSTRACT
Mortality prediction is crucial to evaluating the severity of illness and helping improve the prognosis of patients. In clinical settings, one approach is to analyze the multivariate time series (MTSs) of patients based on their medical data, such as heart rate and invasive mean arterial blood pressure. However, this suffers from sparse, irregularly sampled, and incomplete data, which can compromise the performance of follow-up MTS-based analytic applications. Many existing methods attempt to handle such irregular MTSs with missing values by capturing the temporal dependencies within a time series, yet in-depth research on modeling inter-MTS couplings remains rare and lacks interpretability. To this end, we propose a bidirectional time and multi-feature attention coupled network (BiT-MAC) to capture the temporal dependencies (i.e., intra-time-series coupling) and the hidden relationships among variables (i.e., inter-time-series coupling) with a bidirectional recurrent neural network and multi-head attention, respectively. The resulting intra- and inter-time-series coupling representations are then fused to estimate the missing values for a more robust MTS-based prediction. We evaluate BiT-MAC on missing-data-corrupted mortality prediction using two real-world clinical datasets, i.e., PhysioNet'2012 and COVID-19. Extensive experiments demonstrate the superiority of BiT-MAC over cutting-edge models, verifying the great value of the deep and hidden relations captured in MTSs. The interpretability of the features is further demonstrated through a case study.
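As rough intuition for bidirectional estimation of missing values, here is a simplified stand-in (not the BiT-MAC architecture, which uses recurrent networks and attention): scan the series forward and backward, then average the two views at each gap:

```python
def bidirectional_impute(series, missing=None):
    # forward pass: carry the last observed value ahead
    n = len(series)
    fwd = list(series)
    for i in range(n):
        if fwd[i] is missing and i > 0:
            fwd[i] = fwd[i - 1]
    # backward pass: carry the next observed value back
    bwd = list(series)
    for i in range(n - 2, -1, -1):
        if bwd[i] is missing and bwd[i + 1] is not missing:
            bwd[i] = bwd[i + 1]
    # fuse the two directions
    out = []
    for orig, f, b in zip(series, fwd, bwd):
        if orig is not missing:
            out.append(orig)
        elif f is not missing and b is not missing:
            out.append((f + b) / 2)
        else:
            out.append(f if f is not missing else b)
    return out
```

BiT-MAC replaces both the directional passes (with a bidirectional RNN) and the naive average (with learned attention over other variables), but the fuse-two-directions idea is the same.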
Subject(s)
COVID-19 , Humans , Time Factors , Heart Rate , Neural Networks, Computer
ABSTRACT
BACKGROUND AND OBJECTIVE: COVID-19 is a serious threat to human health. Traditional convolutional neural networks (CNNs) can perform medical image segmentation, while transformers can be used for machine vision tasks because they capture long-range relationships better than CNNs. Combining CNNs and transformers for semantic segmentation has therefore attracted intense research. Currently, it is challenging to segment medical images on limited datasets such as those for COVID-19. METHODS: This study proposes a lightweight transformer+CNN model in which the encoder sub-network is a two-path design that effectively captures both the global dependence of image features and low-layer spatial details. Using a CNN and MobileViT to jointly extract image features reduces the computation and complexity of the model while improving segmentation performance, so the model is titled Mini-MobileViT-Seg (MMViT-Seg). In addition, a multi-query attention (MQA) module is proposed to fuse multi-scale features from different levels of the decoder sub-network, further improving the performance of the model. MQA can simultaneously fuse multi-input, multi-scale low-level feature maps and high-level feature maps, and conduct end-to-end supervised learning guided by ground truth. RESULTS: Two-class infection labeling experiments were conducted on three datasets. The results show that the proposed model has the best performance and the fewest parameters among five popular semantic segmentation algorithms. In multi-class infection labeling, the proposed model also achieved competitive performance. CONCLUSIONS: The proposed MMViT-Seg was tested on three COVID-19 segmentation datasets, with results showing that it outperforms other models. In addition, the proposed MQA module, which effectively fuses multi-scale features from different levels, further improves segmentation accuracy.
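A minimal illustration of fusing decoder features across scales (nearest-neighbour upsampling plus plain averaging; the actual MQA module uses learned attention rather than a fixed average):

```python
def upsample_nearest(fmap, factor):
    # repeat each row and each value `factor` times (2D feature map as lists)
    return [[v for v in row for _ in range(factor)] for row in fmap for _ in range(factor)]

def fuse(maps):
    # elementwise average of equally sized 2D maps (stand-in for MQA's learned fusion)
    return [[sum(vals) / len(vals) for vals in zip(*rows)] for rows in zip(*maps)]
```

Low-resolution decoder outputs must be upsampled to a common resolution before any fusion, whether by averaging as here or by attention weights as in MQA.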
Subject(s)
COVID-19 , Humans , Algorithms , Neural Networks, Computer , Electric Power Supplies , Semantics , Image Processing, Computer-Assisted
ABSTRACT
In imbalanced-data scenarios, deep neural networks (DNNs) fail to generalize well on minority classes. In this letter, we propose a simple and effective learning function, i.e., Visually Interpretable Space Adjustment Learning (VISAL), to handle the imbalanced data classification task. VISAL's objective is to create more room for the generalization of minority-class samples by bringing both angular and Euclidean margins into the cross-entropy learning strategy. When evaluated on imbalanced versions of the CIFAR, Tiny ImageNet, COVIDx, and IMDB reviews datasets, our proposed method outperforms state-of-the-art works by a significant margin.
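The margin idea can be illustrated with a single additive margin on the target logit of a cross-entropy loss (a sketch in the spirit of VISAL; the actual objective combines angular and Euclidean margins, which are not modeled here):

```python
import math

def margin_cross_entropy(logits, target, margin=0.35):
    # subtract an additive margin from the target logit, then apply
    # standard softmax cross-entropy (log-sum-exp computed stably)
    adjusted = [z - margin if i == target else z for i, z in enumerate(logits)]
    m = max(adjusted)
    log_sum = m + math.log(sum(math.exp(z - m) for z in adjusted))
    return log_sum - adjusted[target]
```

Handicapping the target logit during training forces the network to score the true class higher by at least the margin, which widens the decision region — the "extra room" minority classes need.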
Subject(s)
Algorithms , Neural Networks, Computer , Machine Learning , Learning , Generalization, Psychological
ABSTRACT
Although medical research has been performed predominantly on men, in both preclinical and clinical studies, continuous efforts have been made to overcome this gender bias. By retrospectively examining 21 data sets containing sex as one of the descriptive variables, it was possible to verify how many times our AI protocol decided to keep gender information in the predictive model. The data sets pertained to a vast array of diseases, such as dyspeptic syndrome, atrophic gastritis, venous thrombosis, gastroesophageal reflux disease, irritable bowel syndrome, Alzheimer disease and mild cognitive impairment, myocardial infarction, gastrointestinal bleeding, gastric cancer, hypercortisolism, AIDS, COVID diagnosis, and extracorporeal membrane oxygenation in intensive therapy, among others. The sample size of these data sets ranged from 80 to 3147 (average 600), and the number of variables from 19 to 101 (average 41). Gender turned out to be part of the heuristic predictive model in 19 out of 21 cases. This means that even for highly adaptive and potent tools like artificial neural networks, information on sex carries specific value. In the field of rheumatology, a specific example in psoriatic arthritis shows that including gender information allows significantly better accuracy of ANNs in predicting diagnosis from clinical data (from 87.7% to 94.47%). The results of this study confirm the importance of gender information in building high-performance predictive models in the field of Artificial Intelligence (AI). Therefore, for AI too, sex counts.
Subject(s)
Algorithms , Neural Networks, Computer , Female , Humans , Male , Artificial Intelligence , COVID-19 , Retrospective Studies , Rheumatic Diseases
ABSTRACT
Expressive molecular representation plays a critical role in drug design research, and effective methods are beneficial for learning molecular representations and solving related problems in drug discovery, especially drug-drug interaction (DDI) prediction. Recently, much work has used graph neural networks (GNNs) to forecast DDIs and learn molecular representations. However, under current GNN structures, most approaches learn drug molecular representations from one-dimensional strings or two-dimensional molecular graph structures; the interaction information between chemical substructures remains rarely explored, and identifying the key substructures that contribute significantly to DDI prediction is neglected. Therefore, we propose a dual graph neural network, DGNN-DDI, to learn drug molecular features from both molecular structure and interactions. Specifically, we first designed a directed message passing neural network with a substructure attention mechanism (SA-DMPNN) to adaptively extract substructures. Second, to improve the final features, we separated the drug-drug interactions into pairwise interactions between each drug's unique substructures. The features are then adopted to predict the interaction probability of a DDI tuple. We evaluated DGNN-DDI on a real-world dataset; compared to state-of-the-art methods, the model improved DDI prediction performance. We also conducted a case study on existing drugs, aiming to predict drug combinations that may be effective against the novel coronavirus disease 2019 (COVID-19). Moreover, the visual interpretation results proved that DGNN-DDI is sensitive to the structure information of drugs and able to detect the key substructures for DDIs. These advantages demonstrate that the proposed method enhances the performance and interpretability of DDI prediction modeling.
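A toy round of message passing on a molecular graph conveys the basic mechanism (scalar node features and an unweighted neighbour sum; the paper's SA-DMPNN is directed and attention-weighted, which is not modeled here):

```python
def message_pass(features, edges, rounds=1):
    # build an undirected adjacency list from (atom, atom) bond pairs
    adj = {i: [] for i in range(len(features))}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    # each round, every node adds its neighbours' features to its own
    for _ in range(rounds):
        features = [features[i] + sum(features[j] for j in adj[i])
                    for i in range(len(features))]
    return features
```

After k rounds each node's value summarizes its k-hop neighbourhood, which is how GNN-based models build substructure-level representations before comparing them across drugs.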
Subject(s)
COVID-19 , Humans , Molecular Structure , Drug Interactions , Neural Networks, Computer , Probability
ABSTRACT
Following its initial identification on December 31, 2019, COVID-19 quickly spread around the world as a pandemic, claiming more than six million lives. Early diagnosis with appropriate intervention can help prevent deaths and serious illness, as the distinguishing symptoms that set COVID-19 apart from pneumonia and influenza frequently do not show up until the patient has already suffered significant damage. A chest X-ray (CXR), one of the most used imaging modalities, offers a non-invasive method of detection. CXR image analysis can also reveal additional disorders, such as pneumonia, which show up as anomalies in the lungs; thus, these CXRs can be used for automated grading to aid doctors in making a better diagnosis. To classify a CXR image as Negative for Pneumonia, Typical, Indeterminate, or Atypical, we used the publicly available CXR image competition dataset SIIM-FISABIO-RSNA COVID-19 from Kaggle. The suggested architecture employed an ensemble of EfficientNetV2-L models for classification, trained via transfer learning from ImageNet21K-initialised weights on various subsets of the data (code for the proposed methodology is available at: https://github.com/asadkhan1221/siim-covid19.git). To identify and localise opacities, an ensemble of YOLO detectors was combined using Weighted Boxes Fusion (WBF). Adding classification auxiliary heads to the CNN backbone enabled significant generalisability gains, and the suggested method improved further by utilising test-time augmentation for both classifiers and localizers. The Mean Average Precision scores show that the proposed deep learning model achieves 0.617 and 0.609 on the public and private sets, respectively, comparable to other techniques on this Kaggle dataset.
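The coordinate-fusion step of WBF can be sketched for a single cluster of overlapping detections (the full algorithm also clusters boxes by IoU and rescales scores; the boxes below are hypothetical):

```python
def fuse_boxes(boxes, scores):
    # confidence-weighted average of (x1, y1, x2, y2) coordinates for one
    # cluster of boxes, plus a fused score (mean of the member scores)
    total = sum(scores)
    fused = [sum(b[k] * s for b, s in zip(boxes, scores)) / total for k in range(4)]
    return fused, total / len(boxes)

# Two hypothetical detections of the same opacity:
boxes = [[10, 10, 50, 50], [14, 10, 54, 50]]
scores = [0.9, 0.6]
```

Unlike non-maximum suppression, which discards all but the top box, WBF lets every model in the ensemble contribute to the final box, which tends to tighten localisation.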
Subject(s)
COVID-19 , Pneumonia, Viral , Humans , COVID-19/diagnostic imaging , X-Rays , Pneumonia, Viral/diagnostic imaging , Thorax/diagnostic imaging , Neural Networks, Computer
ABSTRACT
Due to the severity and speed of spread of the ongoing Covid-19 pandemic, fast but accurate diagnosis of Covid-19 patients has become a crucial task. Achievements in this respect might enlighten future efforts to contain other possible pandemics. Researchers from various fields have been trying to provide novel ideas for models or systems to identify Covid-19 patients from different medical and non-medical data. AI-based researchers have also been contributing to this area, mostly by providing novel automated systems using convolutional neural networks (CNNs) and deep neural networks (DNNs) for Covid-19 detection and diagnosis. Due to the efficiency of deep learning (DL) and transfer learning (TL) models in classification and segmentation tasks, most recent AI-based research has proposed various DL and TL models for Covid-19 detection and infected-region segmentation from chest medical images such as X-rays or CT images. This paper describes a web-based application framework for Covid-19 lung infection detection and segmentation, characterized by a feedback mechanism for self-learning and tuning. It uses variations of three popular DL models, namely Mask R-CNN, U-Net, and U-Net++. The models were trained, evaluated, and tested using CT images of Covid patients collected from two different sources. The web application provides a simple, user-friendly interface to process CT images from various resources using the chosen models, thresholds, and other parameters to generate decisions on detection and segmentation. The models achieve high performance scores for Dice similarity, Jaccard similarity, accuracy, loss, and precision, with the U-Net model outperforming the others at more than 98% accuracy.
Subject(s)
COVID-19 , Trust , Humans , Feedback , COVID-19/diagnostic imaging , Pandemics , Neural Networks, Computer
ABSTRACT
BACKGROUND: With the global spread of COVID-19, the world has seen many patients, including many severe cases. The rapid development of machine learning (ML) has brought significant achievements in disease diagnosis and prediction. Current studies have confirmed that omics data at the host level can reflect the development process and prognosis of the disease. Since early diagnosis and effective treatment of severe COVID-19 patients remain challenging, this research aims to use omics data in different ML models for COVID-19 diagnosis and prognosis. We applied several ML models to omics data from a large number of individuals, first predicting whether patients are COVID-19 positive or negative, followed by the severity of the disease. RESULTS: On the COVID-19 diagnosis task, we obtained the best AUC of 0.99 with our multilayer perceptron model and the highest F1-score of 0.95 with our logistic regression (LR) model. For the severity prediction task, we achieved the highest accuracy of 0.76 with an LR model. Beyond classification and predictive modeling, our study found that ML models performed better on integrated multi-omics data than on single omics. By comparing the top features from different omics datasets, we also established the robustness of our model, with a wide range of applicability across diverse datasets related to COVID-19. Additionally, we found that omics-based models performed better than image- or physiological-feature-based models, proving the importance of omics-based datasets for future model development. CONCLUSIONS: This study diagnoses COVID-19 positive cases and predicts accurate severity levels. It lowers the dependence on clinical data and professional judgment by leveraging state-of-the-art models. Our model showed wide applicability across different omics datasets and is highly transferable to other respiratory or similar diseases.
By adopting such models, hospital and public health care mechanisms can optimize the distribution of medical resources and improve the robustness of the medical system.
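The reported F1-score follows directly from the usual precision/recall definitions, e.g.:

```python
def f1_score(y_true, y_pred):
    # counts over binary labels (1 = COVID-19 positive in this context)
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)
```

F1 is a sensible companion to AUC here because it summarizes performance at a single operating threshold, which is what a deployed screening pipeline actually uses.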
Subject(s)
COVID-19 Testing , COVID-19 , Humans , COVID-19/diagnosis , Machine Learning , Neural Networks, Computer , Logistic Models
ABSTRACT
BACKGROUND: Single-cell omics technology is rapidly developing to measure the epigenome, genome, and transcriptome across a range of cell types. However, it is still challenging to integrate omics data from different modalities. Here, we propose a variation of the Siamese neural network framework called MinNet, which is trained to integrate multi-omics data at single-cell resolution using a graph-based contrastive loss. RESULTS: By training the model and testing it on several benchmark datasets, we showed its accuracy and generalizability in integrating scRNA-seq with scATAC-seq, and scRNA-seq with epitope data. Further evaluation demonstrated our model's unique ability to remove batch effects, a common problem in practice. To show how the integration affects downstream analysis, we established a model-based smoothing and cis-regulatory element-inferring method and validated it with external pcHi-C evidence. Finally, we applied the framework to a COVID-19 dataset to bolster the original work with integration-based analysis, showing its necessity in single-cell multi-omics research. CONCLUSIONS: MinNet is a novel deep learning framework for integrating single-cell multi-omics sequencing data. It ranked at the top among other methods in benchmarking and is especially suitable for integrating datasets with batch and biological variances. With single-cell-resolution integration results, the interplay between genome and transcriptome can be analyzed to help researchers understand their data and questions.
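A generic InfoNCE-style contrastive term illustrates the training signal (MinNet's graph-based contrastive loss additionally weights pairs using a cell-cell graph, which is not modeled in this sketch):

```python
import math

def contrastive_loss(sim_row, positive_idx, temperature=0.1):
    # sim_row: similarities between one cell's embedding in modality A and
    # all candidate cells in modality B; the matched cell is positive_idx.
    # Cross-entropy of the positive pair against all candidates.
    scaled = [s / temperature for s in sim_row]
    m = max(scaled)
    log_sum = m + math.log(sum(math.exp(s - m) for s in scaled))
    return log_sum - scaled[positive_idx]
```

Minimizing this pulls a cell's scRNA-seq and scATAC-seq embeddings together while pushing apart non-matching cells, which is what places both modalities in a shared space for integration.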
Subject(s)
COVID-19 , Multiomics , Humans , Transcriptome , Neural Networks, Computer , Single-Cell Analysis/methods
ABSTRACT
The mutual relationship among daily averaged PM10, PM2.5, and NO2 concentrations in two megacities (Seoul and Busan) connected by the busiest highway in Korea was investigated using an artificial neural network (ANN) model with a sigmoid activation function, for the novel coronavirus (COVID-19) pandemic period from 1 January to 31 December 2020. Although the Korean Government neither locked down cities nor limited the activities of vehicles and people, daily and weekly mean NO2 concentrations in 2020 decreased by about 15% in Seoul and 12% in Busan relative to 2019. PM10 (PM2.5) concentrations also decreased by 15% (10%) in Seoul and 12% (10%) in Busan, a decline similar to that of NO2, improving air quality in each city. A multilayer perceptron (MLP), a feed-forward artificial neural network with a back-propagation training algorithm and a sigmoid activation function, was adopted to predict daily averaged PM10, PM2.5, and NO2 concentrations in the two cities and their interplay. The root mean square error (RMSE), with the coefficient of determination (R²), between predicted and measured daily means in Seoul was 2.251 with 0.882 (1.909 with 0.896; 1.913 with 0.892) for PM10, 0.717 with 0.925 (0.955 with 0.930; 0.955 with 0.922) for PM2.5, and 3.502 with 0.729 (2.808 with 0.746; 3.481 with 0.734) for NO2, using 2 (5; 7) nodes in a single hidden layer. The corresponding values in Busan were 2.155 with 0.853 (1.519 with 0.896; 1.649 with 0.869), 0.692 with 0.914 (0.891 with 0.910; 1.211 with 0.883), and 2.747 with 0.667 (2.277 with 0.669; 2.137 with 0.689), respectively. The closeness of the predicted values to the observed ones is reflected in a very high Pearson correlation coefficient r of over 0.932, except for 0.818 for NO2 in Busan. Modeling performance on daily averaged PM10, PM2.5, and NO2 concentrations in each city, using IBM SPSS v27 software, was compared via scatter plots and the daily distributions of predicted and observed values.
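The two goodness-of-fit measures used above, R² and Pearson r, can be written directly from their definitions:

```python
import math

def r_squared(obs, pred):
    # coefficient of determination: 1 - SS_res / SS_tot
    mean = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def pearson_r(x, y):
    # Pearson correlation coefficient between two series
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    denom = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return cov / denom
```

R² penalizes any deviation from the observed values, while Pearson r only measures linear co-movement, which is why the study reports both.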
Subject(s)
Air Pollutants , Air Pollution , COVID-19 , Humans , Air Pollutants/analysis , COVID-19/epidemiology , Pandemics , Communicable Disease Control , Air Pollution/analysis , Cities , Neural Networks, Computer , Particulate Matter/analysis , Environmental Monitoring/methods
ABSTRACT
Background: As the worldwide spread of coronavirus disease 2019 (COVID-19) continues, early prediction of the maximum severity is required for effective treatment of each patient. Objective: This study aimed to develop predictive models for the maximum severity of hospitalized COVID-19 patients using artificial intelligence (AI)/machine learning (ML) algorithms. Methods: The medical records of 2,263 COVID-19 patients admitted to 10 hospitals in Daegu, Korea, from February 18, 2020, to May 19, 2020, were comprehensively reviewed. The maximum severity during hospitalization was divided into four groups according to severity level: mild, moderate, severe, and critical. The patients' initial hospitalization records were used as predictors. The total dataset was randomly split into a training set and a testing set in a 2:1 ratio, stratified by the four maximum severity groups. Predictive models were developed on the training set and evaluated on the testing set. Two approaches were taken: using the four groups based on the original severity levels (4-group classification) and using two groups after regrouping the four severity levels into two (binary classification). Three variable selection methods, including randomForestSRC, were applied. For 4-group classification, GUIDE and a proportional odds model were used as the AI/ML algorithms. For binary classification, we used five AI/ML algorithms, including a deep neural network and GUIDE. Results: Of the four maximum severity groups, the moderate group had the highest percentage (1,115 patients; 49.5%). Simple linear trend analysis identified 25 statistically significant predictors contributing to exacerbation of maximum severity. Model development yielded three binary-classification models with high predictive performance: (1) mild vs. above moderate, (2) below moderate vs. above severe, and (3) below severe vs. critical, with AUC values of 0.883, 0.879, and 0.887, respectively. Based on each of the three predictive models, we developed web-based nomograms for clinical use (http://statgen.snu.ac.kr/software/nomogramDaeguCovid/). Conclusions: We successfully developed web-based nomograms predicting the maximum severity. These nomograms are expected to help plan effective treatment for each patient in the clinical field.
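The regrouping of the four ordinal severity levels into the three binary targets can be expressed as a cut point on the ordered scale:

```python
SEVERITY = ["mild", "moderate", "severe", "critical"]

def binary_labels(levels, cut):
    # cut=1: mild vs. above-moderate; cut=2: below-moderate vs. above-severe;
    # cut=3: below-severe vs. critical
    return [int(SEVERITY.index(level) >= cut) for level in levels]
```

Fitting one binary model per cut point is a standard way to exploit the ordering of severity levels that a flat 4-group classifier would ignore.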
Subject(s)
COVID-19 , Humans , COVID-19/epidemiology , Artificial Intelligence , Hospitalization , Machine Learning , Neural Networks, Computer
ABSTRACT
BACKGROUND: The exponential spread of coronavirus disease 2019 (COVID-19) imposes unexpected economic burdens on health systems worldwide, with severe shortages in hospital resources (beds, staff, equipment). Managing patients' length of stay (LOS) to optimize clinical care and utilization of hospital resources is very challenging. Projecting future demand requires reliable prediction of patients' LOS, which can inform appropriate actions. Therefore, the purpose of this research is to develop and validate models for predicting COVID-19 patients' hospital LOS using a multilayer perceptron-artificial neural network (MLP-ANN) algorithm with the best training algorithm. METHODS: Using a single-center registry, the records of 1225 laboratory-confirmed COVID-19 hospitalized cases from February 9, 2020 to December 20, 2020 were analyzed. First, the correlation coefficient technique was applied to determine the most significant variables as inputs to the ANN models; only variables with a correlation coefficient at a P-value < 0.2 were used in model construction. Then, prediction models were developed with 12 training algorithms on the full and selected feature datasets (90% for training, with 10% used for model validation). Afterward, the root mean square error (RMSE) was used to assess model performance and select the best ANN training algorithm. Finally, a total of 343 patients were used for external validation of the models. RESULTS: After feature selection, a total of 20 variables were determined as contributing factors to COVID-19 patients' LOS and used to build the models. The experiments indicated that the best performance belonged to a neural network with 20 and 10 neurons in the hidden layer trained with the Bayesian regularization (BR) algorithm, with RMSEs of 1.6213 and 2.2332 for the whole and selected features, respectively.
CONCLUSIONS: MLP-ANN-based models can reliably predict LOS in hospitalized patients with COVID-19 using readily available data at the time of admission. In this regard, the models developed in our study can help health systems to optimally allocate limited hospital resources and make informed evidence-based decisions.
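The correlation-based filter can be sketched as follows; note that the paper screens variables by P-value < 0.2, whereas this simplified stand-in thresholds |r| directly with a hypothetical cutoff (computing P-values would need a t-distribution):

```python
import math

def pearson_r(x, y):
    # Pearson correlation coefficient between a feature column and the target
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    denom = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return cov / denom

def select_features(columns, target, names, threshold=0.3):
    # keep features whose |r| with the target clears the (hypothetical) cutoff
    return [n for n, col in zip(names, columns) if abs(pearson_r(col, target)) >= threshold]
```

Filters of this kind run before training, so they cheaply prune inputs that carry no linear signal about LOS before the 12 training algorithms are compared.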
Subject(s)
COVID-19 , Humans , Bayes Theorem , Neural Networks, Computer , Algorithms , Length of Stay
ABSTRACT
As new coronavirus variants continue to emerge, better addressing vaccine-related concerns and promoting vaccine uptake in the coming years requires public health practitioners and policymakers to understand the role online communities play in shaping individuals' vaccine attitudes. To examine the mechanism that underpins the impact of participating in online communities on attitudes toward COVID-19 vaccines, this study adopted a two-stage hybrid structural equation modeling (SEM)-artificial neural network (ANN) approach to analyze survey responses from 1037 Reddit community members. Findings from the SEM demonstrated that, in leading to positive COVID-19 vaccine attitudes, sense of online community mediates the positive effects of perceived emotional support and social media usage, and perceived social norm mediates the positive effect of sense of online community as well as the negative effect of political conservatism. Health self-efficacy plays a moderating role between perceived emotional support and the perceived social norm of COVID-19 vaccination. Results from the ANN model showed that online community members' perceived social norm of COVID-19 vaccination is the most important predictor of positive COVID-19 vaccine attitudes. This study highlights the importance of harnessing online communities in designing COVID-related public health interventions and accelerating normative change around vaccination.
Subject(s)
COVID-19 , Vaccines , Humans , COVID-19 Vaccines , Latent Class Analysis , COVID-19/prevention & control , Vaccination , Attitude , Neural Networks, Computer
ABSTRACT
The COVID-19 pandemic has exposed the vulnerability of healthcare services worldwide, raising the need to develop novel tools for rapid and cost-effective screening and diagnosis. Clinical reports have indicated that COVID-19 infection may cause cardiac injury, and electrocardiograms (ECG) may serve as a diagnostic biomarker for COVID-19. This study aims to utilize ECG signals to detect COVID-19 automatically. We propose a novel method to extract ECG signals from ECG paper records, which are then fed into a one-dimensional convolutional neural network (1D-CNN) to learn and diagnose the disease. To evaluate the quality of the digitized signals, R peaks in the paper-based ECG images are labeled; the RR intervals calculated from each image are then compared to the RR intervals of the corresponding digitized signal. Experiments on the COVID-19 ECG images dataset demonstrate that the proposed digitization method correctly captures the original signals, with a mean absolute error of 28.11 ms. The 1D-CNN model (SEResNet18), trained on the digitized ECG signals, accurately distinguishes individuals with COVID-19 from other subjects, with classification accuracies of 98.42% and 98.50% for COVID-19 vs. Normal and COVID-19 vs. other classes, respectively. Furthermore, the proposed method also achieves a high level of performance on the multi-classification task. Our findings indicate that a deep learning system trained on digitized ECG signals can serve as a potential tool for diagnosing COVID-19.
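The RR-interval quality check described above can be sketched simply: detect R peaks, compute inter-peak intervals, and report the mean absolute error in milliseconds against reference intervals. The synthetic spike train, sampling rate, and `find_peaks` thresholds below are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of the digitization check: detect R peaks in a signal,
# derive RR intervals, and compare them to labeled reference intervals via
# mean absolute error in milliseconds. Signal and parameters are synthetic.
import numpy as np
from scipy.signal import find_peaks

fs = 500  # Hz (assumed sampling rate)
t = np.arange(0, 10, 1 / fs)
# Synthetic "ECG": sharp Gaussian spikes at 75 bpm (true RR = 800 ms)
signal = np.exp(-((t % 0.8 - 0.4) ** 2) / (2 * 0.004 ** 2))

peaks, _ = find_peaks(signal, height=0.5, distance=int(0.4 * fs))
rr_ms = np.diff(peaks) / fs * 1000.0        # RR intervals of the signal
rr_ref = np.full_like(rr_ms, 800.0)         # reference (labeled) intervals
mae_ms = np.mean(np.abs(rr_ms - rr_ref))
print(f"RR MAE: {mae_ms:.2f} ms")
```

On real paper-record scans, the reference intervals come from the manually labeled R peaks in the image, and an MAE around the paper's 28.11 ms indicates faithful digitization.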
Subject(s)
COVID-19 , Humans , COVID-19/diagnosis , Signal Processing, Computer-Assisted , Pandemics , Algorithms , Neural Networks, Computer , Electrocardiography
ABSTRACT
Convolutional Neural Networks (CNNs) have been employed to classify COVID cases from lung CT-scans with promising metrics. However, SARS-CoV-2 has mutated into multiple lineages (B.1.1.7, B.1.135, and P.1), so a more robust architecture is needed to separate COVID-positive from COVID-negative patients with less training. We have developed a neural network based on the number of channels present in the images. The CNN architecture is designed according to the number of channels in the dataset and extracts features separately from each channel of the CT-scan images. In the tower architecture, the first tower is dedicated to only the first channel of the image; the second tower handles the feature maps of the first and second channels; and the third tower takes into account the feature maps from all three channels. We used two datasets, one from Tongji Hospital, Wuhan, China and another SARS-CoV-2 dataset, to train and evaluate our CNN architecture. The proposed model achieved an average accuracy of 99.4%, an F1 score of 0.988, and an AUC of 0.99.
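The channel routing described above (tower 1 sees channel 0, tower 2 sees channels 0-1, tower 3 sees all three) can be illustrated without any deep-learning framework. The "feature extraction" below is a stand-in global average pool, not the paper's convolutional towers, and the image size is assumed.

```python
# Toy numpy-only illustration of the three-tower input routing: each tower
# receives progressively more channels of a 3-channel CT slice. The mean
# pool stands in for the per-tower CNN; the fused vector would feed a
# (hypothetical) classification head.
import numpy as np

rng = np.random.default_rng(0)
ct_slice = rng.random((224, 224, 3))  # H x W x C, illustrative size

tower_inputs = [ct_slice[:, :, :k] for k in (1, 2, 3)]
for i, x in enumerate(tower_inputs, start=1):
    print(f"tower {i} input shape: {x.shape}")

# Stand-in per-tower features: global average pooling over each channel,
# then concatenation before the classification head.
features = np.concatenate([x.mean(axis=(0, 1)) for x in tower_inputs])
print("fused feature vector length:", features.shape[0])  # 1 + 2 + 3 = 6
```

In the actual architecture each slice `tower_inputs[i]` would pass through its own convolutional stack before the fusion step.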
Subject(s)
COVID-19 , SARS-CoV-2 , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed
ABSTRACT
The global health crisis caused by the rapid spread of coronavirus disease (Covid-19) has posed a great threat to healthcare, the economy, and many other aspects of society. The highly infectious and insidious nature of the new coronavirus greatly increases the difficulty of outbreak prevention and control. Early and rapid detection of Covid-19 is an effective way to reduce its spread, yet detecting Covid-19 accurately and quickly in large populations remains a major challenge worldwide. In this study, a CNN-transformer fusion framework is proposed for the automatic classification of pneumonia on chest X-rays. The framework comprises two parts: data processing and image classification. The data processing stage eliminates differences between data from different medical institutions so that they share the same storage format. In the image classification stage, we use a multi-branch network with a custom convolution module and a transformer module, consisting of feature extraction, feature focus, and feature classification subnetworks. The feature extraction subnetworks extract shallow image features and exchange information through the convolution and transformer modules. Both local and global features are extracted by the convolution and transformer modules of the feature-focus subnetworks, and are classified by the feature classification subnetworks. The proposed network can decide whether a patient has pneumonia and can differentiate between Covid-19 and bacterial pneumonia. Evaluated on the collected benchmark datasets, the network achieves accuracy, precision, recall, and F1 scores of 97.09%, 97.16%, 96.93%, and 97.04%, respectively. Compared with methods proposed by other researchers, it achieves better accuracy, precision, and F1 score, indicating its strength for Covid-19 detection.
With further improvements to this network, we hope that it will provide doctors with an effective tool for diagnosing Covid-19.
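The data-processing stage described above, which brings X-rays from different institutions into a single storage format, can be sketched as follows. The target size, the grayscale conversion, the [0, 1] rescaling, and the use of `scipy.ndimage.zoom` for resizing are all assumptions for illustration, not the paper's exact pipeline.

```python
# Hedged sketch of a data-standardization step: convert chest X-rays of
# varying bit depth, size, and channel count to one common representation
# (grayscale, fixed size, float32 in [0, 1]). Parameters are assumed.
import numpy as np
from scipy.ndimage import zoom

def standardize(image: np.ndarray, size: int = 224) -> np.ndarray:
    """Normalize an X-ray array of any shape/depth to size x size in [0, 1]."""
    img = image.astype(np.float32)
    if img.ndim == 3:                      # collapse RGB to grayscale
        img = img.mean(axis=2)
    # Bilinear resize to the common spatial size
    img = zoom(img, (size / img.shape[0], size / img.shape[1]), order=1)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

# Two "institutions" with different raw formats converge to one format
a = standardize(np.random.default_rng(0).integers(0, 4096, (512, 480)))    # 12-bit
b = standardize(np.random.default_rng(1).integers(0, 256, (300, 300, 3)))  # 8-bit RGB
print(a.shape, b.shape)
```

After this step every image, regardless of origin, can be fed to the same multi-branch classification network.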