ABSTRACT
To predict the mortality of patients with coronavirus disease 2019 (COVID-19). We collected clinical data of COVID-19 patients between January 18 and March 29, 2020, in Wuhan, China. Gradient boosting decision tree (GBDT), logistic regression (LR), and simplified LR (LR-5) models were built to predict the mortality of COVID-19. We also evaluated the models by computing the area under the curve (AUC), accuracy, positive predictive value (PPV), and negative predictive value (NPV) under fivefold cross-validation. A total of 2924 patients were included in our evaluation, of whom 257 (8.8%) died and 2667 (91.2%) survived during hospitalization. Upon admission, there were 21 (0.7%) mild cases, 2051 (70.1%) moderate cases, 779 (26.6%) severe cases, and 73 (2.5%) critically severe cases. The GBDT model exhibited the highest fivefold AUC (0.941), followed by LR (0.928) and LR-5 (0.913). The diagnostic accuracies of GBDT, LR, and LR-5 were 0.889, 0.868, and 0.887, respectively. In particular, the GBDT model demonstrated the highest sensitivity (0.899) and specificity (0.889). The NPV of all three models exceeded 97%, while their PPV values were relatively low: 0.381 for LR, 0.402 for LR-5, and 0.432 for GBDT. Regarding severe and critically severe cases, the GBDT model also performed the best, with a fivefold AUC of 0.918. In an external validation test of the LR-5 model using 72 cases of COVID-19 from Brunei, leukomonocyte (%) showed the highest fivefold AUC (0.917), followed by urea (0.867), age (0.826), and SpO2 (0.704). The findings confirm that the mortality prediction performance of the GBDT model is better than that of the LR models in confirmed cases of COVID-19. This performance comparison appears independent of disease severity. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00521-020-05592-1.
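The fivefold evaluation protocol described above can be sketched in plain Python. This is an illustrative sketch, not the authors' code: `auc` computes the rank-based (Mann-Whitney) AUC from true labels and model scores, and `fivefold_indices` produces a shuffled fold split; model training itself is omitted.

```python
import random

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) statistic."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # Count pairwise "wins" of positives over negatives; ties count half.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fivefold_indices(n, seed=0):
    """Shuffle indices 0..n-1 and split them into five roughly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::5] for i in range(5)]
```

In the study's setting, each fold would in turn serve as the held-out set for a GBDT or LR model fitted on the remaining four, and the five resulting AUCs would be averaged.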
ABSTRACT
Recently, the COVID-19 pandemic has triggered different behaviors in education, especially during the lockdown, to contain the virus outbreak in the world. As a result, educational institutions worldwide are currently using online learning platforms to maintain their education presence. This research paper introduces and examines a dataset, E-LearningDJUST, that represents a sample of students' study progress during the pandemic at Jordan University of Science and Technology (JUST). The dataset depicts a sample of the university's students as it includes 9,246 students from 11 faculties taking four courses in the spring 2020, summer 2020, and fall 2021 semesters. To the best of our knowledge, it is the first collected dataset that reflects students' study progress within a Jordanian institute using e-learning system records. One of this work's key findings is a high correlation between e-learning events and the final grades out of 100. Thus, the E-LearningDJUST dataset was used to evaluate two robust machine learning models (Random Forest and XGBoost) and one simple deep learning model (Feed Forward Neural Network) for predicting students' performance. Using RMSE as the primary evaluation criterion, the RMSE values range between 7 and 17. Among the other main findings, applying feature selection with the random forest leads to better prediction results for all courses, with RMSE differences ranging between 0 and 0.20. Finally, a comparison study examined students' grades before and after the Coronavirus pandemic to understand how it impacted their grades. A higher success rate was observed during the pandemic than before it, which is expected because the exams were online. However, the proportion of students with high marks remained similar to that of pre-pandemic courses.
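Since RMSE is the primary criterion above, a minimal definition makes the reported 7-17 range and the 0-0.20 feature-selection differences concrete (an illustrative helper, not the authors' pipeline):

```python
from math import sqrt

def rmse(actual, predicted):
    """Root-mean-square error between actual and predicted grades (0-100 scale)."""
    assert len(actual) == len(predicted), "series must be the same length"
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```

For example, predictions off by about 3-4 points per student on a 100-point scale yield an RMSE near the bottom of the reported 7-17 range.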
ABSTRACT
In this study, we combine machine learning and geospatial interpolation to create two-dimensional high-resolution ozone concentration fields over the South Coast Air Basin (SoCAB) for the entire year of 2020. Three spatial interpolation methods (bicubic, IDW, and ordinary kriging) were employed. The predicted ozone concentration fields were constructed using 15 building sites, and random forest regression was employed to test the predictability of 2020 data based on input data from past years. Spatially interpolated ozone concentrations were evaluated at twelve sites that were independent of the actual spatial interpolations to find the most suitable method for SoCAB. Ordinary kriging interpolation had the best performance overall for 2020: concentrations were overestimated for the Anaheim, Compton, LA North Main Street, LAX, Rubidoux, and San Gabriel sites and underestimated for the Banning, Glendora, Lake Elsinore, and Mira Loma sites. The model performance improved from west to east, exhibiting better predictions for inland sites. The model is best at interpolating ozone concentrations inside the sampling region (bounded by the building sites), with R2 ranging from 0.56 to 0.85 for those sites; prediction deficiencies occurred at the periphery of the sampling region, with the lowest R2 of 0.39 for Winchester. All the interpolation methods poorly predicted and underestimated ozone concentrations in Crestline during summer (by up to 19 ppb). The poor performance for Crestline indicates that the site has a distribution of air pollution levels independent of all other sites. Therefore, historical data from coastal and inland sites should not be used to predict ozone in Crestline using data-driven spatial interpolation approaches. The study demonstrates the utility of machine learning and geospatial techniques for evaluating air pollution levels during anomalous periods.
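Of the three interpolation methods compared, inverse distance weighting (IDW) is the simplest to illustrate. The sketch below is a generic IDW estimator, not the study's implementation; the station layout and the distance power of 2 are assumptions for illustration.

```python
def idw(stations, x, y, power=2):
    """Inverse-distance-weighted estimate at (x, y).

    `stations` is a list of (xs, ys, value) tuples, e.g. ozone monitors
    with their measured concentrations in ppb.
    """
    num = den = 0.0
    for xs, ys, value in stations:
        d2 = (x - xs) ** 2 + (y - ys) ** 2
        if d2 == 0:          # query point coincides with a station
            return value
        w = 1.0 / d2 ** (power / 2)  # weight falls off as 1 / distance^power
        num += w * value
        den += w
    return num / den
```

A point midway between two monitors receives their average, while points near one monitor are dominated by it; this locality is why sites outside the sampling region (like Crestline) are poorly served by any data-driven interpolation.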
ABSTRACT
Ninety-six million people are symptomatically infected with Dengue globally every year. Under the current standard-of-care, up to 20% of Dengue patients may be hospitalized, while only 500,000 develop Dengue Haemorrhagic Fever (DHF) and require hospitalization. This leads to unnecessary overwhelming of hospitals in tropical countries during large Dengue epidemics, especially when healthcare systems are grappling with large numbers of COVID-19 patients. Our research team set out to discover biomarkers to prognosticate Dengue patients, and augment the infectious disease clinician's decision-making process to hospitalize Dengue patients. Host biomarkers with concentrations significantly different between pooled serum samples of Dengue Fever (DF) patients and DHF patients were identified using protein array. The prognostication capabilities of selected biomarkers were then validated over 283 adult Dengue patients recruited from three Singapore tertiary hospitals, prior to the diagnosis of DHF. Three biomarkers (A2M, CMA1 and VEGFA) were identified that provide independent prognostication value from one another, and from clinical parameters commonly monitored in Dengue patients. The combination of all three biomarkers was able to identify from as early as Day 1 after the onset of fever, DF patients whose conditions will deteriorate into DHF. The biomarkers are robust and able to predict DHF well when trained on different AI/ML algorithms (logistic regression, support vector machine, decision tree, random forest, AdaBoost and gradient boosting). When stacked, prediction models based on the biomarkers were able to predict DHF with 97.3% sensitivity, 92.7% specificity, 66.7% PPV, 99.6% NPV and an AUC of 0.978. To the best of our knowledge, our panel of three biomarkers offers the highest accuracy in prognosticating Dengue to date. 
Further studies are required to validate the biomarkers in different geographical settings and pilot their implementation as part of the standard-of-care workflow for Dengue patients. [ABSTRACT FROM AUTHOR] Copyright of International Journal of Infectious Diseases is the property of Elsevier B.V. and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)
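The sensitivity, specificity, PPV, and NPV reported for the stacked model all derive from the same confusion-matrix counts. A minimal helper (illustrative, not the study's code) makes the relationships explicit:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic metrics from confusion-matrix counts.

    tp/fn: DHF patients correctly/incorrectly classified;
    fp/tn: DF patients incorrectly/correctly classified.
    """
    return {
        "sensitivity": tp / (tp + fn),  # fraction of DHF cases caught
        "specificity": tn / (tn + fp),  # fraction of DF cases cleared
        "ppv": tp / (tp + fp),          # precision of a positive call
        "npv": tn / (tn + fn),          # reliability of a negative call
    }
```

Note how PPV stays moderate (66.7% in the study) even with high sensitivity and specificity when the positive class (DHF) is rare, which is exactly the triage setting described.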
ABSTRACT
The effects of the pandemic translate into a variety of physical and emotional reactions affecting the population, particularly the elderly Panamanian population, who have struggled to overcome the emerging challenges of an infectious disease that, beyond its physical health implications, has also profoundly affected their well-being and mental health. To help the Panamanian elderly population improve emotional self-control and mental relaxation, we propose a software architecture for the development of a recommendation system integrating artificial intelligence (AI), the internet of things (IoT), and mobile applications. This research will give the elderly population in Panama a mobile application that serves as a beneficial non-pharmaceutical alternative for coping with psychological conditions caused by the Covid-19 disease. The most relevant limitation is the acquisition of the dataset for training. As future work, we hope to develop a more robust architecture and implement it in other activities related to the health self-management of Panamanian patients. © 2022, Associacao Iberica de Sistemas e Tecnologias de Informacao. All rights reserved.
ABSTRACT
At the end of 2019, the Public Health Commission of Hubei Province, China, reported cases of severe pneumonia of unknown origin, marked by fever, malaise, dry cough, dyspnea, and respiratory failure, occurring in the urban area of Wuhan, according to the World Health Organization (WHO). The lung infection was caused by a novel coronavirus, SARS-CoV-2, and the resulting illness is known as COVID-19 (coronavirus disease 2019). Since then, infections have increased exponentially, and the WHO declared the outbreak a worldwide emergency at the beginning of March 2020. Infected individuals, including asymptomatic ones, are the main sources of the virus. Transmission occurs mainly through the air via droplets; however, indirect transmission is also possible, such as through contact with infected surfaces. It is therefore essential to identify viral carriers as soon as possible in order to stop the spread of the disease and reduce morbidity and mortality. Imaging examinations, which are among the specific tests used to make the definitive diagnosis, are crucial in the patient's management when COVID-19 is suspected. Numerous papers that use machine learning techniques discuss the use of chest X-ray radiographs as a component that aids in diagnosis and permits disease follow-up. The goal of this work is to supply the scientific community with information on the most widely used machine learning algorithms applied to chest X-ray images. © 2022 IEEE.
ABSTRACT
The World Health Organization (WHO) declared the novel coronavirus a global pandemic on 11 March 2020. It was known to originate from Wuhan, China, and its spread was unstoppable due to the lack of proper medication and vaccines. Forecasting models have been developed to predict the number of cases and the fatality rate of coronavirus disease 2019 (COVID-19), which are highly volatile. This paper provides two algorithms, namely linear regression and long short-term memory (LSTM) using deep learning, for time series-based prediction. It also uses the ReLU activation function and the Adam optimiser. This paper also reports a comparative study on existing models for COVID-19 cases from different continents of the world. It also provides an extensive model with a brief prediction of the number of cases and the recovered, active, and death rates until January 2021. Copyright © 2023 Inderscience Enterprises Ltd.
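Of the two algorithms named above, the linear-regression component can be illustrated with a minimal ordinary-least-squares fit of a trend line to a daily case series. This is an illustrative sketch under simplified assumptions (a single time feature); the paper's actual feature set and its LSTM model are not reproduced here.

```python
def fit_line(t, y):
    """Ordinary least squares for y ≈ a + b*t.

    t: day indices (e.g. days since the first reported case);
    y: cumulative or daily case counts. Returns (intercept, slope).
    """
    n = len(t)
    mt, my = sum(t) / n, sum(y) / n
    # Slope = covariance(t, y) / variance(t); intercept from the means.
    b = sum((ti - mt) * (yi - my) for ti, yi in zip(t, y)) / \
        sum((ti - mt) ** 2 for ti in t)
    a = my - b * mt
    return a, b
```

Extrapolating the fitted line gives the kind of short-horizon point forecast such models produce, though a straight line cannot capture the exponential or saturating phases of an epidemic curve, which is one motivation for the LSTM.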
ABSTRACT
The healthcare industry, as well as business and society, have been revolutionized by Artificial Intelligence (AI) and Machine Learning (ML). Microbiology, biochemistry, genetics, structural biology, and immunology have all seen significant advances, and the field of bioinformatics has expanded considerably in order to handle the resulting massive data influx. Bioinformatics, which seeks to apply computational methods for a better understanding of the biological sciences, sits at the crossroads of data science and the wet lab. Several innovative databases and computational techniques have been proposed in this sector to advance immunology research, with many of them relying on artificial intelligence and machine learning to anticipate complicated immune system activities, such as epitope identification for lymphocytes. In the past decade, machine learning models trained on specific proteins have provided inexpensive and quick-to-implement strategies for the discovery of effective viral treatments. Given a target biomolecule, these models can predict inhibitor candidates using structural data. The emergence of the COVID-19 coronavirus has resulted in significant network data traffic and resource optimization demands, rendering standard network designs incapable of coping with COVID-19's consequences. Researchers are encouraged by the use of ML and AI in previous epidemics, which offers a novel strategy for combating the latest COVID-19 pandemic. © 2023 Scrivener Publishing LLC.
ABSTRACT
Machine learning extracts models from huge quantities of data. Models trained and validated on past data can be deployed to make forecasts as well as to classify new incoming data. The real world that generates the data may change over time, rendering the deployed model obsolete. To preserve the quality of the currently deployed model, continuous machine learning is required. Our approach retrospectively evaluates, in an online fashion, the behaviour of the currently deployed model. A drift detector detects any performance slump and, if necessary, replaces the previous model with an up-to-date one. The approach was evaluated on a dataset of 8642 hematochemical examinations from hospitalized patients gathered over 6 months, with the model predicting the RT-PCR test result for COVID-19. The method reached an area under the curve (AUC) of 0.794, 6% better than offline and 5% better than standard online binary classification techniques. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)
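The drift-detection idea (flag a performance slump by comparing recent accuracy against an earlier reference window, then trigger retraining) can be sketched as follows. This is a generic sliding-window detector, not the paper's detector; the window size and threshold `delta` are illustrative assumptions.

```python
from collections import deque

class DriftDetector:
    """Flags drift when recent accuracy falls below a reference window by `delta`."""

    def __init__(self, window=50, delta=0.1):
        self.ref = deque(maxlen=window)     # older prediction outcomes
        self.recent = deque(maxlen=window)  # newest prediction outcomes
        self.delta = delta

    def add(self, correct):
        """Record whether the deployed model's latest prediction was correct."""
        if len(self.recent) == self.recent.maxlen:
            # Oldest "recent" outcome graduates into the reference window.
            self.ref.append(self.recent.popleft())
        self.recent.append(1 if correct else 0)

    def drift(self):
        """True once both windows are full and accuracy has slumped."""
        if len(self.ref) < self.ref.maxlen:
            return False
        ref_acc = sum(self.ref) / len(self.ref)
        rec_acc = sum(self.recent) / len(self.recent)
        return ref_acc - rec_acc > self.delta
```

In a continuous-learning loop, a `drift() == True` signal would prompt retraining on the most recent examinations and swapping in the refreshed model.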
ABSTRACT
Wireless body area networks (WBANs) are helpful for remote health monitoring, especially during the COVID-19 pandemic. Due to the limited batteries of bio-sensors, energy-efficient routing is vital to achieve load-balancing and prolong the network's lifetime. Although many routing techniques have been presented for WBANs, they were designed for a specific application, and their performance may degrade in other applications. In this paper, an ensemble Metaheuristic-Driven Machine Learning Routing Protocol (MDML-RP) is introduced as an adaptive real-time remote health monitoring protocol for WBANs. The motivation behind this technique is to utilize the superior route optimization solutions offered by metaheuristics and to integrate them with the real-time routing capability of machine learning. The proposed method involves two phases: offline model tuning and online routing. During the offline pre-processing step, a metaheuristic algorithm based on the whale optimization algorithm (WOA) is used to optimize routes across various WBAN configurations. By applying WOA to multiple WBANs, a comprehensive dataset is generated. This dataset is then used to train and test a machine learning regressor based on support vector regression (SVR). Next, the optimized MDML-RP model is applied as an adaptive real-time protocol, which can efficiently respond to just-in-time requests in new, previously unseen WBANs. Simulation results in various WBANs demonstrate the superiority of the MDML-RP model in terms of application-specific performance measures when compared with existing heuristic, metaheuristic, and machine learning protocols. The findings indicate that the proposed MDML-RP model achieves noteworthy improvement rates across various performance metrics when compared to the existing techniques, with an average improvement of 42.3% for network lifetime, 15.4% for reliability, 31.3% for path loss, and 31.7% for hot-spot temperature.
ABSTRACT
The proceedings contain 35 papers. The topics discussed include: germline variants and prognostic factors for cutaneous melanoma in children and adolescents; association between polygenic risk score and multiple primary melanoma; porocarcinoma: an epidemiological, clinical, and dermoscopic 20-year study; primary cutaneous melanoma and COVID-19: a hospital-based study; atypical spitz tumors: an epidemiological, clinical and dermoscopic multicenter study with 16 years of follow-up; pediatric melanoma: an epidemiological, clinical and dermoscopic multicenter study; recurrence-free survival prediction in melanoma patients by exploiting artificial intelligence techniques on melanoma whole slide images; ultra-high frequency ultrasound and machine learning approaches for the differential diagnosis of melanocytic lesions; and genetic determinants of response to therapy in a real-world setting of advanced/metastatic melanoma patients: whole-exome sequencing and cfDNA analysis.
ABSTRACT
Background: COVID-19 is an infectious disease caused by SARS-CoV-2. The symptoms of COVID-19 vary from mild to moderate respiratory illness, and it sometimes requires urgent medication. Therefore, it is crucial to detect COVID-19 at an early stage through specific clinical tests, testing kits, and medical devices. However, these tests are not always available during a pandemic. Therefore, this study developed an automatic, intelligent, rapid, and real-time diagnostic model for the early detection of COVID-19 based on its symptoms. Methods: A COVID-19 knowledge graph (KG), constructed from heterogeneous literature data, was imported to capture the different relations of COVID-19. We added the human disease ontology to the COVID-19 KG and applied a node-embedding graph algorithm called fast random projection to extract an extra feature from the COVID-19 dataset. Subsequently, experiments were conducted using two machine learning (ML) pipelines to predict COVID-19 infection from its symptoms. Additionally, automatic tuning of the model hyperparameters was adopted. Results: We compared two graph-based ML models: logistic regression (LR) and random forest (RF). The proposed graph-based RF model achieved a small error rate (0.0064) and the best scores on all performance metrics, including specificity = 98.71%, accuracy = 99.36%, precision = 99.65%, recall = 99.53%, and F1-score = 99.59%. Furthermore, the Matthews correlation coefficient achieved by the RF model was higher than that of the LR model. Comparative analysis with other ML algorithms and with studies from the literature showed that the proposed RF model exhibited the best detection accuracy. Conclusion: The graph-based RF model registered high performance in classifying the symptoms of COVID-19 infection, thereby indicating that graph data science, in conjunction with ML techniques, helps improve performance and accelerate innovations. © Copyright 2023 Alqaissi et al.
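The node-embedding step can be illustrated with a drastically simplified, single-iteration variant of fast random projection: each node's embedding is the degree-normalized sum of its neighbours' sparse random base vectors. This sketch is not the production FastRP implementation (which adds iteration weights and normalization schemes omitted here); the graph, dimension, and seed are illustrative.

```python
import random

def fast_rp(adj, dim=8, seed=42):
    """One-iteration FastRP sketch: average neighbours' sparse random vectors.

    adj: adjacency dict mapping node -> list of neighbour nodes.
    Returns a dict mapping node -> embedding vector of length `dim`.
    """
    rng = random.Random(seed)
    # Sparse random base vector per node, entries drawn from {-1, 0, 0, +1}.
    base = {v: [rng.choice((-1, 0, 0, 1)) for _ in range(dim)] for v in adj}
    emb = {}
    for v, nbrs in adj.items():
        vec = [0.0] * dim
        for u in nbrs:
            for i in range(dim):
                vec[i] += base[u][i]
        k = max(len(nbrs), 1)           # degree normalization
        emb[v] = [x / k for x in vec]
    return emb
```

The resulting vectors summarize each symptom/disease node's graph neighbourhood and can be appended as extra columns to the tabular COVID-19 dataset before training the LR or RF classifier.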
ABSTRACT
Patient safety has constituted a huge public health concern for a long period of time. The focus of safety in the healthcare context is on reducing preventable harms, such as medical errors and treatment-related injuries. The COVID-19 pandemic, if anything, has acted as a wake-up call for health experts to address latent safety problems. Advancements in the field of artificial intelligence have highlighted the use of intelligent systems as a proven means of improving patient safety and enhancing quality of care. This chapter explores trends in quality and safety research, the use of machine learning and natural language processing in the context of improving patient safety and outcomes, the use of patient safety databases as a source of data for machine learning, and the future of artificial intelligence in quality and safety. © Springer Nature Switzerland AG 2022.
ABSTRACT
Preface
The 16th International Conference on the Modelling of Casting, Welding, and Advanced Solidification Processes (MCWASP XVI) was held from June 18 to 23, 2023, in Banff, Canada, at the Banff Centre for Arts and Creativity. Founded in 1933, the Centre in Treaty 7 Territory within Banff National Park (Canada's first National Park) is a learning organization built upon an extraordinary legacy of excellence in artistic and creative development. The "all-inclusive" nature of the conference and the remote setting meant that participants dined, attended oral and poster presentations, and participated in social activities as a group, fostering outstanding opportunities for networking. Given that the MCWASP community had not met in person since 2015 in Japan (the 2020 edition of MCWASP was virtual owing to COVID-19), the 2023 conference provided the opportunity to renew old friendships and make new ones, as well as to discuss the science of solidification and related processes, all within the backdrop of the beautiful Canadian Rocky Mountains. The technical program comprised more than 70 oral and poster presentations. In addition to content related to modelling of casting, welding, and advanced solidification processes, keynotes were invited to talk about related subjects (artificial intelligence/machine learning, and permeability modelling in shale rock) as well as the rich diversity of fossils, especially dinosaurs, found in Alberta. The oral technical program was organized as a single session (i.e., no concurrent presentations).
It featured all aspects of solidification modelling, including solidification process technologies (continuous and semi-continuous casting, shape casting, additive manufacturing, and welding), coupled multi-physics simulations, defect formation, fluid flow, micro- and macro-structure formation, numerical methods, and related experimentation, especially in-situ observation of solidification. The four-day technical program was spread over five days to give participants the opportunity to explore the stunning Canadian Rocky Mountains. In these proceedings, the papers are organized by major theme. The dominant topics are Additive Manufacturing and Welding and Microstructure Formation, followed by Continuous Casting – Shape Casting, Heat Transfer and Fluid Flow, Alloy Segregation, Defects, Imaging of Solidification, Thermomechanics, and Materials Properties. In these themes, the authors report advances in numerical modelling techniques, new scientific and process developments in solidification, and related in-situ experimentation. Although significant progress has been made over these past 16 MCWASP conferences covering 43 years, it is clear that the complexity of advanced solidification phenomena as related to conventional and emerging manufacturing technologies still attracts a great deal of scientific and industrial interest to support technological innovation.
André Phillion, Banff, Canada, June 2023. The list of peer reviewers, sponsors, MCWASP XVI organizers, and the International Scientific Committee is available in this PDF.
ABSTRACT
Recent studies in machine learning have demonstrated the effectiveness of applying graph neural networks (GNNs) to single-cell RNA sequencing (scRNA-seq) data to predict COVID-19 disease states. In this study, we propose a graph attention capsule network (GACapNet) which extracts and fuses Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) transcriptomic patterns to improve node classification performance on cells and genes. Significantly different from existing GNN approaches, we innovatively incorporate a capsule layer with dynamic routing into our model architecture to combine and fuse gene features effectively and to allow the more prominent gene features to be present in the output. We evaluate our GACapNet model on two scRNA-seq datasets, and the experimental results show that our GACapNet model significantly outperforms state-of-the-art baseline models. Therefore, our study demonstrates the capability of advanced machine learning models to generate predictive features and evolutionary patterns of the SARS-CoV-2 pathogen, and their applicability to closing knowledge gaps in the pathogenesis and recovery of COVID-19. © 2022 IEEE.
ABSTRACT
Purpose: Most epidemic transmission forecasting methods can only provide deterministic outputs. This study aims to show that probabilistic forecasting, in contrast, is suitable for stochastic demand modeling and emergency medical resource planning under uncertainty. Design/methodology/approach: Two probabilistic forecasting methods, i.e. quantile regression convolutional neural network and kernel density estimation, are combined to provide the conditional quantiles and conditional densities of infected populations. The value of probabilistic forecasting in improving decision performance and controlling decision risks is investigated by an empirical study on emergency medical resource planning for the COVID-19 pandemic. Findings: The managerial implications obtained from the empirical results include: (1) the optimization models using the conditional quantile or the point forecasting result obtain better results than those using the conditional density; (2) for sufficient resources, decision-makers' risk preferences can be incorporated to make tradeoffs between the possible surpluses and shortages of resources in emergency medical resource planning at different quantile levels; and (3) for scarce resources, the differences in emergency medical resource planning at different quantile levels greatly decrease or disappear because of the existence of forecasting errors and supply quantity constraints. Originality/value: Very few studies concern probabilistic epidemic transmission forecasting methods, and this is the first attempt to incorporate deep learning methods into a two-phase framework for data-driven emergency medical resource planning under uncertainty. Moreover, the findings from the empirical results are valuable for selecting a suitable forecasting method and designing an efficient emergency medical resource plan.
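Quantile regression models such as the quantile regression convolutional neural network above are trained by minimizing the pinball (quantile) loss, which penalizes under- and over-prediction asymmetrically so that the fitted output approximates the conditional q-quantile. A minimal illustration (generic definition, not the paper's network):

```python
def pinball_loss(y_true, y_pred, q):
    """Average pinball loss at quantile level q (0 < q < 1).

    Under-prediction (y > f) costs q per unit of error;
    over-prediction (y < f) costs (1 - q) per unit of error.
    """
    total = 0.0
    for y, f in zip(y_true, y_pred):
        diff = y - f
        total += q * diff if diff >= 0 else (q - 1) * diff
    return total / len(y_true)
```

At q = 0.9, under-predicting the infected population is nine times costlier than over-predicting it, which is how a planner's preference for resource surpluses over shortages enters the forecast.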
ABSTRACT
In recent years, the rapid development of artificial intelligence has enabled the launch of many new screening tools. This review aims to facilitate screening tool selection through a systematic narrative review and feature analysis. The current adoption rate of transparent tool reporting is low: by screening 191 studies published in the Review of Educational Research since 2015, we found that only eight studies reported screening tools. More research is needed to understand the reasons behind this phenomenon. After consulting various sources, 26 available screening tools in the market were found. Among them, we identified and evaluated 12 screening tools for educational reviewers and ranked them in descending order of feature score: Covidence (1), DistillerSR (2, tied), EPPI-Reviewer (2, tied), CADIMA (4), Swift-Active (5), Rayyan (6, tied), SysRev (6, tied), Abstrackr (8, tied), ReLiS (8, tied), RevMan (8, tied), ASReview (11), and Excel (12). In the discussion, we provide insights into the promise and bias in tools' machine learning algorithms. Our results encourage researchers to report their tool usage in publications and select tools based on suitability instead of convenience. © 2023 Taylor & Francis Group, LLC.
ABSTRACT
When COVID-19 reached Dr. Helmi Zakariah's home country of Malaysia in January 2020, he was consulting in Brazil as CEO of Artificial Intelligence in Medical Epidemiology (AIME) on the application of machine learning to dengue outbreak forecasting. A trained physician, public health professional, and digital health entrepreneur, Dr. Zakariah found himself in high demand as the Malaysian government began to mount its COVID-19 response. He was asked to return home to his state of Selangor to lead the Digital Epidemiology portfolio for the Selangor State Task Force for COVID-19, and upon arrival immediately began to address the many challenges COVID-19 presented. This session will bring the audience along the sobering journey of health digitisation and adoption in the heat of the pandemic and beyond: not only what works but, more importantly, what doesn't. It will also reflect on the case that the cost of underinvestment and inaction in digital innovation for health is simply too high in the face of another pandemic. Copyright © 2023
ABSTRACT
This project aims to devise an alternative for Coronavirus detection using various audio signals. The aim is to create a machine-learning model, assisted by speech processing techniques, that can be trained to distinguish symptomatic and asymptomatic Coronavirus cases. Here, features exclusive to a person's vocal cords are used for COVID-19 detection. The procedure is to train the classifier using a dataset containing recordings from people of various ages, both infected and disease-free, including patients with comorbidities. We present a machine learning-based Coronavirus classifier model that can separate Coronavirus-positive and Coronavirus-negative patients from cough, breathing, and speech recordings. The model was trained and evaluated using several machine learning classifiers, such as Random Forest Classifier, Logistic Regression (LR), Decision Tree Classifier, k-nearest Neighbour (KNN), Naive Bayes Classifier, Linear Discriminant Analysis, and a neural network. This project helps track COVID-19 patients at a low cost using a contactless procedure and reduces the workload on testing centers. © 2023 IEEE.
ABSTRACT
To restrict the virus's transmission during the pandemic and lessen the strain on the healthcare industry, computer-assisted diagnosis for the accurate and speedy detection of coronavirus disease (COVID-19) has become a prerequisite. Compared to other types of imaging and detection, chest X-ray (CXR) imaging provides several advantages. Healthcare practitioners may benefit from any technological instrument providing quick and accurate COVID-19 infection detection. COVID-LiteNet is a technique suggested in this paper that combines white balance with Contrast Limited Adaptive Histogram Equalization (CLAHE) and a convolutional neural network (CNN). White balance is employed as an image pre-processing step in this approach, followed by CLAHE, to improve the visibility of CXR images; the CNN is trained using sparse categorical cross-entropy for image classification tasks and yields a small parameter file size, i.e., 2.24 MB. The suggested COVID-LiteNet technique produced better results than a vanilla CNN with no pre-processing. The proposed approach outperformed several state-of-the-art methods, with a binary classification accuracy of 98.44 percent and a multi-class classification accuracy of 97.50 percent, and performed strongly on various other performance parameters. COVID-LiteNet may help radiologists discover COVID-19 patients from CXR pictures by providing thorough model interpretations, cutting diagnostic time significantly. © 2023 IEEE.
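The white-balance pre-processing step can be illustrated with the simple gray-world assumption: each colour channel is rescaled so its mean matches the global mean, removing a colour cast before CLAHE is applied. The paper does not specify which white-balance algorithm it uses, so this is an assumed, generic variant operating on a list of RGB tuples rather than an image array.

```python
def gray_world_balance(pixels):
    """Gray-world white balance on a list of (R, G, B) tuples (0-255 floats).

    Each channel is scaled so that its mean equals the average of the
    three channel means; results are clipped to the valid 0-255 range.
    """
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [
        tuple(min(255.0, p[c] * gray / means[c]) for c in range(3))
        for p in pixels
    ]
```

After balancing, the three channel means coincide, which standardizes CXR images captured under differing acquisition settings before contrast enhancement and CNN training.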