Results 1 - 20 of 37
1.
PeerJ Comput Sci ; 10: e2082, 2024.
Article in English | MEDLINE | ID: mdl-38855257

ABSTRACT

Background: Breast cancer remains a pressing global health concern, necessitating accurate diagnostics for effective interventions. Deep learning models (AlexNet, ResNet-50, VGG16, GoogLeNet) show remarkable microcalcification identification accuracy (>90%). However, their distinct architectures and methodologies pose challenges. We propose an ensemble model that merges their complementary perspectives, enhances precision, and clarifies the critical factors for breast cancer intervention. Evaluation favoured GoogLeNet and ResNet-50, driving their selection for combined functionality and ensuring improved precision and dependability in microcalcification detection in clinical settings. Methods: This study presents a comprehensive mammogram preprocessing framework using an optimized deep learning ensemble approach. The proposed framework begins with artifact removal using Otsu segmentation and morphological operations. Subsequent steps include image resizing, adaptive median filtering, and development of a deep convolutional neural network (D-CNN) via transfer learning with the ResNet-50 model. Hyperparameters are optimized, and an ensemble (AlexNet, GoogLeNet, VGG16, ResNet-50) is constructed to identify the localized area of microcalcification. A rigorous evaluation protocol validates the efficacy of the individual models, culminating in the ensemble model demonstrating superior predictive accuracy. Results: Based on our analysis, the proposed ensemble model exhibited exceptional performance in the classification of microcalcifications, as evidenced by its average confidence score, which indicated a high degree of dependability and certainty in differentiating these critical characteristics. The proposed model demonstrated a noteworthy average confidence of 0.9305 in the classification of microcalcification, outperforming alternative models and providing substantial insight into the model's dependability. The average confidence of the ensemble model in classifying normal cases was 0.8859, which reinforced the model's consistent and dependable predictions. In addition, the ensemble model attained remarkably high accuracy, precision, recall, F1-score, and area under the curve (AUC). Conclusion: The proposed model's thorough dataset integration and focus on average confidence ratings within classes improve the accuracy and effectiveness of clinical breast cancer diagnosis. This study introduces a novel methodology that takes advantage of an ensemble model and rigorous evaluation standards to substantially improve the accuracy and dependability of breast cancer diagnostics, specifically in the detection of microcalcifications.
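
For illustration, a minimal sketch of a soft-voting ensemble of ImageNet-pretrained backbones adapted for a two-class microcalcification classifier is shown below. The backbone list, two-class head, input size, and averaging rule are assumptions for the sketch, not the authors' exact configuration.

```python
# Minimal sketch: soft-voting ensemble of ImageNet-pretrained CNNs for a
# binary (microcalcification vs. normal) mammogram patch classifier.
# The backbone list, two-class head, and averaging rule are assumptions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # microcalcification vs. normal (assumed)

def build_backbones():
    """Load pretrained backbones and replace their classifier heads."""
    resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    resnet.fc = nn.Linear(resnet.fc.in_features, NUM_CLASSES)

    googlenet = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
    googlenet.fc = nn.Linear(googlenet.fc.in_features, NUM_CLASSES)

    vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
    vgg.classifier[6] = nn.Linear(vgg.classifier[6].in_features, NUM_CLASSES)

    alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    alexnet.classifier[6] = nn.Linear(alexnet.classifier[6].in_features, NUM_CLASSES)

    return [resnet, googlenet, vgg, alexnet]

@torch.no_grad()
def ensemble_predict(models_list, batch):
    """Average per-model softmax probabilities (soft voting)."""
    probs = [torch.softmax(m.eval()(batch), dim=1) for m in models_list]
    mean_probs = torch.stack(probs).mean(dim=0)
    confidence, label = mean_probs.max(dim=1)   # per-image confidence score
    return label, confidence

if __name__ == "__main__":
    ensemble = build_backbones()
    x = torch.randn(1, 3, 224, 224)             # dummy 224x224 RGB patch
    label, conf = ensemble_predict(ensemble, x)
    print(label.item(), conf.item())
```

The averaged softmax value plays the role of the per-class confidence score discussed in the abstract.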

2.
PLoS One ; 19(5): e0302196, 2024.
Article in English | MEDLINE | ID: mdl-38820435

ABSTRACT

Web applications are important for various online businesses and operations because of their platform stability and low operation cost. The increasing usage of Internet-of-Things (IoT) devices within a network has contributed to the rise of network intrusion issues caused by malicious Uniform Resource Locators (URLs). Generally, malicious URLs are initiated to promote scams, attacks, and frauds, which can lead to high-risk intrusion. Several methods have been developed to detect malicious URLs in previous works, including random forest, regression, LightGBM, and others reported in the literature. However, most of the previous works focused on binary classification of malicious URLs and were tested on limited URL datasets; the detection of malicious URLs therefore remains a challenging and open research task. Hence, this work proposes a stacking-based ensemble classifier to perform multi-class classification of malicious URLs on larger URL datasets to justify the robustness of the proposed method. This study focuses on obtaining lexical features directly from the URL to identify malicious websites. The proposed stacking-based ensemble classifier is then developed by integrating Random Forest, XGBoost, LightGBM, and CatBoost. In addition, hyperparameter tuning was performed using the Randomized Search method to optimize the proposed classifier. The proposed stacking-based ensemble classifier aims to take advantage of the performance of each machine learning model and aggregate their outputs to improve prediction accuracy. The classification accuracies of the machine learning models when applied individually are 93.6%, 95.2%, 95.7% and 94.8% for Random Forest, XGBoost, LightGBM, and CatBoost, respectively. The proposed stacking-based ensemble classifier showed significant results in classifying four classes of URLs (phishing, malware, defacement, and benign) with an average accuracy of 96.8% when benchmarked against previous works.
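
A minimal sketch of this kind of stacking pipeline over lexical URL features follows. The feature set, class encoding, toy URLs, and search ranges are illustrative assumptions, and the xgboost, lightgbm, and catboost packages are assumed to be installed.

```python
# Minimal sketch: stacking-based ensemble for multi-class URL classification
# from lexical features. Feature set, class encoding, and search ranges are
# illustrative assumptions, not the study's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

def lexical_features(url: str) -> list:
    """Toy lexical features computed directly from the URL string."""
    return [len(url), url.count("."), url.count("-"), url.count("/"),
            sum(c.isdigit() for c in url), int("@" in url)]

# Assumed class encoding: 0=benign, 1=phishing, 2=malware, 3=defacement.
urls = [
    "https://www.example.com/index.html",
    "http://paypa1-login.secure-update.xyz/verify?id=123",
    "http://203.0.113.7/download/setup.exe",
    "http://myblog.net/?page=../../etc/passwd",
]
X = np.array([lexical_features(u) for u in urls])
y = np.array([0, 1, 2, 3])

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier()),
        ("xgb", XGBClassifier(eval_metric="mlogloss")),
        ("lgbm", LGBMClassifier()),
        ("cat", CatBoostClassifier(verbose=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
)

# Randomized search over a small illustrative grid; fit it on a real, larger
# labelled URL dataset rather than the toy rows above.
search = RandomizedSearchCV(
    stack,
    param_distributions={"rf__n_estimators": [100, 300, 500],
                         "xgb__max_depth": [4, 6, 8]},
    n_iter=5, cv=3, scoring="accuracy",
)
# search.fit(X_train, y_train); print(search.best_score_)
```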


Subjects
Machine Learning, Computer Security, Internet of Things, Algorithms
3.
Parasit Vectors ; 17(1): 188, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627870

ABSTRACT

BACKGROUND: Malaria is a serious public health concern worldwide. Early and accurate diagnosis is essential for controlling the disease's spread and avoiding severe health complications. Manual examination of blood smear samples by skilled technicians is a time-consuming aspect of the conventional malaria diagnosis toolbox. Malaria persists in many parts of the world, emphasising the urgent need for sophisticated and automated diagnostic instruments to expedite the identification of infected cells, thereby facilitating timely treatment and reducing the risk of disease transmission. This study aims to introduce a more lightweight and faster model, with improved accuracy, for diagnosing malaria using a YOLOv4 (You Only Look Once v. 4) deep learning object detector. METHODS: The YOLOv4 model is modified using direct layer pruning and backbone replacement. Layer pruning involves the removal and individual analysis of residual blocks within the C3, C4 and C5 (C3-C5) Res-block bodies of the backbone architecture. The CSP-DarkNet53 backbone is also replaced with a shallower ResNet50 network for enhanced feature extraction. The performance metrics of the models are compared and analysed. RESULTS: The modified models outperform the original YOLOv4 model. The YOLOv4-RC3_4 model, with residual blocks pruned from the C3 and C4 Res-block bodies, achieves the highest mean average precision (mAP) of 90.70%. This mAP is >9% higher than that of the original model, saving approximately 22% of the billion floating point operations (B-FLOPS) and 23 MB in model size. The findings indicate that the YOLOv4-RC3_4 model also performs better, with an increase of 9.27% in detecting infected cells, upon pruning the redundant layers from the C3 Res-block bodies of the CSP-DarkNet53 backbone. CONCLUSIONS: The results of this study highlight the use of the YOLOv4 model for detecting infected red blood cells. Pruning the residual blocks from the Res-block bodies helps to determine which Res-block bodies contribute the most and least, respectively, to the model's performance. Our method has the potential to revolutionise malaria diagnosis and pave the way for novel deep learning-based bioinformatics solutions. Developing an effective and automated process for diagnosing malaria will considerably contribute to global efforts to combat this debilitating disease. We have shown that removing undesirable residual blocks can reduce the size of the model and its computational complexity without compromising its precision.
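
To illustrate the pruning idea only, the sketch below removes trailing residual blocks from a ResNet-style backbone and reports the parameter reduction. It uses torchvision's ResNet-50 as a stand-in, not the authors' YOLOv4/CSP-DarkNet53 implementation, and the choice of which blocks to drop is an assumption.

```python
# Minimal sketch: removing residual blocks from a ResNet-style backbone and
# measuring the parameter reduction. A stand-in for the pruning idea only;
# not the authors' YOLOv4 / CSP-DarkNet53 pipeline.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def count_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

model = resnet50(weights=None)
print(f"original parameters: {count_params(model)/1e6:.1f} M")

# Drop the last residual blocks of layer3 and layer4 (loosely analogous to
# pruning blocks from the C4/C5 Res-block bodies). The first block of each
# stage is kept because it performs the stride/channel change.
model.layer3 = nn.Sequential(*list(model.layer3.children())[:-2])
model.layer4 = nn.Sequential(*list(model.layer4.children())[:-1])
print(f"pruned parameters:   {count_params(model)/1e6:.1f} M")

# The pruned backbone still produces a valid forward pass.
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)
```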


Subjects
Deep Learning, Delayed Emergence from Anesthesia, Malaria, Animals, Benchmarking, Computational Biology, Malaria/diagnosis
4.
PeerJ Comput Sci ; 10: e1985, 2024.
Article in English | MEDLINE | ID: mdl-38660193

ABSTRACT

Background: This study introduces a novel approach for predicting occupational injury severity by leveraging deep learning-based text classification techniques to analyze unstructured narratives. Unlike conventional methods that rely on structured data, our approach recognizes the richness of information within injury narrative descriptions, with the aim of extracting valuable insights for improved occupational injury severity assessment. Methods: Natural language processing (NLP) techniques were harnessed to preprocess the occupational injury narratives obtained from the US Occupational Safety and Health Administration (OSHA) from January 2015 to June 2023. The methodology involved meticulous preprocessing of textual narratives to standardize text and eliminate noise, followed by the integration of Term Frequency-Inverse Document Frequency (TF-IDF) and Global Vector (GloVe) word embeddings for effective text representation. The proposed predictive model adopts a Bidirectional Long Short-Term Memory (Bi-LSTM) architecture and is further refined through model optimization, including random search over hyperparameters and an in-depth feature importance analysis. The optimized Bi-LSTM model was compared and validated against other machine learning classifiers, namely naïve Bayes, support vector machine, random forest, decision tree, and K-nearest neighbors. Results: The proposed optimized Bi-LSTM model showed superior predictive performance, with an accuracy of 0.95 for hospitalization and 0.98 for amputation cases, together with faster model processing times. Interestingly, the feature importance analysis revealed predictive keywords related to the causal factors of occupational injuries, thereby providing valuable insights that enhance model interpretability. Conclusion: Our proposed optimized Bi-LSTM model offers safety and health practitioners an effective tool to support proactive workplace safety measures, thereby contributing to business productivity and sustainability. This study lays the foundation for further exploration of predictive analytics in the occupational safety and health domain.
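
A minimal sketch of the Bi-LSTM classifier described above follows. The vocabulary size, sequence length, layer widths, and binary target are illustrative assumptions; in the study the embedding layer would be initialised with pre-trained GloVe vectors, whose loading is omitted here.

```python
# Minimal sketch: a Bi-LSTM classifier for injury narratives. Hyperparameters
# and the binary target are assumptions; GloVe initialisation is omitted.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMBED_DIM = 20000, 200, 100

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),    # GloVe weights would go here
    layers.Bidirectional(layers.LSTM(64)),
    layers.Dropout(0.3),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),      # e.g. hospitalization yes/no
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

# Toy usage: real narratives would be tokenised and padded to MAX_LEN first.
X = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
y = np.random.randint(0, 2, size=(8,))
model.fit(X, y, epochs=1, verbose=0)
```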

5.
PeerJ Comput Sci ; 10: e1943, 2024.
Article in English | MEDLINE | ID: mdl-38686003

ABSTRACT

Background: Maintaining machines effectively continues to be a challenge for industrial organisations, which frequently employ reactive or premeditated methods. Recent research has begun to shift its attention towards the application of Predictive Maintenance (PdM) and Digital Twin (DT) principles in order to improve maintenance processes. PdM technologies have the capacity to significantly improve profitability, safety, and sustainability in various industries. Significantly, precise equipment estimation, enabled by robust supervised learning techniques, is critical to the efficacy of PdM in conjunction with DT development. This study underscores the application of PdM and DT, exploring their transformative potential across domains demanding real-time monitoring. Specifically, it delves into emerging fields in healthcare, utilities (smart water management), and agriculture (smart farming), aligning with the latest research frontiers in these areas. Methodology: Employing the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria, this study highlights, from 34 scholarly articles, the diverse modelling techniques shaping asset lifetime evaluation within the PdM context. Results: The study revealed four important findings: the various PdM and DT modelling techniques, their diverse approaches, their predictive outcomes, and the implementation of maintenance management. These findings align with the ongoing exploration of emerging applications in healthcare, utilities (smart water management), and agriculture (smart farming). In addition, the review sheds light on the critical functions of PdM and DT, emphasising their ability to drive transformative change in the face of dynamic industrial challenges. The results highlight these methodologies' flexibility and applicability across many industries, providing vital insights into their potential to revolutionise asset management and maintenance practice for real-time monitoring. Conclusions: This systematic review therefore provides a current and essential resource for academics, practitioners, and policymakers seeking to refine PdM strategies and expand the applicability of DT in diverse industrial sectors.

6.
J Magn Reson Imaging ; 59(4): 1242-1255, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37452574

ABSTRACT

BACKGROUND: Increased afterload in aortic stenosis (AS) induces left ventricle (LV) remodeling to preserve a normal ejection fraction. This compensatory response can become maladaptive and manifest as motion abnormality. It is a clinical challenge to identify contractile and relaxation dysfunction during the early subclinical stage to prevent irreversible deterioration. PURPOSE: To evaluate the changes of regional wall dynamics in the 3D + time domain as remodeling progresses in AS. STUDY TYPE: Retrospective. POPULATION: A total of 31 AS patients with reduced and preserved ejection fraction (14 AS_rEF: 7 male, 66.5 [7.8] years old; 17 AS_pEF: 12 male, 67.0 [6.0] years old) and 15 healthy subjects (6 male, 61.0 [7.0] years old). FIELD STRENGTH/SEQUENCE: 1.5 T magnetic resonance imaging/steady-state free precession and late-gadolinium enhancement sequences. ASSESSMENT: Individual LV models were reconstructed in the 3D + time domain, and motion metrics including wall thickening (TI), dyssynchrony index (DI), contraction rate (CR), and relaxation rate (RR) were automatically extracted and associated with the presence of scarring and remodeling. STATISTICAL TESTS: Shapiro-Wilk: data normality; Kruskal-Wallis: significant difference (P < 0.05); ICC and CV: variability; Mann-Whitney: effect size. RESULTS: The AS_rEF group showed distinct deterioration of cardiac motion compared to the AS_pEF and healthy groups (TI_AS_rEF: 0.92 [0.85] mm, TI_AS_pEF: 5.13 [1.99] mm, TI_healthy: 3.61 [1.09] mm, ES: 0.48-0.83; DI_AS_rEF: 17.11 [7.89]%, DI_AS_pEF: 6.39 [4.04]%, DI_healthy: 5.71 [1.87]%, ES: 0.32-0.85; CR_AS_rEF: 8.69 [6.11] mm/second, CR_AS_pEF: 16.48 [6.70] mm/second, CR_healthy: 10.82 [4.57] mm/second, ES: 0.29-0.60; RR_AS_rEF: 8.45 [4.84] mm/second; RR_AS_pEF: 13.49 [8.56] mm/second, RR_healthy: 9.31 [2.48] mm/second, ES: 0.14-0.43). The differences in the motion metrics between the healthy and AS_pEF groups were not significant (P-value = 0.16-0.72). The AS_rEF group was dominated by eccentric hypertrophy (47.1%) with concomitant scarring. Conversely, the AS_pEF group was dominated by concentric remodeling and hypertrophy (71.4%), which could demonstrate hyperkinesia with slightly greater wall dyssynchrony than in healthy subjects. Dysfunction of LV mechanics corresponded to the presence of myocardial scarring (54.9% in AS), which reverted the compensatory mechanisms initiated and performed by LV remodeling. DATA CONCLUSION: The proposed 3D + time modeling technique may distinguish regional motion abnormalities between AS_pEF, AS_rEF, and healthy cohorts, aiding clinical diagnosis and monitoring of AS progression. Subclinical myocardial dysfunction is evident in early AS despite normal EF. LEVEL OF EVIDENCE: 4. TECHNICAL EFFICACY: Stage 1.


Subjects
Aortic Valve Stenosis, Contrast Media, Humans, Male, Child, Retrospective Studies, Cicatrix, Gadolinium, Magnetic Resonance Imaging, Aortic Valve Stenosis/diagnostic imaging, Hypertrophy, Left Ventricular Function, Stroke Volume, Ventricular Remodeling
7.
Environ Technol ; : 1-14, 2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37953730

ABSTRACT

Using natural deep eutectic solvents (NADESs) as green reagents is a step toward environmentally friendly and sustainable technology. This study screened three natural DESs, developed using a quaternary ammonium salt and organic acids, for their capability to extract nickel ions from contaminated mangrove soil: ChCl:acetic acid (ChCl-AceA), ChCl:levulinic acid (ChCl-LevA), and ChCl:ethylene glycol (ChCl-Eg), each at a 1:2 molar ratio. Batch experiments were performed to assess the impact of operating parameters such as washing-agent concentration, solution pH, and contact time on NADES performance in dissolving Ni ions. The optimal soil-washing conditions for metal removal were concentrations of 30% and 15%, a 1:5 soil-to-liquid ratio, and pH 2 for ChCl-LevA and ChCl-AceA, respectively. A single washing removed 70.8% and 70.0% of Ni ions from the contaminated soil. The dissolution kinetics of Ni ion extraction by the NADESs were described using linear pseudo-kinetic and intraparticle mass-transfer diffusion models. The kinetic validation demonstrates a good fit between the experimental data and the pseudo-second-order Lagergren model. The model's maximum Ni dissolution capacities, Qe, are 51.56 mg g-1 and 52.00 mg g-1 for ChCl-LevA and ChCl-AceA, respectively. The synthesised natural-based DESs have the potential to be cost-effective, efficient, green alternatives to conventional solvent extraction of heavy metals.
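
For reference, the standard linearised pseudo-second-order (Lagergren-type) kinetic model commonly fitted in such dissolution studies is shown below; whether the authors used exactly this linearisation is an assumption.

```latex
% Standard linearised pseudo-second-order kinetic model.
% q_t and q_e: amount of Ni dissolved at time t and at equilibrium (mg g^{-1});
% k_2: pseudo-second-order rate constant (g mg^{-1} min^{-1}).
\[
  \frac{t}{q_t} = \frac{1}{k_2\, q_e^{2}} + \frac{t}{q_e},
  \qquad \text{so a plot of } t/q_t \text{ versus } t \text{ gives slope } 1/q_e .
\]
```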

9.
PeerJ Comput Sci ; 9: e1306, 2023.
Article in English | MEDLINE | ID: mdl-37346549

ABSTRACT

Background: The environment has been significantly impacted by rapid urbanization, creating a need to track climate change and pollution indicators. The Fourth Industrial Revolution (4IR) offers a potential solution to manage these impacts efficiently. Smart city ecosystems can provide well-designed, sustainable, and safe cities that enable holistic climate change and global warming solutions through various community-centred initiatives, including smart planning techniques, smart environment monitoring, and smart governance. An air quality intelligence platform, which operates as a complete measurement site for monitoring and governing air quality, has shown promising results in providing actionable insights. This article aims to highlight the potential of machine learning models in predicting air quality, providing data-driven strategic and sustainable solutions for smart cities. Methods: This study proposed an end-to-end air quality predictive model for smart city applications, utilizing four machine learning techniques and two deep learning techniques: AdaBoost, SVR, RF, KNN, MLP regressor, and LSTM. The study was conducted in four urban areas in Selangor, Malaysia: Petaling Jaya, Banting, Klang, and Shah Alam. The model considered the air quality data of pollution markers such as PM2.5, PM10, O3, and CO. Additionally, meteorological data including wind speed and wind direction were considered, and their interactions with the pollutant markers were quantified. The study aimed to determine the correlation variance of the dependent variable in predicting air pollution and proposed a feature optimization process to reduce dimensionality and remove irrelevant features, enhancing PM2.5 prediction and improving the existing LSTM model. The study estimates the concentration of pollutants in the air based on training and highlights the contribution of feature optimization to air quality prediction through feature dimension reduction. Results: The results of predicting the concentration of pollutants (PM2.5, PM10, O3, and CO) in the air are presented as R2 and RMSE. In predicting PM10 and PM2.5 concentrations, LSTM performed the best overall, with high R2 values in the four study areas: 0.998, 0.995, 0.918, and 0.993 at the Banting, Petaling, Klang, and Shah Alam stations, respectively. The study indicated that among the studied pollution markers, PM2.5, PM10, NO2, wind speed, and humidity are the most important elements to monitor. By reducing the number of features used in the model, the proposed feature optimization process can make the model more interpretable and provide insights into the most critical factors affecting air quality. Findings from this study can aid policymakers in understanding the underlying causes of air pollution and in developing more effective smart strategies for reducing pollution levels.
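
A minimal sketch of correlation-based feature screening before fitting a PM2.5 regressor is given below, illustrating the kind of feature optimization described above. The synthetic data, column names, correlation threshold, and choice of a random forest regressor are assumptions, not the study's exact setup.

```python
# Minimal sketch: correlation-based feature screening before PM2.5 regression.
# Synthetic data and threshold are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "PM10": rng.normal(50, 10, n),
    "O3": rng.normal(30, 5, n),
    "CO": rng.normal(1.0, 0.2, n),
    "NO2": rng.normal(20, 4, n),
    "wind_speed": rng.normal(3, 1, n),
    "humidity": rng.normal(70, 10, n),
})
df["PM2.5"] = 0.6 * df["PM10"] + 0.2 * df["NO2"] + rng.normal(0, 3, n)

# Keep only features whose absolute correlation with PM2.5 exceeds a threshold.
features = [c for c in df.columns if c != "PM2.5"]
corr = df[features + ["PM2.5"]].corr()["PM2.5"].drop("PM2.5")
selected = corr[corr.abs() > 0.2].index.tolist()
print("selected features:", selected)

X_train, X_test, y_train, y_test = train_test_split(
    df[selected], df["PM2.5"], random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```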

10.
PeerJ Comput Sci ; 9: e1279, 2023.
Article in English | MEDLINE | ID: mdl-37346641

ABSTRACT

Background: The advancement of biomedical research generates myriad healthcare-relevant data, including medical records and medical device maintenance information. The COVID-19 pandemic significantly affected the global mortality rate, creating an enormous demand for medical devices. As information technology has advanced, the concept of intelligent healthcare has steadily gained prominence. Smart healthcare utilises a new generation of information technologies, such as the Internet of Things (IoT), big data, cloud computing, and artificial intelligence, to completely transform the traditional medical system. With the intention of presenting the concept of smart healthcare, a predictive model is proposed to predict medical device failure for intelligent management of healthcare services. Methods: Present healthcare device management can be improved with a predictive machine learning model that prognosticates the tendency of medical device failures, moving toward smart healthcare. The predictive model is developed based on 8,294 critical medical devices from 44 different types of equipment extracted from 15 healthcare facilities in Malaysia. The model classifies each device into three classes: (i) class 1, where the device is unlikely to fail within the first 3 years of purchase; (ii) class 2, where the device is likely to fail within 3 years from the purchase date; and (iii) class 3, where the device is likely to fail more than 3 years after purchase. The goal is to establish a precise maintenance schedule and reduce maintenance and resource costs based on the time to the first failure event. Machine learning and deep learning techniques were compared, and the most robust model for smart healthcare was proposed. Results: This study compares five machine learning algorithms and three deep learning optimizers. The best optimized predictive models are based on an ensemble classifier and the SGDM optimizer, respectively. The ensemble classifier model produces 77.90%, 87.60%, and 75.39% for accuracy, specificity, and precision, compared to 70.30%, 83.71%, and 67.15% for the deep learning model. The ensemble classifier model improves to 79.50%, 88.36%, and 77.43% for accuracy, specificity, and precision after significant features are identified. The results conclude that although machine learning has better accuracy than deep learning, it requires more training time: 11.49 min compared with 1 min 5 s for deep learning. The model accuracy could be further improved by introducing unstructured data from maintenance notes, which is left as future work because dealing with text data is time-consuming. The proposed model has been shown to improve the devices' maintenance strategy, with a cost reduction of approximately 326,330.88 Malaysian Ringgit (MYR) per year. Therefore, the maintenance cost would decrease drastically if this smart predictive model were included in the healthcare management system.
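
As a small illustration, the sketch below derives the three maintenance classes described above from purchase and first-failure dates. The 3-year threshold follows the text; the column names, the mapping of never-failed devices to class 1, and the toy records are assumptions.

```python
# Minimal sketch: deriving the three maintenance classes from purchase and
# first-failure dates. Column names and the treatment of never-failed devices
# are assumptions.
import pandas as pd

def failure_class(purchase_date, first_failure_date, horizon_years=3):
    """Class 1: no recorded failure within the horizon; Class 2: failed within
    it; Class 3: failed, but only after the horizon."""
    if pd.isna(first_failure_date):
        return 1
    years_to_failure = (first_failure_date - purchase_date).days / 365.25
    return 2 if years_to_failure <= horizon_years else 3

devices = pd.DataFrame({
    "purchase": pd.to_datetime(["2015-01-10", "2016-06-01", "2017-03-15"]),
    "first_failure": pd.to_datetime(["2016-02-20", None, "2021-08-01"]),
})
devices["class"] = [failure_class(p, f) for p, f in
                    zip(devices["purchase"], devices["first_failure"])]
print(devices)
```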

11.
Front Bioeng Biotechnol ; 11: 1164655, 2023.
Article in English | MEDLINE | ID: mdl-37122858

ABSTRACT

Knee osteoarthritis is one of the most common musculoskeletal diseases and is usually diagnosed with medical imaging techniques. Conventionally, case identification using plain radiography is practiced. However, knee osteoarthritis is a three-dimensional complexity; hence, magnetic resonance imaging is the ideal modality to reveal hidden osteoarthritis features from a three-dimensional view. In this work, the feasibility of well-known convolutional neural network (CNN) structures (ResNet, DenseNet, VGG, and AlexNet) to distinguish knees with and without osteoarthritis (OA) is investigated. Using 3D convolutional layers, we demonstrated the potential of 3D convolutional neural networks of 13 different architectures in knee osteoarthritis diagnosis. We used transfer learning by transforming 2D pre-trained weights into 3D as initial weights for training the 3D models. The performance of the models was compared and evaluated based on the performance metrics [balanced accuracy, precision, F1 score, and area under the receiver operating characteristic (AUC) curve]. This study suggested that transfer learning indeed enhanced the performance of the models, especially for the ResNet and DenseNet models. Transfer learning-based models presented promising results, with ResNet34 achieving the best overall accuracy of 0.875 and an F1 score of 0.871. The results also showed that shallower networks yielded better performance than deeper neural networks, as demonstrated by ResNet18, DenseNet121, and VGG11 with AUC values of 0.945, 0.914, and 0.928, respectively. This encourages the application of clinical diagnostic aids for knee osteoarthritis using 3D CNNs even under limited hardware conditions.
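
One common way to transfer 2D pre-trained weights into a 3D network is to "inflate" each 2D kernel along a new depth axis and rescale it; a minimal sketch follows. Whether the paper used exactly this rule is an assumption, and the depth of 3 is illustrative.

```python
# Minimal sketch: inflating a 2D pre-trained convolution kernel into a 3D
# kernel by replicating along the depth axis and rescaling. The inflation
# rule and the depth of 3 are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

weights_2d = resnet18(weights=ResNet18_Weights.DEFAULT).conv1.weight.data
out_ch, in_ch, kh, kw = weights_2d.shape            # (64, 3, 7, 7)

depth = 3
conv3d = nn.Conv3d(in_ch, out_ch, kernel_size=(depth, kh, kw),
                   stride=(1, 2, 2), padding=(1, 3, 3), bias=False)

# Repeat the 2D kernel along the new depth dimension and divide by the depth
# so the response to a constant volume matches the original 2D response.
inflated = weights_2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth
conv3d.weight.data.copy_(inflated)

volume = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, D, H, W)
print(conv3d(volume).shape)
```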

12.
J Cardiovasc Transl Res ; 16(5): 1110-1122, 2023 10.
Article in English | MEDLINE | ID: mdl-37022611

ABSTRACT

Left ventricular adaptation can be a complex process under the influence of aortic stenosis (AS) and comorbidities. This study proposed and assessed the feasibility of using a motion-corrected personalized 3D + time LV modeling technique to evaluate the adaptive and maladaptive LV response to aid treatment decision-making. A total of 22 AS patients were analyzed and compared against 10 healthy subjects. The 3D + time analysis showed a highly distinct and personalized pattern of remodeling in individual AS patients, which is associated with comorbidities and fibrosis. Patients with AS alone showed better wall thickening and synchrony than those with comorbid hypertension. Ischemic heart disease in AS caused impaired wall thickening, synchrony, and systolic function. Apart from showing significant correlations with echocardiography and clinical MRI measurements (r: 0.70-0.95; p < 0.01), the proposed technique helped in detecting subclinical and subtle LV dysfunction, providing a better approach to evaluating AS patients for specific treatment, surgical planning, and follow-up recovery.


Subjects
Aortic Valve Stenosis, Left Ventricular Dysfunction, Humans, Left Ventricular Function/physiology, Heart Ventricles/diagnostic imaging, Magnetic Resonance Imaging, Echocardiography, Left Ventricular Dysfunction/diagnostic imaging, Left Ventricular Dysfunction/etiology
13.
J Healthc Eng ; 2023: 3136511, 2023.
Article in English | MEDLINE | ID: mdl-36860328

ABSTRACT

Medical device reliability is the ability of medical devices to keep functioning over time and is indispensable for ensuring service delivery to patients. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) technique was employed in May 2021 to evaluate existing reporting guidelines on medical device reliability. The systematic search was conducted in eight different databases, including Web of Science, Science Direct, Scopus, IEEE Explorer, Emerald, MEDLINE Complete, Dimensions, and Springer Link, with 36 articles shortlisted from the year 2010 to May 2021. This study aims to epitomize the existing literature on medical device reliability, scrutinize its outcomes, investigate the parameters affecting medical device reliability, and determine the scientific research gaps. The result of the systematic review lists three main topics in medical device reliability: risk management, performance prediction using artificial intelligence or machine learning, and management systems. The challenges of medical device reliability assessment are inadequate maintenance cost data, the selection of significant input parameters, difficulties accessing healthcare facilities, and limited age in service. Medical device systems are interconnected and interoperating, which increases the complexity of assessing their reliability. To the best of our knowledge, although machine learning has become popular in predicting medical device performance, the existing models are only applicable to selected devices such as infant incubators, syringe pumps, and defibrillators. Despite the importance of medical device reliability assessment, there is no explicit protocol or predictive model to anticipate the situation. The problem worsens with the unavailability of a comprehensive assessment strategy for critical medical devices. Therefore, this study reviews the current state of critical device reliability in healthcare facilities. The present knowledge can be improved by adding new scientific data with emphasis on critical medical devices used in healthcare services.


Subjects
Artificial Intelligence, Health Services, Infant, Humans, Reproducibility of Results, Health Facilities, Delivery of Health Care
14.
Comput Intell Neurosci ; 2023: 4208231, 2023.
Article in English | MEDLINE | ID: mdl-36756163

ABSTRACT

Cardiac diseases are one of the key causes of death around the globe. The number of heart patients increased considerably during the pandemic. Therefore, it is crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for the assessment of cardiac medical images. The left ventricle is a vital part of the cardiovascular system, where the boundary and size play a significant role in the evaluation of cardiac function. Owing to automatic segmentation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also presents the details of the network architectures, the software and hardware used for training, publicly available cardiac image datasets, and self-prepared dataset details. A summary of the evaluation metrics and the results reported by different researchers is also presented. Finally, all this information is summarized and discussed in order to help readers understand the motivation and methodology of the various deep learning models, as well as to explore potential solutions to future challenges in LV segmentation.


Subjects
Deep Learning, Heart Diseases, Humans, Heart Ventricles/diagnostic imaging, Heart, Neural Networks (Computer), Magnetic Resonance Imaging, Computer-Assisted Image Processing/methods
15.
Article in English | MEDLINE | ID: mdl-36360843

ABSTRACT

Forecasting the severity of occupational injuries should be a top priority for all industries. The use of machine learning is theoretically valuable for assisting predictive analysis; thus, this study proposes a feature-optimized predictive model for anticipating occupational injury severity. A public database of 66,405 occupational injury records from OSHA was analyzed using five machine learning models: Support Vector Machine, K-Nearest Neighbors, Naïve Bayes, Decision Tree, and Random Forest. In the model comparison, Random Forest outperformed the other models with higher accuracy and F1-score, highlighting the potential of ensemble learning as a more accurate prediction approach in the field of occupational injury. In constructing the model, this study also proposed a feature optimization technique that revealed the three most important features for developing the model: 'nature of injury', 'type of event', and 'affected body part'. By redeveloping and optimizing the model with hyperparameter tuning, the accuracy of the Random Forest model was improved by 0.5%, reaching 0.895 and 0.954 for the prediction of hospitalization and amputation, respectively. Feature optimization is essential in providing insight to safety and health practitioners for future injury corrective and preventive strategies. This study has shown promising potential for smart workplace surveillance.
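
A minimal sketch of this workflow, randomized hyperparameter search over a Random Forest followed by inspection of feature importances, is shown below. The synthetic dataset, feature names, and search ranges are illustrative assumptions, not the OSHA data used in the study.

```python
# Minimal sketch: Random Forest tuning with randomized search plus feature
# importance inspection. Data, feature names, and ranges are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           random_state=0)
feature_names = ["nature_of_injury", "type_of_event", "affected_body_part",
                 "source", "industry", "day_of_week", "age_group", "state"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [100, 300, 500],
                         "max_depth": [None, 10, 20],
                         "min_samples_leaf": [1, 2, 5]},
    n_iter=10, cv=3, scoring="f1", random_state=0)
search.fit(X_train, y_train)

best = search.best_estimator_
print("held-out accuracy:", round(best.score(X_test, y_test), 3))
for name, imp in sorted(zip(feature_names, best.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:22s} {imp:.3f}")
```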


Subjects
Occupational Injuries, Humans, Occupational Injuries/epidemiology, Occupational Injuries/prevention & control, Bayes Theorem, Algorithms, Workplace, Machine Learning, Support Vector Machine
17.
Front Public Health ; 10: 984099, 2022.
Article in English | MEDLINE | ID: mdl-36187621

ABSTRACT

Workplace accidents can cause catastrophic losses to companies, including human injuries and fatalities. Occupational injury reports may provide a detailed description of how incidents occurred. Thus, the narrative is a useful source of information for extracting, classifying, and analyzing occupational injury. This study provides a systematic review of text mining and Natural Language Processing (NLP) applications for extracting text narratives from occupational injury reports. A systematic search was conducted through multiple databases, including Scopus, PubMed, and Science Direct. Only original studies that examined the application of machine and deep learning-based Natural Language Processing models for occupational injury analysis were incorporated in this study. A total of 27 out of 210 articles were reviewed by adopting the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). This review highlighted that various machine and deep learning-based NLP models, such as K-means, Naïve Bayes, Support Vector Machine, Decision Tree, and K-Nearest Neighbors, have been applied to predict occupational injury. On top of these models, deep neural networks have also been used to classify the type of accidents and identify the causal factors. However, there is a paucity of work using deep learning models to extract information from occupational injury reports, because these techniques are relatively recent and are only beginning to make inroads into decision-making in occupational safety and health as a whole. Despite that, this paper believes there is a large and promising potential to explore the application of NLP and text-based analytics in this occupational injury research field. Therefore, improving data balancing techniques and developing an automated decision-making support system for occupational injury by applying deep learning-based NLP models are the recommendations for future research.


Subjects
Occupational Injuries, Bayes Theorem, Data Mining/methods, Humans, Machine Learning, Natural Language Processing
18.
Front Public Health ; 10: 907280, 2022.
Article in English | MEDLINE | ID: mdl-36033781

ABSTRACT

Due to urbanization, solid waste pollution is an increasing concern for rivers, potentially threatening human health, ecological integrity, and ecosystem services. Riverine management in urban landscapes requires best management practices, since the river is a vital component of urban ecological civilization and it is imperative to synchronize urban development with river protection. Thus, the implementation of proper and innovative measures is vital to control garbage pollution in rivers. A robot that cleans waste autonomously can be a good solution for managing river pollution efficiently. Identifying and obtaining precise positions of garbage are the most crucial parts of the visual system for a cleaning robot. Computer vision has paved a way for computers to understand and interpret surrounding objects. The development of an accurate computer vision system is a vital step toward a robotic platform, since this is the front-end observation system that precedes the subsequent manipulation and grasping systems. The scope of this work is to acquire visual information about floating garbage on the river, which is vital in building a robotic platform for river-cleaning robots. In this paper, an automated detection system based on an improved You Only Look Once (YOLO) model is developed to detect floating garbage under various conditions, such as fluctuating illumination, complex backgrounds, and occlusion. The proposed object detection model has been shown to promote rapid convergence, which shortens the training time, and to improve detection accuracy by strengthening the non-linear feature extraction process. The results showed that the proposed model achieved a mean average precision (mAP) value of 89%. Hence, the proposed model is considered feasible for identifying five classes of garbage: plastic bottles, aluminum cans, plastic bags, styrofoam, and plastic containers.
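
For context on the mAP figure, a minimal sketch of the intersection-over-union (IoU) overlap measure that underlies detection evaluation is given below; the box coordinates are illustrative values, not data from the study.

```python
# Minimal sketch: intersection-over-union (IoU) between a predicted and a
# ground-truth box, the overlap measure underlying mAP evaluation.
# Boxes use (x1, y1, x2, y2) corner format; values are illustrative.
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (zero area if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

predicted = (48, 30, 120, 95)       # e.g. a detected plastic bottle
ground_truth = (50, 32, 118, 100)
print(f"IoU = {iou(predicted, ground_truth):.2f}")  # a detection typically
                                                    # counts as correct at IoU >= 0.5
```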


Subjects
Ecosystem, Solid Waste, Humans, Plastics, Rivers
19.
Front Public Health ; 10: 898254, 2022.
Article in English | MEDLINE | ID: mdl-35677770

ABSTRACT

In this review, current studies on hospital readmission due to COVID-19 infection were discussed, compared, and further evaluated in order to understand the current trends and progress in the mitigation of hospital readmissions due to COVID-19. The Boolean expression ("COVID-19" OR "covid19" OR "covid" OR "coronavirus" OR "Sars-CoV-2") AND ("readmission" OR "re-admission" OR "rehospitalization" OR "rehospitalisation") was used in five databases, namely Web of Science, Medline, Science Direct, Google Scholar and Scopus. From the search, a total of 253 articles were screened down to 26 articles. Overall, most of the research focuses on readmission rates rather than mortality rates. For the readmission rate, the lowest is 4.2%, reported by Ramos-Martínez et al. from Spain, and the highest is 19.9%, reported by Donnelly et al. from the United States. Most of the studies (n = 13) use an inferential statistical approach, while only one uses a machine learning approach. The data sizes range from 79 to 126,137. However, there is no specific guide for setting the most suitable data size for a single study, and the results cannot be compared in terms of accuracy, as all of the studies are regional and do not involve multi-region data. Logistic regression is prevalent in the research on risk factors for readmission after a COVID-19 admission, although each study reports different outcomes. From the word cloud, age is the most dominant risk factor for readmission, followed by diabetes, long length of stay, COPD, CKD, liver disease, metastatic disease, and CAD. A few future research directions have been proposed, including the utilization of machine learning in statistical analysis, investigation of dominant risk factors, experimental design of interventions to curb dominant risk factors, and increasing the scale of data collection from single-centered to multi-centered.


Subjects
COVID-19, Patient Readmission, COVID-19/epidemiology, Humans, Logistic Models, Machine Learning, Risk Factors, United States
20.
Front Public Health ; 10: 851553, 2022.
Article in English | MEDLINE | ID: mdl-35664109

ABSTRACT

Environmental issues such as pollution and climate change are impacts of globalization and have become debated issues among academics and key industry players. One of these environmental issues, air pollution, has been catching the attention of industrialists, researchers, and communities around the world. However, it has often been neglected until its impacts on human health become severe and, at times, irreversible. Human exposure to air pollutants such as particulate matter, sulfur dioxide, ozone, and carbon monoxide contributes to adverse health hazards that result in respiratory diseases, cardiorespiratory diseases, and cancers, and at worst can lead to death. This has led to a sharp increase in hospitalizations and emergency department visits, especially in areas with severe pollution that seriously affects human life and health. To address this alarming issue, a predictive model of air pollution is crucial for assessing the health impacts of air pollution. It is also critical for predicting the air quality index when assessing the risk contributed by air pollutant exposure. Hence, this systematic review explores existing studies on anticipating the impact of air quality on human health using advances in Artificial Intelligence (AI). From this extensive review, we highlight research gaps in this field that are worth investigating. Our study proposes the development of an AI-based integrated environmental and health impact assessment system using federated learning. It specifically aims to identify the association between health impacts and pollution based on socio-economic activities and to predict the Air Quality Index (AQI) for impact assessment. The output of the system will be utilized for hospital and healthcare services management and planning. The proposed solution is expected to accommodate the need to prioritize sensitive groups of the public during pollution seasons. Our findings will bring positive impacts to society in terms of improved healthcare service quality and environmental and health sustainability. The findings are beneficial to local authorities in healthcare or environmental monitoring institutions, especially in developing countries.


Subjects
Air Pollutants, Air Pollution, Air Pollutants/adverse effects, Air Pollutants/analysis, Air Pollution/adverse effects, Air Pollution/analysis, Artificial Intelligence, Health Impact Assessment, Humans, Particulate Matter/adverse effects, Particulate Matter/analysis