Results 1 - 20 of 545
1.
Methods Mol Biol ; 2834: 373-391, 2025.
Article in English | MEDLINE | ID: mdl-39312175

ABSTRACT

Developmental toxicity is a key human health endpoint, especially relevant for safeguarding maternal and child well-being, and it is receiving increasing attention from international regulatory bodies such as the US EPA (US Environmental Protection Agency) and ECHA (European Chemicals Agency). In this challenging scenario, non-test methods employing explainable artificial intelligence techniques can significantly help derive transparent predictive models whose results can be easily interpreted to assess the developmental toxicity of new chemicals at very early stages. To accomplish this task, we have developed web platforms such as TIRESIA and TISBE. Based on a benchmark dataset, TIRESIA employs an explainable artificial intelligence approach combined with SHAP analysis to unveil the molecular features driving the predicted developmental toxicity. Descending from TIRESIA, TISBE employs a larger dataset, an explainable artificial intelligence framework based on a fragment-based fingerprint encoding, a consensus classifier, and a new double top-down applicability domain. We report here some practical examples for getting started with TIRESIA and TISBE.
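
The workflow described here (a fingerprint-based classifier explained with SHAP) can be illustrated with a minimal sketch; this is not the TIRESIA/TISBE code, and the fragment-fingerprint matrix and toxicity labels below are synthetic stand-ins.

```python
# Minimal sketch (not TIRESIA/TISBE): a fingerprint-based toxicity classifier
# explained with SHAP. Features and labels are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 128)).astype(float)  # 128 hypothetical fragment bits
y = (X[:, 3] + X[:, 17] + rng.normal(0, 0.3, 500) > 1).astype(int)  # toy toxicity label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))

# SHAP values indicate which fragment bits push a prediction toward "toxic".
shap_values = shap.TreeExplainer(clf).shap_values(X_te)
```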


Subjects
Artificial Intelligence, Humans, Internet, Animals, Toxicity Tests/methods, Software
2.
Comput Biol Med ; 182: 109088, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39353296

ABSTRACT

Feature attribution methods can visually highlight specific input regions containing influential aspects affecting a deep learning model's prediction. Recently, the use of feature attribution methods in electrocardiogram (ECG) classification has been sharply increasing, as they assist clinicians in understanding the model's decision-making process and assessing the model's reliability. However, a careful study to identify suitable methods for ECG datasets has been lacking, leading researchers to select methods without a thorough understanding of their appropriateness. In this work, we conduct a large-scale assessment by considering eleven popular feature attribution methods across five large ECG datasets using a model based on the ResNet-18 architecture. Our experiments include both automatic evaluations and human evaluations. Annotated datasets were utilized for automatic evaluations and three cardiac experts were involved for human evaluations. We found that Guided Grad-CAM, particularly when its absolute values are utilized, achieves the best performance. When Guided Grad-CAM was utilized as the feature attribution method, cardiac experts confirmed that it can identify diagnostically relevant electrophysiological characteristics, although its effectiveness varied across the 17 different diagnoses that we have investigated.
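
As a pointer to the kind of attribution the abstract reports as best performing, the sketch below computes Guided Grad-CAM with absolute values using Captum; the study adapted ResNet-18 to ECG signals, so the stock torchvision model and image-shaped input here are assumptions.

```python
# Hedged sketch of Guided Grad-CAM with absolute attributions (the setting
# reported to work best). Model and input shape are placeholders, not the
# ECG-adapted ResNet-18 from the study.
import torch
from torchvision.models import resnet18
from captum.attr import GuidedGradCam

model = resnet18(weights=None).eval()
explainer = GuidedGradCam(model, layer=model.layer4)

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for an ECG-shaped input
target_class = 5                                      # hypothetical diagnosis index
attr = explainer.attribute(x, target=target_class)

saliency = attr.abs().sum(dim=1)  # |attributions| collapsed over channels
print(saliency.shape)
```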

3.
Sci Rep ; 14(1): 22797, 2024 10 01.
Article in English | MEDLINE | ID: mdl-39354009

ABSTRACT

Brain tumors, characterized by uncontrolled cell growth in the central nervous system, present substantial challenges in medical diagnosis and treatment. Early and accurate detection is essential for effective intervention. This study aims to enhance the detection and classification of brain tumors in Magnetic Resonance Imaging (MRI) scans using an innovative framework combining Vision Transformer (ViT) and Gated Recurrent Unit (GRU) models. We utilized primary MRI data from Bangabandhu Sheikh Mujib Medical College Hospital (BSMMCH) in Faridpur, Bangladesh. Our hybrid ViT-GRU model extracts essential features via ViT and identifies relationships between these features using GRU, addressing class imbalance and outperforming existing diagnostic methods. We extensively processed the dataset, trained the model using various optimizers (SGD, Adam, AdamW), and evaluated it through rigorous 10-fold cross-validation. Additionally, we incorporated Explainable Artificial Intelligence (XAI) techniques, namely Attention Map, SHAP, and LIME, to enhance the interpretability of the model's predictions. For the primary dataset, BrTMHD-2023, the ViT-GRU model achieved precision, recall, and F1-score of 97%. The highest accuracies obtained with the SGD, Adam, and AdamW optimizers were 81.66%, 96.56%, and 98.97%, respectively. Our model outperformed existing transfer learning models by 1.26%, as validated through comparative analysis and cross-validation. The proposed model also performs well on a separate brain tumor Kaggle dataset, achieving 96.08% accuracy and outperforming existing work on the same dataset. The proposed ViT-GRU framework significantly improves the detection and classification of brain tumors in MRI scans. The integration of XAI techniques enhances the model's transparency and reliability, fostering trust among clinicians and facilitating clinical application. Future work will expand the dataset and apply the findings to real-time diagnostic devices, advancing the field.
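
The hybrid idea (ViT-style patch tokens fed into a GRU) can be sketched compactly; this is an illustrative toy model, not the authors' architecture, and all layer sizes are assumptions.

```python
# Illustrative ViT->GRU hybrid sketch: patch tokens from a small transformer
# encoder are passed to a GRU whose final hidden state is classified.
import torch
import torch.nn as nn

class ViTGRU(nn.Module):
    def __init__(self, img_size=224, patch=16, dim=192, n_classes=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_tokens = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, dim))
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=4)
        self.gru = nn.GRU(dim, 128, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, dim)
        tokens = self.encoder(tokens + self.pos)                 # ViT-style features
        _, h = self.gru(tokens)                                  # GRU relates token features
        return self.head(h[-1])

logits = ViTGRU()(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 4])
```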


Subjects
Brain Neoplasms, Magnetic Resonance Imaging, Humans, Bangladesh, Magnetic Resonance Imaging/methods, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/classification, Brain Neoplasms/pathology, Artificial Intelligence, Algorithms, Image Interpretation, Computer-Assisted/methods
4.
Artif Intell Med ; 157: 102985, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39383708

ABSTRACT

Developing technology to assist medical experts in their everyday decision-making is currently a hot topic in the field of Artificial Intelligence (AI). This is especially true within the framework of Evidence-Based Medicine (EBM), where the aim is to facilitate the extraction of relevant information using natural language as a tool for mediating human-AI interaction. In this context, AI techniques can be beneficial in finding arguments for past decisions in evolution notes or patient journeys, especially when different doctors are involved in a patient's care. These documents report the decision-making process that led to the patient's treatment. Thus, applying Natural Language Processing (NLP) techniques has the potential to assist doctors in extracting arguments for a more comprehensive understanding of the decisions made. This work focuses on the explanatory argument identification step by framing the task as a Question Answering (QA) scenario in which clinicians ask questions to the AI model to assist them in identifying those arguments. In order to explore the capabilities of current AI-based language models, we present a new dataset which, unlike previous work: (i) includes not only explanatory arguments for the correct hypothesis, but also arguments for reasoning about the incorrectness of other hypotheses; and (ii) contains explanations originally written in Spanish by doctors to reason over cases from the Spanish Residency Medical Exams. Furthermore, this new benchmark allows us to set up a novel extractive task of identifying, within an argumentative text, the explanation written by medical doctors that supports the correct answer. An additional benefit of our approach lies in its ability to evaluate the extractive performance of language models using automatic metrics, which on the Antidote CasiMedicos dataset corresponds to a 74.47 F1 score. Comprehensive experimentation shows that our novel dataset and approach are effective in helping practitioners identify relevant evidence-based explanations for medical questions.
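
An extractive QA setup of the kind described can be prototyped with the Hugging Face pipeline; the Spanish QA model name below is an assumption for illustration, not necessarily the model evaluated on Antidote CasiMedicos, and the clinical text is invented.

```python
# Sketch of extractive QA over a doctor-written argumentative explanation:
# given a question, extract the span supporting the correct answer.
# The model name is an assumption; the context is a fabricated toy example.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="mrm8488/distill-bert-base-spanish-wwm-cased-finetuned-spa-squad2-es",
)

context = ("La paciente presenta dolor toracico opresivo con elevacion del ST; "
           "la angioplastia primaria es el tratamiento de eleccion porque reduce "
           "la mortalidad frente a la fibrinolisis.")
question = "Que argumento justifica la respuesta correcta?"

result = qa(question=question, context=context)
print(result["answer"], result["score"])  # extracted supporting span + confidence
```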

5.
J Neurooncol ; 2024 Oct 11.
Article in English | MEDLINE | ID: mdl-39392590

ABSTRACT

PURPOSE: Vestibular schwannomas (VSs) represent the most common cerebellopontine angle tumors, posing a challenge in preserving facial nerve (FN) function during surgery. We employed the Extreme Gradient Boosting machine learning classifier to predict long-term FN outcomes (classified as House-Brackmann grades 1-2 for good outcomes and 3-6 for bad outcomes) after VS surgery. METHODS: In a retrospective analysis of 256 patients, comprehensive pre-, intra-, and post-operative factors were examined. We applied the machine learning (ML) classifier Extreme Gradient Boosting (XGBoost) to the binary classification of long-term good versus bad FN outcome after VS surgery. To enhance the interpretability of our model, we utilized an explainable artificial intelligence approach. RESULTS: Short-term FN function (tau = 0.6) correlated with long-term FN function. The model exhibited an average accuracy of 0.83, a ROC AUC score of 0.91, and a Matthews correlation coefficient of 0.62. The most influential feature, identified through SHapley Additive exPlanations (SHAP), was short-term FN function. Conversely, large tumor volume and absence of preoperative auditory brainstem responses were associated with unfavorable outcomes. CONCLUSIONS: We introduce an effective ML model for classifying long-term FN outcomes following VS surgery. Short-term FN function was identified as the key predictor of long-term function. This model's excellent ability to differentiate bad and good outcomes makes it useful for evaluating patients and providing recommendations regarding FN dysfunction management.
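
The reported pipeline (XGBoost binary classifier plus SHAP) can be outlined as below; this is a minimal sketch with a synthetic stand-in for the 256-patient dataset, and the column names are assumptions for illustration only.

```python
# Minimal sketch of an XGBoost classifier + SHAP for a binary outcome.
# Data, feature names, and the toy outcome rule are placeholders.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = pd.DataFrame({
    "short_term_HB_grade": rng.integers(1, 7, 256),
    "tumor_volume_cm3": rng.gamma(2.0, 3.0, 256),
    "preop_ABR_present": rng.integers(0, 2, 256),
})
y = (X["short_term_HB_grade"] <= 2).astype(int)  # toy proxy for good long-term outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss").fit(X_tr, y_tr)
print("ROC AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP shows which features (here, short-term function) drive each prediction.
shap_values = shap.TreeExplainer(model).shap_values(X_te)
```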

6.
Front Artif Intell ; 7: 1381921, 2024.
Article in English | MEDLINE | ID: mdl-39372662

ABSTRACT

Time series classification is a challenging research area where machine learning and deep learning techniques have shown remarkable performance. However, these models are often seen as black boxes due to their minimal interpretability. On the one hand, there is a plethora of eXplainable AI (XAI) methods designed to elucidate the functioning of models trained on image and tabular data. On the other hand, adapting these methods to explain deep learning-based time series classifiers may not be straightforward due to the temporal nature of time series data. This research proposes a novel global post-hoc explainable method for unearthing the key time steps behind the inferences made by deep learning-based time series classifiers. The approach generates a decision-tree graph, that is, a specific set of rules that can be read as explanations, potentially enhancing interpretability. The methodology involves two major phases: (1) training and evaluating deep-learning-based time series classification models, and (2) extracting parameterized primitive events, such as increasing, decreasing, local maximum, and local minimum, from each instance of the evaluation set and clustering such events to extract prototypical ones. These prototypical primitive events are then used as input to a decision-tree classifier trained to fit the model predictions of the test set rather than the ground-truth data. Experiments were conducted on diverse real-world datasets sourced from the UCR archive, employing metrics such as accuracy, fidelity, robustness, number of nodes, and depth of the extracted rules. The findings indicate that this global post-hoc method can improve the global interpretability of complex time series classification models.
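
The core surrogate step (fitting a decision tree to the black-box predictions rather than the labels, then measuring fidelity) can be sketched as follows; the event-feature construction from prototypical primitive events is abstracted away and the data are synthetic.

```python
# Sketch of a global surrogate tree: fit on symbolic event features against the
# black-box model's predictions and report fidelity. Features and the stand-in
# black-box outputs are toy placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
event_features = rng.random((300, 12))                      # e.g., prototypical event counts
blackbox_preds = (event_features[:, 0] > 0.5).astype(int)   # stand-in for model outputs

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(event_features, blackbox_preds)               # fit to predictions, not labels

fidelity = accuracy_score(blackbox_preds, surrogate.predict(event_features))
print(f"fidelity to the black box: {fidelity:.2f}, nodes: {surrogate.tree_.node_count}")
```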

7.
Water Environ Res ; 96(10): e11140, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39382139

ABSTRACT

Chlorophyll-a (Chl-a) concentrations, a key indicator of algal blooms, were estimated using the XGBoost machine learning model with 23 variables, including water quality and meteorological factors. Model performance was evaluated using three indices: root mean square error (RMSE), the RMSE-observation standard deviation ratio (RSR), and Nash-Sutcliffe efficiency. Nine datasets were created by averaging 1-h data to cover time frequencies ranging from 1 h to 1 month. The datasets with relatively high observation frequencies (1-24 h) maintained stable performance, with an RSR ranging between 0.61 and 0.65. However, the model's performance declined significantly for datasets with weekly and monthly intervals. Shapley value (SHAP) analysis, an explainable artificial intelligence method, was further applied to provide a quantitative understanding of how environmental factors in the watershed affect the model's performance and to enhance the practical applicability of the model in the field. The number of input variables for model construction was increased sequentially from 1 to 23, starting from the variable with the highest SHAP value and ending with the lowest. The model's performance plateaued after five or more variables were considered, demonstrating that stable performance could be achieved using only a small number of variables, including relatively easily measured data collected by real-time sensors such as pH, dissolved oxygen, and turbidity. This result highlights the practicality of employing machine learning models and real-time sensor-based measurements for effective on-site water quality management. PRACTITIONER POINTS: XAI quantifies the effects of environmental factors on algal bloom prediction models. The effects of input variable frequency and seasonality were analyzed using XAI. XAI analysis of key variables ensures cost-effective model development.
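
The SHAP-guided variable screening described above can be sketched as a loop that ranks features by mean |SHAP| from a full model and retrains with the top-k variables; the data here are synthetic stand-ins for the 23 water-quality and meteorological predictors.

```python
# Sketch of SHAP-ranked incremental feature selection with XGBoost and RMSE/RSR
# tracking. Predictors and the toy Chl-a response are placeholders.
import numpy as np
import shap
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X = rng.random((1000, 23))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.2, 1000)   # toy Chl-a response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
full = XGBRegressor(n_estimators=300).fit(X_tr, y_tr)
ranking = np.argsort(np.abs(shap.TreeExplainer(full).shap_values(X_tr)).mean(axis=0))[::-1]

for k in (1, 3, 5, 10, 23):                                 # add variables in SHAP order
    cols = ranking[:k]
    m = XGBRegressor(n_estimators=300).fit(X_tr[:, cols], y_tr)
    rmse = mean_squared_error(y_te, m.predict(X_te[:, cols])) ** 0.5
    print(f"top-{k:2d} variables  RMSE={rmse:.3f}  RSR={rmse / y_te.std():.3f}")
```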


Subjects
Artificial Intelligence, Eutrophication, Environmental Monitoring/methods, Machine Learning, Chlorophyll A, Models, Theoretical, Water Quality
8.
Sci Rep ; 14(1): 23735, 2024 10 10.
Article in English | MEDLINE | ID: mdl-39390208

ABSTRACT

This study develops explainable artificial intelligence for predicting safe balance using hospital data, including clinical, neurophysiological, and diffusion tensor imaging properties. Retrospective data from 92 first-time stroke patients, collected from January 2016 to June 2023, were analysed. The dependent variables were independent mobility scores, i.e., the Berg Balance Scale dichotomized as 0 (45 or below) vs. 1 (above 45), measured at three and six months, respectively. Twenty-nine predictors were included. Random forest variable importance was employed to identify significant predictors of the Berg Balance Scale and to test its associations with the predictors, including the Berg Balance Scale after one month and corticospinal tract diffusion tensor imaging properties. Shapley Additive Explanation (SHAP) values were calculated to analyse the directions of these associations. The random forest registered a higher or similar area under the curve compared to logistic regression, i.e., 91% vs. 87% (Berg Balance Scale after three months) and 92% vs. 92% (Berg Balance Scale after six months). Based on random forest variable importance values and rankings: (1) the Berg Balance Scale after three months has strong associations with the Berg Balance Scale after one month, the Fugl-Meyer assessment scale, ipsilesional corticospinal tract fractional anisotropy, the fractional anisotropy laterality index, and age; (2) the Berg Balance Scale after six months has strong relationships with the Fugl-Meyer assessment scale, the Berg Balance Scale after one month, ankle plantar flexion muscle strength, knee extension muscle strength, and hip flexion muscle strength. These associations were positive in the SHAP summary plots. Including the Berg Balance Scale after one month, the Fugl-Meyer assessment scale, or ipsilesional corticospinal tract fractional anisotropy in the random forest increases the probability of the Berg Balance Scale after three months being above 45 by 0.11, 0.08, and 0.08, respectively. In conclusion, safe balance after stroke strongly correlates with initial motor function, the Fugl-Meyer assessment scale, and ipsilesional corticospinal tract fractional anisotropy. Diffusion tensor imaging information aids in developing explainable artificial intelligence for predicting safe balance after stroke.
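
The comparison reported here (random forest vs. logistic regression AUC for a dichotomized Berg Balance Scale outcome, with SHAP for effect directions) can be outlined with synthetic stand-ins; the study used 29 clinical and DTI predictors, whereas the five columns below are placeholders.

```python
# Sketch: random forest vs. logistic regression AUC + SHAP direction analysis.
# Data are synthetic placeholders for clinical/DTI predictors.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.random((92, 5))                                   # e.g., BBS@1mo, FMA, CST FA, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 92) > 1).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4, stratify=y)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("RF AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
print("LR AUC:", roc_auc_score(y_te, lr.predict_proba(X_te)[:, 1]))

shap_values = shap.TreeExplainer(rf).shap_values(X_te)    # sign gives direction of effect
```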


Subjects
Artificial Intelligence, Diffusion Tensor Imaging, Postural Balance, Stroke, Humans, Female, Male, Postural Balance/physiology, Stroke/diagnostic imaging, Stroke/physiopathology, Middle Aged, Aged, Diffusion Tensor Imaging/methods, Retrospective Studies, Stroke Rehabilitation/methods
9.
Front Med (Lausanne) ; 11: 1438720, 2024.
Article in English | MEDLINE | ID: mdl-39328315

ABSTRACT

Emotion recognition is a way of detecting, evaluating, interpreting, and responding to others' emotional states and feelings, which may range from delight to fear to disgust. There is increasing interest in the domains of psychological computing and human-computer interaction (HCI), especially in Emotion Recognition (ER) in Virtual Reality (VR). Human emotions and mental states are effectively captured using Electroencephalography (EEG), and there has been a growing need for analysis in VR settings. In this study, we investigated emotion recognition in a VR environment using explainable machine learning and deep learning techniques. Specifically, we employed Support Vector Classifiers (SVC), K-Nearest Neighbors (KNN), Logistic Regression (LR), Deep Neural Networks (DNN), a DNN with a flattened layer, Bi-directional Long Short-Term Memory (Bi-LSTM), and an Attention LSTM. This research utilized an effective multimodal dataset named VREED (VR Eyes: Emotions Dataset) for emotion recognition. The dataset was first reduced to binary and multi-class categories. We then processed the dataset to handle missing values and applied normalization techniques to enhance data consistency. Subsequently, explainable Machine Learning (ML) and Deep Learning (DL) classifiers were employed to predict emotions in VR. Experimental analysis and results indicate that the Attention LSTM model excelled in binary classification, while both the DNN and the Attention LSTM achieved outstanding performance in multi-class classification, with up to 99.99% accuracy. These findings underscore the efficacy of integrating VR with advanced, explainable ML and DL methods for emotion recognition.
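
An Attention-LSTM of the kind named here can be sketched compactly; this is not the authors' network, and the input shape (a multimodal time series) and sizes are illustrative assumptions.

```python
# Compact Attention Bi-LSTM sketch: a Bi-LSTM encodes the time series and a
# learned attention over time steps pools it before classification.
import torch
import torch.nn as nn

class AttentionBiLSTM(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.att = nn.Linear(2 * hidden, 1)           # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                             # x: (batch, time, features)
        h, _ = self.lstm(x)                           # (batch, time, 2*hidden)
        w = torch.softmax(self.att(h), dim=1)         # attention weights over time
        context = (w * h).sum(dim=1)                  # weighted temporal pooling
        return self.head(context)

logits = AttentionBiLSTM()(torch.randn(4, 200, 8))
print(logits.shape)  # torch.Size([4, 2])
```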

10.
Ibrain ; 10(3): 245-265, 2024.
Article in English | MEDLINE | ID: mdl-39346792

ABSTRACT

The rapid advancement of artificial intelligence (AI) has sparked renewed discussions on its trustworthiness and the concept of eXplainable AI (XAI). Recent research in neuroscience has emphasized the relevance of XAI in studying cognition. This scoping review aims to identify and analyze various XAI methods used to study the mechanisms and features of cognitive function and dysfunction. In this study, the collected evidence is qualitatively assessed to develop an effective framework for approaching XAI in cognitive neuroscience. Based on the Joanna Briggs Institute guidelines and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for scoping reviews (PRISMA-ScR), we searched for peer-reviewed articles on MEDLINE, Embase, Web of Science, the Cochrane Central Register of Controlled Trials, and Google Scholar. Two reviewers performed data screening, extraction, and thematic analysis in parallel. Twelve eligible experimental studies published in the past decade were included. The results showed that the majority (75%) focused on normal cognitive functions such as perception, social cognition, language, executive function, and memory, while the others (25%) examined impaired cognition. The predominant XAI methods employed were intrinsic XAI (58.3%), followed by attribution-based (41.7%) and example-based (8.3%) post hoc methods. Explainability was applied at a local (66.7%) or global (33.3%) scope. The findings, predominantly correlational, were anatomical (83.3%) or nonanatomical (16.7%). In conclusion, while these XAI techniques were lauded for their predictive power, robustness, testability, and plausibility, limitations included oversimplification, confounding factors, and inconsistencies. The reviewed studies showcased the potential of XAI models while acknowledging current challenges in causality and oversimplification, particularly emphasizing the need for reproducibility.

11.
Front Mol Biosci ; 11: 1429281, 2024.
Article in English | MEDLINE | ID: mdl-39314212

ABSTRACT

The COVID-19 pandemic, caused by SARS-CoV-2, has led to significant challenges worldwide, including diverse clinical outcomes and prolonged post-recovery symptoms known as Long COVID or Post-COVID-19 syndrome. Emerging evidence suggests a crucial role of metabolic reprogramming in the infection's long-term consequences. This study employs a novel approach utilizing machine learning (ML) and explainable artificial intelligence (XAI) to analyze metabolic alterations in COVID-19 and Post-COVID-19 patients. Samples from a cohort of 142 COVID-19 patients, 48 Post-COVID-19 patients, and 38 controls were analyzed, covering 111 identified metabolites. Traditional analysis methods, like PCA and PLS-DA, were compared with ML techniques, particularly eXtreme Gradient Boosting (XGBoost) enhanced by SHAP (SHapley Additive exPlanations) values for explainability. XGBoost, combined with SHAP, outperformed traditional methods, demonstrating superior predictive performance and providing new insights into the metabolic basis of the disease's progression and aftermath. The analysis revealed metabolomic subgroups within the COVID-19 and Post-COVID-19 conditions, suggesting heterogeneous metabolic responses to the infection and its long-term impacts. Key metabolic signatures in Post-COVID-19 include taurine, glutamine, alpha-ketoglutaric acid, and LysoPC a C16:0. This study highlights the potential of integrating ML and XAI for fine-grained analysis in metabolomics research, offering a more detailed understanding of metabolic anomalies in COVID-19 and Post-COVID-19 conditions.

12.
Comput Med Imaging Graph ; 117: 102433, 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39276433

ABSTRACT

Oral squamous cell carcinoma recognition presents a challenge due to late diagnosis and costly data acquisition. A cost-efficient, computerized screening system is crucial for early disease detection, minimizing the need for expert intervention and expensive analysis. Moreover, transparency is essential to align these systems with critical-sector applications. Explainable Artificial Intelligence (XAI) provides techniques for understanding models. However, current XAI is mostly data-driven and focused on addressing developers' requirements for improving models rather than clinical users' demands for expressing relevant insights. Among different XAI strategies, we propose a solution that combines the Case-Based Reasoning paradigm, to provide visual output explanations, with Informed Deep Learning (IDL), to integrate medical knowledge within the system. A key aspect of our solution lies in its capability to handle data imperfections, including labeling inaccuracies and artifacts, thanks to an ensemble architecture on top of the deep learning (DL) workflow. We conducted several experimental benchmarks on a dataset collected in collaboration with medical centers. Our findings reveal that employing the IDL approach yields an accuracy of 85%, surpassing the 77% accuracy achieved by DL alone. Furthermore, we measured the human-centered explainability of the two approaches and found that IDL generates explanations more congruent with clinical users' demands.

13.
Biosens Bioelectron ; 267: 116773, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39277920

ABSTRACT

The Prostate Imaging Reporting and Data System (PI-RADS) score, a reporting system for prostate MRI, has become a standard prostate cancer (PCa) screening method due to its exceptional diagnostic performance. However, PI-RADS 3 lesions remain an unmet medical need because PI-RADS provides a diagnostic accuracy of only 30-40% at most, accompanied by a high false-positive rate. Here, we propose an explainable artificial intelligence (XAI) based PCa screening system integrating a highly sensitive dual-gate field-effect transistor (DGFET) based multi-marker biosensor for identifying ambiguous lesions. This system produces interpretable results by analyzing the sensing patterns of three urinary exosomal biomarkers, opening the possibility of evidence-based predictions by clinicians. In our results, the XAI-based PCa screening system showed high accuracy, with an AUC of 0.93 on 102 blinded samples using this non-invasive method. Remarkably, the PCa diagnostic accuracy for patients with PI-RADS 3 was more than twice that of conventional PI-RADS scoring. Our system also provided a reasonable explanation of its decisions, identifying the TMEM256 biomarker as the leading factor for screening those with PI-RADS 3. Our study implies that XAI can facilitate informed decisions, guided by insights into the significance of visualized multi-biomarkers and clinical factors. The XAI-based sensor system can assist healthcare professionals in providing practical and evidence-based PCa diagnoses.

14.
JAMIA Open ; 7(3): ooae074, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39282081

ABSTRACT

Objective: This study aimed to investigate the capability of historical patient records to predict adverse patient outcomes such as mortality, readmission, and prolonged length of stay (PLOS). Methods: Leveraging a de-identified dataset from a tertiary care university hospital, we developed an eXplainable Artificial Intelligence (XAI) framework combining tree-based and traditional machine learning (ML) models with interpretations and statistical analysis of predictors of mortality, readmission, and PLOS. Results: Our framework demonstrated exceptional predictive performance, with a notable area under the receiver operating characteristic curve (AUROC) of 0.9625 and an area under the precision-recall curve (AUPRC) of 0.8575 for 30-day mortality at discharge, and an AUROC of 0.9545 and AUPRC of 0.8419 at admission. For readmission and PLOS risk, the highest AUROCs achieved were 0.8198 and 0.9797, respectively. The tree-based models consistently outperformed the traditional ML models in all four prediction tasks. The key predictors were age, derived temporal features, routine laboratory tests, and diagnostic and procedural codes. Conclusion: The study underscores the potential of leveraging medical history for enhanced hospital predictive analytics. We present an accurate and intuitive framework for early warning models that can be easily implemented in current and developing digital health platforms to predict adverse outcomes accurately.
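
The evaluation protocol described (tree-based vs. traditional models scored with AUROC and AUPRC) can be sketched as below; the feature matrix and outcome are synthetic stand-ins for the age, temporal, laboratory, and code-derived predictors.

```python
# Sketch of AUROC/AUPRC evaluation of a tree-based model vs. a traditional
# baseline on a held-out set. All data are synthetic placeholders.
import numpy as np
from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(5)
X = rng.random((5000, 30))
y = (X[:, 0] * 2 + X[:, 1] + rng.normal(0, 0.5, 5000) > 2).astype(int)  # toy 30-day mortality
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5, stratify=y)

for name, model in [("XGBoost", XGBClassifier(n_estimators=300, eval_metric="logloss")),
                    ("LogReg", LogisticRegression(max_iter=2000))]:
    p = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    print(f"{name}: AUROC={roc_auc_score(y_te, p):.4f}  "
          f"AUPRC={average_precision_score(y_te, p):.4f}")
```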

15.
Front Artif Intell ; 7: 1456069, 2024.
Article in English | MEDLINE | ID: mdl-39286548

ABSTRACT

Early detection of Alzheimer's disease (AD) is vital for effective treatment, as interventions are most successful in the disease's early stages. Combining Magnetic Resonance Imaging (MRI) with artificial intelligence (AI) offers significant potential for enhancing AD diagnosis. However, traditional AI models often lack transparency in their decision-making processes. Explainable Artificial Intelligence (XAI) is an evolving field that aims to make AI decisions understandable to humans, providing transparency and insight into AI systems. This research introduces the Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF) framework for early AD detection using MRI scans. The SECNN-RF integrates Squeeze-and-Excitation (SE) blocks into a Convolutional Neural Network (CNN) to focus on crucial features and uses Dropout layers to prevent overfitting. It then employs a Random Forest classifier to accurately categorize the extracted features. The SECNN-RF demonstrates high accuracy (99.89%) and offers an explainable analysis, enhancing the model's interpretability. Further exploration of the SECNN framework involved substituting the Random Forest classifier with other machine learning algorithms such as Decision Tree, XGBoost, Support Vector Machine, and Gradient Boosting. While all these classifiers improved model performance, Random Forest achieved the highest accuracy, followed closely by XGBoost, Gradient Boosting, and Support Vector Machine, with Decision Tree achieving the lowest accuracy.
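
The general SECNN-RF idea (a CNN with a Squeeze-and-Excitation block feeding features to a Random Forest) can be illustrated with a toy sketch; this is not the published architecture, and the layer sizes, random inputs, and labels are assumptions.

```python
# Illustrative sketch: SE-augmented CNN features classified by a Random Forest.
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))          # squeeze: global average pool
        return x * w[:, :, None, None]           # excite: per-channel reweighting

backbone = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    SEBlock(32), nn.AdaptiveAvgPool2d(1), nn.Flatten())

with torch.no_grad():
    feats = backbone(torch.randn(64, 1, 128, 128)).numpy()   # stand-in MRI batch
labels = (feats[:, 0] > feats[:, 0].mean()).astype(int)      # toy AD/normal labels
rf = RandomForestClassifier(n_estimators=200).fit(feats, labels)
print("training accuracy:", rf.score(feats, labels))
```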

16.
Br J Educ Psychol ; 2024 Sep 22.
Article in English | MEDLINE | ID: mdl-39307843

ABSTRACT

BACKGROUND: Given that students from socio-economically disadvantaged family backgrounds are more likely to suffer from low academic performance, there is an interest in identifying features of academic resilience that may mitigate the relationship between disadvantaged socio-economic status and academic performance. AIMS: This study sought to combine machine learning and explainable artificial intelligence (XAI) techniques to identify key features of academic resilience in mathematics learning during COVID-19. MATERIALS AND METHODS: Based on PISA 2022 data from 79 countries/economies, the random forest model coupled with the Shapley additive explanations (SHAP) value technique not only uncovered the key features of academic resilience but also examined the contribution of each key feature. RESULTS: Findings indicated that 35 features were identified in the classification of academically resilient and non-academically resilient students, which largely validated the previous academic resilience framework. Notably, gender differences were shown in the distribution of some key features. Research findings also indicated that resilient students tended to have a stable emotional state, high levels of self-efficacy, low levels of truancy, and positive future aspirations. DISCUSSION: This study establishes an essentially methodological research paradigm to bridge the gap between psychological theories and big data in the field of educational psychology. CONCLUSION: In summary, our study sheds light on issues of education equity and quality from a global perspective in the time of the COVID-19 pandemic.

17.
J Environ Manage ; 370: 122361, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39255573

ABSTRACT

This research harnesses geospatial artificial intelligence (GeoAI), employing the categorical boosting (CatBoost) machine learning model in conjunction with two metaheuristic algorithms, the firefly algorithm (CatBoost-FA) and the fruit fly optimization algorithm (CatBoost-FOA), to spatially assess and map noise-pollution-prone areas in Tehran, Iran. To spatially model areas susceptible to noise pollution, we established a comprehensive spatial database encompassing the annual average Leq (equivalent continuous sound level) from 2019 to 2022. This database was enriched with critical spatial criteria influencing noise pollution, including urban land use, traffic volume, population density, and the normalized difference vegetation index (NDVI). Our study evaluated the predictive accuracy of these models using key performance metrics, including root mean square error (RMSE), mean absolute error (MAE), and receiver operating characteristic (ROC) indices. The results demonstrated the superior performance of the CatBoost-FA algorithm, with RMSE and MAE values of 0.159 and 0.114 for the training data and 0.437 and 0.371 for the test data, outperforming both the CatBoost-FOA and CatBoost models. ROC analysis further confirmed the efficacy of the models, with CatBoost-FA achieving an accuracy of 0.897, CatBoost-FOA an accuracy of 0.871, and CatBoost an accuracy of 0.846, highlighting their robust modeling capabilities. Additionally, we employed an explainable artificial intelligence (XAI) approach, utilizing the SHAP (Shapley Additive Explanations) method to interpret the underlying mechanisms of our models. The SHAP results revealed the significant influence of various factors on noise-pollution-prone areas, with airport, commercial, and administrative zones emerging as pivotal contributors.
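
The CatBoost-plus-SHAP portion of this workflow can be sketched as follows (the firefly and fruit fly hyperparameter searches are omitted); the predictors are synthetic stand-ins for the land-use, traffic, population-density, and NDVI layers.

```python
# Sketch of CatBoost regression with SHAP-based interpretation.
# Data and the toy Leq response are placeholders.
import numpy as np
from catboost import CatBoostRegressor, Pool
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(6)
X = rng.random((2000, 4))                                      # traffic, land use, density, NDVI
y = 55 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 2000)   # toy Leq (dB)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=6)

model = CatBoostRegressor(iterations=500, depth=6, verbose=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5, "MAE:", mean_absolute_error(y_te, pred))

# Per-prediction SHAP values; the last column is the expected (base) value.
shap_matrix = model.get_feature_importance(Pool(X_te, y_te), type="ShapValues")
print("mean |SHAP| per predictor:", np.abs(shap_matrix[:, :-1]).mean(axis=0))
```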

18.
Article in English | MEDLINE | ID: mdl-39269692

ABSTRACT

Brain-computer interface (BCI) systems based on motor imagery typically rely on a large number of electrode channels to acquire information. The rational selection of electroencephalography (EEG) channel combinations is crucial for optimizing computational efficiency and enhancing practical applicability. However, evaluating all potential channel combinations individually is impractical. This study aims to explore a strategy for quickly achieving a balance between maximizing channel reduction and minimizing precision loss. To this end, we developed a spatio-temporal attention perception network named STAPNet. Based on the channel contributions adaptively generated by its subnetwork, we propose an extended-step bi-directional search strategy comprising variable ratio channel selection (VRCS) and strided greedy channel selection (SGCS), designed to enhance global search capabilities and accelerate the optimization process. Experimental results show that on the High Gamma and BCI Competition IV 2a public datasets, the framework achieved average maximum accuracies of 91.47% and 84.17%, respectively. Under conditions of zero precision loss, the average number of channels was reduced by up to 87.5%. Additionally, to investigate the impact of the neural information loss caused by channel reduction on the interpretation of complex brain functions, we employed a heatmap visualization algorithm to verify the universal importance and complete symmetry of the selected optimal channel combination across multiple datasets. This is consistent with the brain's cooperative mechanism when processing tasks involving both the left and right hands.
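
A score-guided, strided greedy channel selection loop approximating the SGCS idea can be sketched as below; this is not the STAPNet implementation, and the contribution scores, classifier, and data are toy stand-ins.

```python
# Hedged sketch of strided greedy channel selection guided by per-channel
# contribution scores. Scores, classifier, and data are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_channels, n_feats = 200, 22, 4
X = rng.random((n_trials, n_channels, n_feats))             # per-channel features
y = rng.integers(0, 2, n_trials)

contribution = np.abs(X.mean(axis=(0, 2)) - 0.5)            # stand-in for learned scores
order = np.argsort(contribution)[::-1]

best_acc, best_subset, stride = 0.0, [], 2
for k in range(stride, n_channels + 1, stride):             # strided greedy growth
    subset = order[:k]
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, subset, :].reshape(n_trials, -1), y, cv=5).mean()
    if acc > best_acc:
        best_acc, best_subset = acc, list(subset)
print(f"selected {len(best_subset)} of {n_channels} channels, CV accuracy {best_acc:.2f}")
```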

19.
Sci Rep ; 14(1): 20940, 2024 09 09.
Article in English | MEDLINE | ID: mdl-39251780

ABSTRACT

Recent advancements in artificial intelligence (AI) have prompted researchers to expand into the field of oculomics, the study of associations between the retina and systemic health. Unlike conventional AI models trained on well-recognized retinal features, the retinal phenotypes that most oculomics models use are more subtle. Consequently, applying conventional tools, such as saliency maps, to understand how oculomics models arrive at their inferences is problematic and open to bias. We hypothesized that neuron activation patterns (NAPs) could be an alternative way to interpret oculomics models, but currently, most existing implementations focus on failure diagnosis. In this study, we designed a novel NAP framework to interpret an oculomics model. We then applied our framework to an AI model predicting systolic blood pressure from fundus images in the United Kingdom Biobank dataset. We found that the NAP generated from our framework correlated with the clinically relevant endpoint of cardiovascular risk. Our NAP was also able to discern two biologically distinct groups among participants who were assigned the same predicted systolic blood pressure. These results demonstrate the feasibility of our proposed NAP framework for gaining deeper insights into the functioning of oculomics models. Further work is required to validate these results on external datasets.
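
Extracting a neuron activation pattern from an image model is typically done with forward hooks; the sketch below shows this generic step with a placeholder network and random inputs, not the oculomics model used in the study.

```python
# Sketch: collect penultimate-layer activations (a simple NAP) via a forward hook.
# Model and inputs are placeholders for the fundus-image model and data.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
activations = {}

def hook(module, inputs, output):
    activations["avgpool"] = output.flatten(1).detach()     # (batch, 512) NAP vectors

handle = model.avgpool.register_forward_hook(hook)
with torch.no_grad():
    model(torch.randn(8, 3, 224, 224))                      # stand-in fundus images
handle.remove()

naps = activations["avgpool"]
print(naps.shape)  # per-image activation pattern, e.g., torch.Size([8, 512])
```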


Subjects
Artificial Intelligence, Humans, Neurons/physiology, Blood Pressure/physiology, Male, Female, United Kingdom, Retina/physiology, Middle Aged
20.
Clin Neurophysiol ; 167: 14-25, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39265288

ABSTRACT

OBJECTIVE: Clinical visual intraoperative electrocorticography (ioECoG) reading aims to localize epileptic tissue and improve epilepsy surgery outcome. We aimed to understand whether machine learning (ML) could complement ioECoG reading, how subgroups affected performance, and which ioECoG features were most important. METHODS: We included 91 ioECoG-guided epilepsy surgery patients with Engel 1A outcome. We allocated 71 patients to the training set and 20 to the test set. We trained an extra trees classifier (ETC) with 14 spectral features to classify ioECoG channels as covering resected or non-resected tissue. We compared the ETC's performance with clinical ioECoG reading and assessed whether patient subgroups affected performance. Explainable artificial intelligence (xAI) unveiled the most important ioECoG features learnt by the ETC. RESULTS: The ETC outperformed clinical reading in five test set patients, was inferior in six, and both were inconclusive in nine. The ETC performed best in the tumor subgroup (area under ROC curve: 0.84 [95% CI 0.79-0.89]). xAI revealed predictors of resected (relative theta, alpha, and fast ripple power) and non-resected tissue (relative beta and gamma power). CONCLUSIONS: Combinations of subtle spectral ioECoG changes, imperceptible to the human eye, can aid the discrimination of healthy and pathological tissue. SIGNIFICANCE: ML with spectral ioECoG features can support, rather than replace, clinical ioECoG reading, particularly in tumors.
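
The general spectral-feature-plus-extra-trees setup can be sketched as below; the relative band powers are computed with Welch's method, and the signals, band choices, and labels are synthetic stand-ins rather than the study's ioECoG data or exact feature set.

```python
# Sketch: per-channel relative band powers (Welch) fed to an ExtraTreesClassifier
# to label channels as resected vs. non-resected. All data are placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

fs = 2048
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30),
         "gamma": (30, 80), "fast_ripple": (250, 500)}

def relative_band_powers(signal):
    f, pxx = welch(signal, fs=fs, nperseg=fs)
    return [pxx[(f >= lo) & (f < hi)].sum() / pxx.sum() for lo, hi in bands.values()]

rng = np.random.default_rng(8)
signals = rng.normal(size=(200, 10 * fs))                   # 200 channels, 10 s each
X = np.array([relative_band_powers(s) for s in signals])
y = rng.integers(0, 2, 200)                                 # resected (1) vs. not (0)

etc = ExtraTreesClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(etc, X, y, cv=5).mean())
```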
