Results 1 - 4 of 4
1.
RSC Med Chem ; 15(5): 1547-1555, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38784468

ABSTRACT

Most machine learning (ML) methods produce predictions that are hard or impossible to understand. The black box nature of predictive models obscures potential learning bias and makes it difficult to recognize and trace problems. Moreover, the inability to rationalize model decisions causes reluctance to accept predictions for experimental design. For ML, limited trust in predictions presents a substantial problem and continues to limit its impact in interdisciplinary research, including early-phase drug discovery. As a desirable remedy, approaches from explainable artificial intelligence (XAI) are increasingly applied to shed light on the ML black box and help to rationalize predictions. Among these is the concept of counterfactuals (CFs), which are best understood as test cases with small modifications yielding opposing prediction outcomes (such as different class labels in object classification). For ML applications in medicinal chemistry, for example, compound activity predictions, CFs are particularly intuitive because these hypothetical molecules enable immediate comparisons with actual test compounds that do not require expert ML knowledge and are accessible to practicing chemists. Such comparisons often reveal structural moieties in compounds that determine their predictions and can be further investigated. Herein, we adapt and extend a recently introduced concept for the systematic generation of molecular CFs to multi-task predictions of different classes of protein kinase inhibitors, analyze CFs in detail, rationalize the origins of CF formation in multi-task modeling, and present exemplary explanations of predictions.
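
The CF concept summarized above lends itself to a compact illustration. The following is a minimal sketch, not code from the paper: given a fitted scikit-learn-style classifier and a pool of candidate analogues (both assumed), it flags analogues whose predicted class label differs from that of the test compound. The `featurize` helper and the Morgan fingerprint settings are illustrative choices.

```python
# Minimal counterfactual (CF) search sketch; `model` is any fitted
# scikit-learn-style binary classifier (assumed), and `candidate_analogues`
# is an assumed pool of structural analogues given as SMILES strings.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def featurize(smiles: str) -> np.ndarray:
    """Encode a molecule as a 2048-bit Morgan fingerprint (radius 2)."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    arr = np.zeros((2048,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

def find_counterfactuals(model, test_smiles, candidate_analogues):
    """Return analogues whose predicted class opposes the test compound's."""
    y_ref = model.predict(featurize(test_smiles).reshape(1, -1))[0]
    cfs = []
    for smi in candidate_analogues:
        y = model.predict(featurize(smi).reshape(1, -1))[0]
        if y != y_ref:  # prediction flips -> candidate is a CF
            cfs.append((smi, y))
    return cfs
```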

2.
ChemMedChem ; 19(3): e202300586, 2024 02 01.
Article in English | MEDLINE | ID: mdl-37983655

ABSTRACT

The use of black box machine learning models whose decisions cannot be understood limits the acceptance of predictions in interdisciplinary research and camouflages learning artifacts that lead to predictions for reasons other than those anticipated. Consequently, there is increasing interest in explainable artificial intelligence to rationalize predictions and uncover potential pitfalls. Relevant approaches include feature attribution methods, which identify molecular structures determining predictions, and counterfactuals (CFs) or contrastive explanations. CFs are defined as variants of test instances with minimal modifications that lead to opposing predictions. In medicinal chemistry, CFs have thus far been little investigated, although they are particularly intuitive from a chemical perspective. We introduce a new methodology for the systematic generation of CFs that is centered on well-defined structural analogues of test compounds. The approach is transparent, computationally straightforward, and shown to provide a wealth of CFs for test sets. The method is made freely available.


Subject(s)
Artificial Intelligence , Machine Learning , Chemistry, Pharmaceutical , Recombination, Genetic
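
As a rough illustration of analogue-centered CF generation, the sketch below enumerates simple single-site analogues by swapping halogen substituents. The substituent set and the SMARTS pattern are illustrative assumptions and do not reproduce the analogue design described in the abstract.

```python
# Illustrative single-site analogue enumeration (assumed scheme): each
# halogen in the test compound is replaced by another small substituent.
from rdkit import Chem
from rdkit.Chem import AllChem

SUBSTITUENTS = ["F", "Cl", "C", "O", "N"]  # hypothetical replacement set
HALOGEN = Chem.MolFromSmarts("[F,Cl,Br,I]")

def enumerate_analogues(smiles: str) -> list:
    """Return SMILES of single-site halogen-replacement analogues."""
    mol = Chem.MolFromSmiles(smiles)
    analogues = set()
    for sub in SUBSTITUENTS:
        repl = Chem.MolFromSmiles(sub)
        # One product per matched halogen (replaceAll=False by default).
        for prod in AllChem.ReplaceSubstructs(mol, HALOGEN, repl):
            try:
                Chem.SanitizeMol(prod)
                analogues.add(Chem.MolToSmiles(prod))
            except Exception:
                continue  # skip chemically invalid products
    analogues.discard(Chem.MolToSmiles(mol))
    return sorted(analogues)

# Feeding these analogues to find_counterfactuals() from the first sketch
# yields CF candidates among well-defined structural analogues.
print(enumerate_analogues("Clc1ccccc1F"))  # toy example
```
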
3.
Molecules ; 28(14)2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37513472

ABSTRACT

Most machine learning (ML) models produce black box predictions that are difficult, if not impossible, to understand. In pharmaceutical research, black box predictions work against the acceptance of ML models for guiding experimental work. Hence, there is increasing interest in approaches for explainable ML, a part of explainable artificial intelligence (XAI), to better understand prediction outcomes. Herein, we have devised a test system for the rationalization of multiclass compound activity prediction models that combines two XAI approaches for feature relevance or importance analysis: counterfactuals (CFs) and Shapley additive explanations (SHAP). For compounds with different single- and dual-target activities, we identified small compound modifications that induce feature changes inverting class label predictions. In combination with feature mapping, CF and SHAP value calculations provide chemically intuitive explanations for model decisions.
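
A compact sketch of how CF pairs and SHAP values can be combined, assuming the `shap` package, a fitted tree-based classifier `model`, and the `featurize` helper from the first sketch; the pairing logic is an illustrative reading of the abstract, not the authors' test system.

```python
# Compare SHAP attributions between a test compound and one of its CFs,
# restricted to the fingerprint bits that differ between the two.
import numpy as np
import shap

def explain_cf_pair(model, test_smiles, cf_smiles):
    """Rank changed fingerprint bits by their shift in SHAP attribution."""
    X = np.vstack([featurize(test_smiles), featurize(cf_smiles)])
    explainer = shap.TreeExplainer(model)
    sv = explainer.shap_values(X)
    if isinstance(sv, list):      # older SHAP: one array per class
        sv = sv[1]
    elif sv.ndim == 3:            # newer SHAP: (samples, features, classes)
        sv = sv[..., 1]
    changed = np.flatnonzero(X[0] != X[1])   # bits altered by the CF edit
    delta = sv[1, changed] - sv[0, changed]  # attribution shift per bit
    order = np.argsort(-np.abs(delta))
    return [(int(changed[i]), float(delta[i])) for i in order]
```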

4.
Molecules ; 28(2)2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36677879

ABSTRACT

In drug discovery, compounds with well-defined activity against multiple targets (multitarget compounds, MT-CPDs) provide the basis for polypharmacology and are thus of high interest. Typically, MT-CPDs for polypharmacology have been discovered serendipitously. Therefore, over the past decade, computational approaches have also been adapted for the design of MT-CPDs or their identification via computational screening. Such approaches continue to be under development and are far from routine. Recently, different machine learning (ML) models have been derived to distinguish between MT-CPDs and corresponding compounds with activity against the individual targets (single-target compounds, ST-CPDs). When evaluating alternative models for predicting MT-CPDs, we discovered that MT-CPDs could also be accurately predicted with models derived for the corresponding ST-CPDs; this unexpected finding was further investigated using explainable ML. The analysis revealed that accurate predictions of ST-CPDs were determined by subsets of the structural features of MT-CPDs required for their prediction. These findings provided a chemically intuitive rationale for the successful prediction of MT-CPDs using different ML models and uncovered general feature-subset relationships between MT- and ST-CPDs with activities against different targets.


Subject(s)
Drug Discovery , Machine Learning , Polypharmacology
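
The cross-prediction experiment described above can be outlined in a few lines. This is a minimal sketch under stated assumptions: feature matrices and labels for ST-CPD and MT-CPD data (`X_st`, `y_st`, `X_mt`, `y_mt`) are assumed to be available, and the random forest settings are illustrative.

```python
# Sketch of the ST-CPD -> MT-CPD cross-prediction setup (assumed data):
# a classifier trained on single-target compounds is evaluated on
# multitarget compounds to test whether ST models recognize MT-CPDs.
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score

def cross_predict(X_st, y_st, X_mt, y_mt):
    """Train on ST-CPD data and report balanced accuracy on MT-CPDs."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_st, y_st)
    return balanced_accuracy_score(y_mt, model.predict(X_mt))
```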