Results 1 - 2 of 2
1.
Cancers (Basel). 2023 Apr 25;15(9).
Article in English | MEDLINE | ID: mdl-37173930

ABSTRACT

Despite the unprecedented performance of deep neural networks (DNNs) in computer vision, their clinical application to the diagnosis and prognosis of cancer from medical imaging has been limited. One of the critical challenges in integrating diagnostic DNNs into radiological and oncological workflows is their lack of interpretability, which prevents clinicians from understanding model predictions. We therefore propose integrating expert-derived radiomics and DNN-predicted biomarkers in interpretable classifiers, which we refer to as ConRad, for computed tomography (CT) scans of lung cancer. Importantly, the tumor biomarkers are predicted by a concept bottleneck model (CBM), so that once trained, ConRad does not require labor-intensive and time-consuming biomarker annotations; in our evaluation and practical application, the only input to ConRad is a segmented CT scan. The proposed model was compared to convolutional neural networks (CNNs) acting as black-box classifiers. We further evaluated all combinations of radiomics features, predicted biomarkers, and CNN features in five different classifiers. In five-fold cross-validation, the ConRad models using a nonlinear SVM and logistic regression with the Lasso outperformed the others, with interpretability being ConRad's primary advantage. The Lasso is used for feature selection, substantially reducing the number of nonzero weights while increasing accuracy. Overall, the proposed ConRad model combines CBM-derived biomarkers and radiomics features in an interpretable ML model that demonstrates excellent performance for lung nodule malignancy classification.
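
A minimal sketch of the classifier stage described above: radiomics features and CBM-predicted biomarkers are concatenated and fed to an L1-penalized (Lasso) logistic regression, evaluated with five-fold cross-validation. This is not the authors' implementation; the arrays here are random stand-ins for real CT-derived features, and all dimensions and hyperparameters are illustrative assumptions.

    # Hypothetical ConRad-style setup: combine interpretable features,
    # then fit a Lasso logistic regression with five-fold CV.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 200                                  # number of segmented CT nodules (assumed)
    radiomics = rng.normal(size=(n, 100))    # stand-in expert-derived radiomics features
    biomarkers = rng.normal(size=(n, 8))     # stand-in CBM-predicted tumor biomarkers
    X = np.hstack([radiomics, biomarkers])   # combined interpretable feature set
    y = rng.integers(0, 2, size=n)           # malignant (1) vs. benign (0) labels

    # The L1 penalty drives most weights to zero, performing feature
    # selection inside the classifier, as the abstract describes.
    clf = make_pipeline(
        StandardScaler(),
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1),
    )
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"5-fold CV accuracy: {scores.mean():.3f}")

Because the surviving nonzero coefficients map directly to named radiomics features or biomarkers, a clinician can inspect which concepts drove each prediction, which is the interpretability advantage the abstract emphasizes.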

2.
Phys Med. 2021 Mar;83:108-121.
Article in English | MEDLINE | ID: mdl-33765601

ABSTRACT

Over the last decade, the field of Artificial Intelligence (AI) has evolved extensively. Modern radiation oncology relies on advanced computational methods that aim at personalization and high diagnostic and therapeutic precision. The quantity of available imaging data and rapid developments in Machine Learning (ML), particularly Deep Learning (DL), have triggered research into uncovering "hidden" biomarkers and quantitative features from anatomical and functional medical images. Deep Neural Networks (DNNs) have achieved outstanding performance and broad adoption in image processing tasks. Lately, DNNs have been considered for radiomics, and their potential for explainable AI (XAI) may aid classification and prediction in clinical practice. However, most such studies use limited datasets and lack generalizable applicability. In this study, we review the basics of radiomics feature extraction, DNNs in image analysis, and the major interpretability methods that help enable explainable AI. Furthermore, we discuss the crucial requirement of multicenter recruitment of large datasets, which increases biomarker variability, so as to establish the potential clinical value of radiomics and support the development of robust explainable AI models.
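
As an illustration of one model-agnostic interpretability method of the kind such XAI reviews cover, the sketch below computes permutation importance, which scores each feature by the drop in held-out accuracy when that feature is shuffled. This example is not taken from the paper; the data are synthetic stand-ins for radiomics features, and the model choice is an assumption.

    # Permutation importance on synthetic "radiomics" features:
    # shuffle one feature at a time and measure the accuracy drop.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    X = rng.normal(size=(300, 10))                  # stand-in radiomics features
    y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # label depends on features 0 and 3

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # n_repeats averages over multiple shuffles for a stabler estimate.
    result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1][:3]:
        print(f"feature {i}: importance {result.importances_mean[i]:.3f}")

On this synthetic data the method should rank features 0 and 3 highest, mirroring how, on real radiomics inputs, it would surface which quantitative image features a classifier actually relies on.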


Subject(s)
Artificial Intelligence , Deep Learning , Image Processing, Computer-Assisted , Machine Learning , Multicenter Studies as Topic , Neural Networks, Computer