1.
Comput Biol Med; 177: 108614, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38796884

ABSTRACT

Integration analysis of cancer multi-omics data for pan-cancer classification has potential clinical applications in areas such as tumor diagnosis, identification of clinically significant features, and precision medicine. In these applications, embedding and feature selection on high-dimensional multi-omics data are clinically necessary. Recently, deep learning algorithms have become the most promising methods for cancer multi-omics integration analysis, owing to their capability of capturing nonlinear relationships. Developing effective deep learning architectures for cancer multi-omics embedding and feature selection remains challenging, given the high dimensionality and heterogeneity of the data. In this paper, we propose a novel two-phase deep learning model, AVBAE-MODFR, for pan-cancer classification. AVBAE-MODFR performs embedding with a multi2multi autoencoder based on the adversarial variational Bayes method (AVBAE) and performs feature selection with a dual-net-based feature ranking method (MODFR). AVBAE pre-trains the network parameters, which improves classification performance and enhances the stability of feature ranking in MODFR. First, AVBAE learns high-quality representations across multiple omics features for unsupervised pan-cancer classification; we design an efficient discriminator architecture to distinguish the latent distributions when updating the forward variational parameters. Second, we propose MODFR, which evaluates the importance of multi-omics features for feature selection by training a designed multi2one selector network; its evaluation approach, based on the average gradient over random mask subsets, avoids the bias caused by input feature drift. We conduct experiments on the TCGA pan-cancer dataset and compare each phase against four state-of-the-art methods. The results show the superiority of AVBAE-MODFR over the state-of-the-art methods.


Subject(s)
Deep Learning, Neoplasms, Humans, Neoplasms/classification, Neoplasms/metabolism, Neoplasms/genetics, Algorithms, Genomics, Multiomics
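
The average-gradient-over-random-masks idea in entry 1 can be illustrated with a short sketch. This is not the authors' implementation: it shows one plausible reading of MODFR's feature scoring, where the loss gradient with respect to the inputs is averaged over random mask subsets so that no single fixed input distribution biases the ranking. SelectorNet, rank_features, and parameters such as mask_rate are hypothetical stand-ins.

# Illustrative sketch only -- not the authors' code. Assumes PyTorch.
import torch
import torch.nn as nn

class SelectorNet(nn.Module):
    """Toy multi2one selector: concatenated multi-omics features -> class logits."""
    def __init__(self, in_dim, n_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, x):
        return self.net(x)

def rank_features(model, x, y, n_subsets=32, mask_rate=0.5):
    """Score features by their average input gradient over random mask subsets."""
    loss_fn = nn.CrossEntropyLoss()
    scores = torch.zeros(x.shape[1])
    counts = torch.zeros(x.shape[1])
    for _ in range(n_subsets):
        mask = (torch.rand(x.shape[1]) > mask_rate).float()  # random feature subset
        xm = (x * mask).detach().requires_grad_(True)        # masked input as grad leaf
        loss = loss_fn(model(xm), y)
        (grad,) = torch.autograd.grad(loss, xm)
        scores += grad.abs().mean(dim=0) * mask              # count only present features
        counts += mask
    return scores / counts.clamp(min=1)                      # higher score = more important

# Hypothetical usage, e.g. importance = rank_features(SelectorNet(2000, 33), X, y)
# followed by top = importance.topk(50).indices to pick a feature subset.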
2.
Comput Biol Med; 166: 107531, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37806056

ABSTRACT

Medical images of different modalities have different semantic characteristics. Medical image fusion, which aims to improve the visual quality and practical value of such images, has become important in medical diagnostics. However, previous methods do not fully represent semantic and visual features, and their generalization ability needs improvement; moreover, a brightness-stacking artifact easily occurs during fusion. In this paper, we propose an asymmetric dual deep network with a sharing mechanism (ADDNS) for medical image fusion. In our asymmetric model-level dual framework, the primal Unet learns to fuse medical images of different modalities into a single fused image, while the dual Unet learns to invert the fusion task for multi-modal image reconstruction. This asymmetric network design not only enables ADDNS to fully extract semantic and visual features but also reduces model complexity and accelerates convergence. Furthermore, the sharing mechanism, designed according to task relevance, further reduces model complexity and improves the generalization ability of the model. Finally, we use intermediate supervision to minimize the difference between the fused image and the source images, preventing the brightness-stacking problem. Experimental results show that our algorithm achieves better quantitative and qualitative results than several state-of-the-art methods.
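
A minimal sketch of the primal/dual idea in entry 2, under stated assumptions: plain convolutional stacks stand in for the two Unet branches, and the intermediate-supervision term simply ties the fused image to both sources, one way to discourage brightness stacking. PrimalFuse, DualSplit, the L1 losses, and the 0.5 weight are all hypothetical, not the ADDNS release.

# Illustrative sketch only -- not the ADDNS release. Assumes PyTorch.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class PrimalFuse(nn.Module):
    """Stand-in for the primal Unet: two 1-channel modalities -> one fused image."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(2, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, a, b):
        return self.body(torch.cat([a, b], dim=1))

class DualSplit(nn.Module):
    """Stand-in for the dual Unet: fused image -> reconstructed modalities."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(conv_block(1, 32), conv_block(32, 32),
                                  nn.Conv2d(32, 2, 3, padding=1))

    def forward(self, f):
        return self.body(f).chunk(2, dim=1)

primal, dual = PrimalFuse(), DualSplit()
a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)  # toy modality pair
fused = primal(a, b)                                       # primal task: fuse
ra, rb = dual(fused)                                       # dual task: invert fusion
l1 = nn.L1Loss()
recon_loss = l1(ra, a) + l1(rb, b)                         # reconstruction objective
supervise = 0.5 * (l1(fused, a) + l1(fused, b))            # intermediate supervision
loss = recon_loss + supervise
loss.backward()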
