Results 1 - 2 of 2
1.
IEEE Trans Med Imaging; PP, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38875087

ABSTRACT

Foundation models pretrained on large-scale datasets via self-supervised learning demonstrate exceptional versatility across a wide range of tasks. Because medical data are heterogeneous and hard to collect, this approach is especially beneficial for medical image analysis and neuroscience research, as it supports broad downstream tasks without requiring numerous costly annotations. However, brain network foundation models have received limited investigation, which restricts their adaptability and generalizability for broad neuroscience studies. In this study, we aim to bridge this gap. In particular, (1) we curated a comprehensive dataset by collating images from 30 datasets, comprising 70,781 samples from 46,686 participants. Moreover, we introduce pseudo-functional connectivity (pFC), which generates millions of augmented brain networks by randomly dropping timepoints of the BOLD signal. (2) We propose the BrainMass framework for brain network self-supervised learning via mask modeling and feature alignment. BrainMass employs Mask-ROI Modeling (MRM) to bolster intra-network dependencies and regional specificity. Furthermore, a Latent Representation Alignment (LRA) module regularizes augmented brain networks of the same participant, which share similar topological properties, to yield similar latent representations by aligning their latent embeddings. Extensive experiments on eight internal tasks and seven external brain disorder diagnosis tasks show BrainMass's superior performance, highlighting its significant generalizability and adaptability. Moreover, BrainMass demonstrates powerful few-/zero-shot learning abilities and exhibits meaningful interpretations for various diseases, showcasing its potential for clinical applications.
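The pFC augmentation described in the abstract — randomly dropping timepoints of the BOLD signal and recomputing the connectivity matrix — can be sketched as follows. This is a minimal NumPy illustration under our own assumptions, not the paper's code: the function name, the `drop_ratio` parameter, and the (timepoints × ROIs) array layout are all invented here for clarity.

```python
import numpy as np

def pseudo_functional_connectivity(bold, drop_ratio=0.2, rng=None):
    """Sketch of a pFC-style augmentation (hypothetical implementation).

    bold       : array of shape (timepoints, n_rois), one BOLD series per ROI.
    drop_ratio : fraction of timepoints to drop at random.
    Returns an (n_rois, n_rois) correlation matrix computed on the
    subsampled time series, i.e. one augmented brain network.
    """
    rng = np.random.default_rng() if rng is None else rng
    t = bold.shape[0]
    # Randomly keep (1 - drop_ratio) of the timepoints, preserving order.
    keep = np.sort(rng.choice(t, size=int(t * (1 - drop_ratio)), replace=False))
    sub = bold[keep]
    # Pearson correlation between ROI time series -> augmented network.
    return np.corrcoef(sub.T)
```

Repeated calls with different random draws yield distinct augmented networks from a single scan, which is how the abstract's "millions of augmented brain networks" could arise from tens of thousands of samples.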

2.
IEEE Trans Pattern Anal Mach Intell; 44(10): 6752-6766, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34310290

ABSTRACT

Generative Adversarial Networks (GANs) are formulated as minimax games in which a generative network attempts to approach the real data distribution through adversarial learning against a discriminator that learns to distinguish generated samples from real ones; the intrinsic complexity of this problem poses challenges to performance and robustness. In this work, we aim to boost model learning from the perspective of network architectures by incorporating recent progress in automated architecture search into GANs. Specifically, we propose a fully differentiable search framework, dubbed alphaGAN, in which the search process is formalized as solving a bi-level minimax optimization problem. The outer-level objective seeks an optimal network architecture approaching a pure Nash equilibrium, conditioned on the network parameters of the generator and discriminator, which are optimized with a traditional adversarial loss at the inner level. The entire optimization follows a first-order approach, alternately minimizing the two-level objective in a fully differentiable manner, which enables a suitable architecture to be obtained efficiently from an enormous search space. Extensive experiments on the CIFAR-10 and STL-10 datasets show that our algorithm can obtain high-performing architectures in only 3 GPU hours on a single GPU, within a search space comprising approximately 2×10¹¹ possible configurations. We further validate the method on the state-of-the-art network StyleGAN2 and push the Fréchet Inception Distance (FID) score further, achieving 1.94 on CelebA, 2.86 on LSUN-church, and 2.75 on FFHQ, relative improvements of 3%-26% over the baseline architecture. We also provide a comprehensive analysis of the behavior of the search process and the properties of the searched architectures, which should benefit further research on architectures for generative models. Code and models are available at https://github.com/yuesongtian/AlphaGAN.
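The first-order alternating scheme described above — inner steps fit the network weights for the current architecture, outer steps move the architecture parameters on the outer objective — can be illustrated with a toy bi-level problem. This is our own sketch, not the alphaGAN implementation: simple quadratic losses stand in for the adversarial and validation objectives, a scalar `w` for the generator/discriminator weights, and a scalar `alpha` for the continuous architecture parameters.

```python
def alternate_bilevel(steps=500, lr=0.1):
    """Toy first-order alternating bi-level optimization (illustrative only).

    Inner problem : min_w (w - alpha)^2   -> weights track the architecture.
    Outer problem : min_alpha (w - 1)^2   -> architecture steered so the
                                             fitted weights hit the target 1.
    Alternating gradient steps approximate the nested optimization without
    unrolling the inner solve, as in first-order schemes.
    """
    w, alpha = 0.0, 3.0
    for _ in range(steps):
        # Inner step: gradient of (w - alpha)^2 with respect to w.
        w -= lr * 2.0 * (w - alpha)
        # Outer step: gradient of the outer loss (w - 1)^2, taken through
        # the current w as a stand-in for the inner solution.
        alpha -= lr * 2.0 * (w - 1.0)
    return w, alpha
```

Both variables converge to 1.0, the bi-level optimum of the toy problem; in alphaGAN the same alternation is carried out over full generator/discriminator weights and softmax-relaxed architecture parameters.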
