Results 1 - 2 of 2
1.
J Neurosci Methods ; 368: 109475, 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-34995648

ABSTRACT

BACKGROUND: Predicting the evolution of the brain network, also called the connectome, by foreseeing changes in the connectivity weights linking pairs of anatomical regions makes it possible to spot connectivity-related neurological disorders at earlier stages and to detect the development of potential connectomic anomalies. Remarkably, such a challenging prediction problem remains largely unexplored in the predictive connectomics literature. Machine learning (ML) methods have proven their predictive abilities in a wide variety of computer vision problems; however, ML techniques specifically tailored to predicting the brain connectivity evolution trajectory from a single timepoint are almost absent.

NEW METHOD: To fill this gap, we organized a Kaggle competition in which 20 competing teams designed advanced machine learning pipelines for predicting brain connectivity evolution from a single timepoint. The teams developed their ML pipelines with a combination of data pre-processing, dimensionality reduction, and learning methods. Each ML framework takes as input a brain connectivity matrix observed at a baseline timepoint t0 and outputs the predicted brain connectivity map at a follow-up timepoint t1. The longitudinal OASIS-2 dataset was used for model training and evaluation. Both a random data split and a 5-fold cross-validation strategy were used for ranking the pipelines and evaluating the generalizability and scalability of each competing entry.

RESULTS: Using an inclusive approach, we ranked the methods based on two complementary evaluation metrics (mean absolute error (MAE) and Pearson correlation coefficient (PCC)) and their performance under different training and testing data perturbation strategies (single random split and cross-validation). The final rank was calculated using the rank product for each competing team across all evaluation measures and validation strategies. Furthermore, we report statistical significance values for each proposed pipeline.
CONCLUSION: In support of open science, the 20 developed ML pipelines, along with the connectomic dataset, are made available on GitHub (https://github.com/basiralab/Kaggle-BrainNetPrediction-Toolbox). The outcomes of this competition are anticipated to drive the further development of predictive models that can foresee the evolution of brain connectivity over time, as well as of other types of networks (e.g., genetic networks).


Subjects
Connectome , Machine Learning , Brain/diagnostic imaging
2.
IEEE Trans Neural Netw Learn Syst ; 33(6): 2313-2323, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34874873

ABSTRACT

Anomalies are ubiquitous in all scientific fields and can express an unexpected event due to incomplete knowledge about the data distribution or an unknown process that suddenly comes into play and distorts the observations. Usually, owing to such events' rarity, scientists train deep learning (DL) models for the anomaly detection (AD) task on "normal" data only, i.e., nonanomalous samples, thus letting the neural network infer the distribution underlying the input data. In this context, we propose a novel framework, named multilayer one-class classification (MOCCA), to train and test DL models on the AD task; specifically, we apply our approach to autoencoders. A key novelty in our work stems from the explicit optimization of the intermediate representations for the task at hand. Unlike commonly used approaches that treat a neural network as a single computational block, i.e., using the output of the last layer only, MOCCA explicitly leverages the multilayer structure of deep architectures: each layer's feature space is optimized for AD during training, while in the test phase the deep representations extracted from the trained layers are combined to detect anomalies. With MOCCA, we split the training process into two steps. First, the autoencoder is trained on the reconstruction task only. Then, we retain only the encoder, tasked with minimizing the L2 distance between its output representation and a reference point, the anomaly-free training data centroid, at each considered layer. At inference time, we combine the deep features extracted at the various trained layers of the encoder to detect anomalies. To assess the performance of models trained with MOCCA, we conduct extensive experiments on publicly available datasets, namely CIFAR10, MVTec AD, and ShanghaiTech, and show that our proposed method reaches performance comparable or superior to state-of-the-art approaches in the literature.
Finally, we provide a model analysis to give insights into the benefits of our training procedure.
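The inference-time scoring described above can be sketched in a few lines of NumPy. This is not the paper's trained autoencoder: the "encoder" here is a stack of fixed random projections standing in for trained layers, and the data are synthetic. What it does show is the MOCCA-style combination: per-layer centroids computed from anomaly-free data only, and an anomaly score that sums the L2 distance to each layer's centroid.

```python
import numpy as np

rng = np.random.default_rng(1)

in_dim, layer_dims = 100, [64, 32, 16]

# Stand-in for a trained encoder: fixed random projections with a
# tanh nonlinearity play the role of each layer's deep representation.
weights, d_prev = [], in_dim
for d in layer_dims:
    weights.append(rng.standard_normal((d_prev, d)) / np.sqrt(d_prev))
    d_prev = d

def layer_features(X):
    """Return the representation produced at every layer of the stack."""
    feats, h = [], X
    for W in weights:
        h = np.tanh(h @ W)
        feats.append(h)
    return feats

# One-class setting: only "normal" (nonanomalous) samples are available
# for training, and they define the per-layer anomaly-free centroids.
X_train = rng.standard_normal((500, in_dim))
centroids = [f.mean(axis=0) for f in layer_features(X_train)]

def mocca_score(X):
    # MOCCA-style score: combine the layers by summing each sample's
    # L2 distance to the anomaly-free centroid of every considered layer.
    return sum(np.linalg.norm(f - c, axis=1)
               for f, c in zip(layer_features(X), centroids))

X_normal = rng.standard_normal((200, in_dim))        # same distribution
X_anom = rng.standard_normal((200, in_dim)) + 2.0    # shifted distribution
print(mocca_score(X_normal).mean(), mocca_score(X_anom).mean())
```

Samples drawn from the shifted distribution land farther from the per-layer centroids and thus receive higher scores; thresholding this score yields the anomaly decision. The paper's two-step training (reconstruction, then per-layer centroid regression) would replace the random projections with learned encoder layers.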
