Results 1 - 2 of 2
1.
Sci Rep ; 14(1): 17064, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39048590

ABSTRACT

Deep learning (DL) has shown potential to provide powerful representations of bulk RNA-seq data in cancer research. However, there is no consensus regarding how the design choices of DL approaches, including the model architecture, the training methodology, and the hyperparameters, affect the performance of the learned representation. To address this problem, we evaluate the performance of various design choices of DL representation learning methods using the TCGA and DepMap pan-cancer datasets and assess their predictive power for survival and gene essentiality predictions. We demonstrate that baseline methods achieve comparable or superior performance to more complex models on survival prediction tasks. DL representation methods, however, are the most effective at predicting the gene essentiality of cell lines. We show that auto-encoders (AE) are consistently improved by techniques such as masking and multi-head training. Our results suggest that the impact of DL representations and of pretraining is highly task- and architecture-dependent, highlighting the need for adopting rigorous evaluation guidelines. These guidelines for robust evaluation are implemented in a pipeline made available to the research community.


Subjects
Deep Learning, Essential Genes, RNA-Seq, Humans, RNA-Seq/methods, Neoplasms/genetics, Neoplasms/mortality, Computational Biology/methods
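The first study compares auto-encoder variants, including input masking and multi-head training, for learning bulk RNA-seq representations. As a rough illustration of the masking idea only, below is a minimal PyTorch sketch of a denoising auto-encoder that corrupts a random subset of genes and reconstructs the full expression vector; the dimensions, masking rate, and training setup are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class MaskedAutoencoder(nn.Module):
    """Denoising auto-encoder: a random fraction of input genes is zeroed out
    and the model is trained to reconstruct the uncorrupted expression vector."""
    def __init__(self, n_genes=20000, latent_dim=128, mask_rate=0.2):
        super().__init__()
        self.mask_rate = mask_rate  # assumed masking rate, not from the paper
        self.encoder = nn.Sequential(
            nn.Linear(n_genes, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, n_genes),
        )

    def forward(self, x):
        # Mask genes at the input only; the loss is computed against the full vector.
        mask = (torch.rand_like(x) > self.mask_rate).float()
        z = self.encoder(x * mask)
        return self.decoder(z), z

# One training step on a placeholder batch of (log-normalised) expression profiles.
model = MaskedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
x = torch.randn(32, 20000)
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The latent vector would then feed a downstream survival or gene-essentiality predictor, which is the evaluation setting described in the abstract.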
2.
Eur J Cancer ; 174: 90-98, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35985252

ABSTRACT

BACKGROUND: The need for developing new biomarkers is increasing with the emergence of many targeted therapies. Artificial Intelligence (AI) algorithms have shown great promise in medical imaging for building predictive models. We developed a prognostic model for solid tumour patients using AI on multimodal data. PATIENTS AND METHODS: Our retrospective study included examinations of patients with seven different cancer types performed between 2003 and 2017 in 17 different hospitals. Radiologists annotated all metastases on baseline computed tomography (CT) and ultrasound (US) images. Imaging features were extracted using AI models and used together with patient and treatment metadata. A Cox regression was fitted to predict prognosis. Performance was assessed on a left-out test set with 1000 bootstraps. RESULTS: The model was built on 436 patients and tested on 196 patients (mean age 59, IQR: 51-6, 411 men out of 616 patients). In total, 1147 US images were annotated with lesion delineations, and 632 thorax-abdomen-pelvis CTs (301,975 slices in total) were fully annotated with a total of 9516 lesions. The developed model reaches an average concordance index of 0.71 (0.67-0.76, 95% CI). Using the median predicted risk as a threshold value, the model is able to significantly (log-rank test P value < 0.001) separate high-risk from low-risk patients (respective median OS of 11 and 31 months) with a hazard ratio of 3.5 (2.4-5.2, 95% CI). CONCLUSION: AI was able to extract prognostic features from imaging data and, combined with clinical data, allowed accurate stratification of patient prognosis.


Subjects
Artificial Intelligence, Neoplasms, Biomarkers, Humans, Male, Middle Aged, Neoplasms/diagnostic imaging, Retrospective Studies, X-Ray Computed Tomography/methods
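The second study fits a Cox proportional-hazards model on AI-extracted imaging features plus clinical metadata, reports a concordance index, and stratifies patients by the median predicted risk using a log-rank test. Below is a small sketch of that evaluation recipe using the lifelines library; the synthetic data frame and column names are placeholders, not the study's actual variables, and the paper computes its metrics on a held-out test set with bootstrap confidence intervals.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

# Placeholder cohort: imaging-derived features plus a clinical covariate,
# overall-survival time in months, and an event indicator.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "imaging_feat_1": rng.normal(size=n),
    "imaging_feat_2": rng.normal(size=n),
    "age": rng.integers(40, 80, size=n),
    "os_months": rng.exponential(24, size=n),
    "event": rng.integers(0, 2, size=n),
})

# Fit a Cox proportional-hazards model on the combined features.
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="event")

# Concordance index: higher scores should correspond to longer survival,
# hence the negated partial hazard.
risk = cph.predict_partial_hazard(df)
c_index = concordance_index(df["os_months"], -risk, df["event"])

# Stratify by the median predicted risk and compare survival with a log-rank test.
high = risk >= risk.median()
result = logrank_test(
    df.loc[high, "os_months"], df.loc[~high, "os_months"],
    event_observed_A=df.loc[high, "event"],
    event_observed_B=df.loc[~high, "event"],
)
print(f"C-index: {c_index:.2f}, log-rank p = {result.p_value:.3g}")
```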