Results 1 - 2 of 2
1.
Sci Rep; 14(1): 6779, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514696

ABSTRACT

The heterogeneous pathogenesis and treatment response of non-small cell lung cancer (NSCLC) have led clinical treatment decisions to be guided by NSCLC subtype, with lung adenocarcinoma and lung squamous cell carcinoma being the most common subtypes. While histology-based subtyping remains challenging, NSCLC subtypes have been found to be distinct at the transcriptomic level. However, unlike genomic alterations, gene expression is generally not assessed in clinical routine. Since subtyping of NSCLC from mutational data alone has remained elusive, we aimed to develop a neural network model that simultaneously learns from adenocarcinoma and squamous cell carcinoma samples of other tissue types and is regularized by a neural network model trained on gene expression data. While substructures of the expression-based manifold were captured in the mutation-based manifold, NSCLC classification accuracy did not improve significantly. However, performance increased when inconclusive samples were rejected using an ensemble-based approach that captures prediction uncertainty. Importantly, SHAP analysis of misclassified samples identified co-occurring mutations indicative of both NSCLC subtypes, calling into question whether the current NSCLC subtype classification adequately represents the inherent mutational heterogeneity. Since our model captures mutational patterns linked to clinical heterogeneity, we anticipate it to be suited as a foundational model of genomic data for clinically relevant prognostic or predictive downstream tasks.
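The paper's exact rejection mechanism is not detailed in the abstract; as a minimal sketch of the general idea (reject samples on which ensemble members disagree), the following assumes a binary adenocarcinoma-vs-squamous setting and uses disagreement (standard deviation across members) as the uncertainty proxy. The function name and threshold are illustrative, not the authors' configuration.

```python
import numpy as np

def ensemble_predict_with_rejection(member_probs, threshold=0.2):
    """Classify samples, rejecting those the ensemble disagrees on.

    member_probs: shape (n_members, n_samples), each member's predicted
    probability of the squamous-cell class. Returns (labels, accepted):
    labels are 0/1, with -1 marking rejected (inconclusive) samples.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    mean_p = member_probs.mean(axis=0)
    # Spread across ensemble members serves as the uncertainty estimate.
    std_p = member_probs.std(axis=0)
    accepted = std_p <= threshold
    labels = (mean_p >= 0.5).astype(int)
    labels[~accepted] = -1
    return labels, accepted
```

Rejecting high-disagreement samples trades coverage for accuracy: predictions are only issued where the ensemble is internally consistent.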


Subjects
Carcinoma, Non-Small-Cell Lung; Carcinoma, Squamous Cell; Lung Neoplasms; Humans; Carcinoma, Non-Small-Cell Lung/pathology; Lung Neoplasms/pathology; Uncertainty; Carcinoma, Squamous Cell/pathology; Mutation
2.
Front Neuroinform; 16: 844667, 2022.
Article in English | MEDLINE | ID: mdl-35620278

ABSTRACT

Biometrics is the process of measuring and analyzing human characteristics to verify a given person's identity. Most real-world applications rely on unique human traits such as fingerprints or the iris. Among such characteristics, the electroencephalogram (EEG) stands out given its high inter-subject variability. Recent advances in deep learning and a deeper understanding of EEG processing methods have led to the development of models that accurately discriminate between individuals. However, it is still uncertain how much EEG data is required to train such models. This work aims to determine the minimal amount of training data required to develop a robust EEG-based biometric model (+95% and +99% testing accuracy) per subject in a task-dependent setting. This goal is achieved by performing and analyzing 11,780 combinations of training set sizes, neural network-based learning techniques of increasing complexity, and feature extraction methods on the affective EEG-based DEAP dataset. The findings suggest that if Power Spectral Density or Wavelet Energy features are extracted from the artifact-free EEG signal, 1 and 3 s of data per subject are enough to achieve +95% and +99% accuracy, respectively. These findings contribute to the body of knowledge by paving the way for the application of EEG to real-world ecological biometric applications and by demonstrating methods for determining the minimal amount of data required for such applications.
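The abstract names Power Spectral Density features as one of the extraction methods; as a minimal sketch of that idea (not the paper's pipeline), the following computes per-band power from a periodogram PSD of a 1-D EEG segment. The band cutoffs (theta/alpha/beta) and the function name are illustrative assumptions.

```python
import numpy as np

def band_power_features(signal, fs, bands=((4, 8), (8, 13), (13, 30))):
    """Per-band power features from a 1-D EEG segment.

    Uses a plain periodogram as the PSD estimate; bands default to
    theta (4-8 Hz), alpha (8-13 Hz), and beta (13-30 Hz) ranges.
    Returns one summed-power value per band.
    """
    signal = np.asarray(signal, dtype=float)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * n)
    return np.array([psd[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])
```

For example, one second of a pure 10 Hz oscillation sampled at 128 Hz yields a feature vector dominated by the alpha band. In practice a Welch estimate over overlapping windows would give a lower-variance PSD than this single periodogram.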
