1.
iScience ; 27(6): 109838, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38799555

ABSTRACT

With the continuous integration and development of AI and the natural sciences, we have been exploring a computational analysis framework for digital photonic devices. Here, we have overcome the challenge of limited datasets through generative adversarial networks (GANs) and transfer learning, providing AI feedback that aligns with human knowledge systems. Furthermore, we have introduced knowledge from disciplines such as image denoising, multi-agent modeling of Physarum polycephalum, percolation theory, and wave function collapse algorithms to analyze this new design system. This represents an accomplishment unattainable within the framework of classical photonics theory and significantly improves the performance of the designed devices. Notably, we present theoretical analyses of the drastic changes in device performance and of the enhancement of device robustness, neither of which has been reported in previous research. The proposed concept of meta-photonics transcends conventional disciplinary silos, demonstrating the transformative potential of interdisciplinary fusion.
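Of the borrowed tools named above, percolation theory is the easiest to make concrete. A toy sketch (hypothetical, not the authors' code): a site-percolation check that asks whether the "open" pixels of a binary device pattern form a connected path from top to bottom, the basic connectivity question percolation theory studies.

```python
from collections import deque

def percolates(grid):
    """Return True if open sites (value 1) connect the top row
    to the bottom row via 4-neighbour adjacency (site percolation)."""
    rows, cols = len(grid), len(grid[0])
    # Seed a breadth-first search from every open site in the top row.
    queue = deque((0, c) for c in range(cols) if grid[0][c] == 1)
    seen = set(queue)
    while queue:
        r, c = queue.popleft()
        if r == rows - 1:          # reached the bottom row
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 1 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

open_column = [[0, 1, 0],
               [0, 1, 0],
               [0, 1, 0]]
blocked     = [[1, 1, 1],
               [0, 0, 0],
               [1, 1, 1]]
print(percolates(open_column))  # True: a vertical open channel exists
print(percolates(blocked))      # False: the middle row blocks every path
```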

2.
Genome Med ; 15(1): 93, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37936230

ABSTRACT

BACKGROUND: Early detection of hepatocellular carcinoma (HCC) is important to improve patient prognosis and survival. Methylation sequencing combined with neural networks to identify cell-free DNA (cfDNA) carrying aberrant methylation offers an appealing, non-invasive approach for HCC detection. However, limitations in traditional methylation detection technologies and models may impede their performance in read-level detection of HCC. METHODS: We developed a low-DNA-damage, high-fidelity methylation detection method called No End-repair Enzymatic Methyl-seq (NEEM-seq). We further developed a read-level neural detection model called DeepTrace that can better identify HCC-derived sequencing reads through a pre-trained and fine-tuned neural network. After pre-training on 11 million reads from NEEM-seq, DeepTrace was fine-tuned using 1.2 million HCC-derived reads from tumor tissue DNA after noise reduction and 2.7 million non-tumor reads from non-tumor cfDNA. We validated the model using whole-genome NEEM-seq data of cfDNA from 130 individuals at around 1.6× depth. RESULTS: NEEM-seq overcomes the drawbacks of traditional enzymatic methylation sequencing methods by avoiding the introduction of unmethylation errors in cfDNA. DeepTrace outperformed other models in identifying HCC-derived reads and detecting HCC individuals. On the whole-genome NEEM-seq data of cfDNA, our model showed high accuracy of 96.2%, sensitivity of 93.6%, and specificity of 98.5% in the validation cohort of 62 HCC patients, 48 liver disease patients, and 20 healthy individuals. In early-stage HCC (BCLC 0/A and TNM I), the sensitivity of DeepTrace was 89.6% and 89.5%, respectively, outperforming alpha-fetoprotein (AFP), which showed much lower sensitivity in both BCLC 0/A (50.5%) and TNM I (44.7%).
CONCLUSIONS: By combining high-fidelity methylation data from NEEM-seq with the DeepTrace model, our method has great potential for HCC early detection with high sensitivity and specificity, making it potentially suitable for clinical applications. DeepTrace: https://github.com/Bamrock/DeepTrace.
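The accuracy, sensitivity, and specificity figures above follow directly from confusion-matrix counts. A minimal sketch with hypothetical counts (not the paper's actual confusion matrix):

```python
def binary_metrics(tp, fn, tn, fp):
    """Accuracy, sensitivity (recall on positives) and specificity
    (recall on negatives) from binary confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 130-person cohort (62 positives, 68 negatives):
# 58 of 62 positives detected, 67 of 68 negatives correctly ruled out.
acc, sens, spec = binary_metrics(tp=58, fn=4, tn=67, fp=1)
print(f"accuracy={acc:.3f} sensitivity={sens:.3f} specificity={spec:.3f}")
```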


Subjects
Carcinoma, Hepatocellular; Cell-Free Nucleic Acids; Liver Neoplasms; Humans; Carcinoma, Hepatocellular/diagnosis; Carcinoma, Hepatocellular/genetics; Liver Neoplasms/diagnosis; Liver Neoplasms/genetics; Cell-Free Nucleic Acids/genetics; Biomarkers, Tumor/genetics; DNA, Neoplasm; DNA Methylation; Neural Networks, Computer
3.
IEEE Trans Neural Netw Learn Syst ; 31(11): 4688-4698, 2020 11.
Article in English | MEDLINE | ID: mdl-31902775

ABSTRACT

Neural machine translation (NMT) heavily relies on context vectors generated by an attention network to predict target words. In practice, we observe that the context vectors for different target words are quite similar to one another and translations with such nondiscriminatory context vectors tend to be degenerative. We ascribe this similarity to the invariant source representations that lack dynamics across decoding steps. In this article, we propose a novel gated recurrent unit (GRU)-gated attention model (GAtt) for NMT. By updating the source representations with the previous decoder state via a GRU, GAtt enables translation-sensitive source representations that then contribute to discriminative context vectors. We further propose a variant of GAtt by swapping the input order of the source representations and the previous decoder state to the GRU. Experiments on the NIST Chinese-English, WMT14 English-German, and WMT17 English-German translation tasks show that the two GAtt models achieve significant improvements over the vanilla attention-based NMT. Further analyses on the attention weights and context vectors demonstrate the effectiveness of GAtt in enhancing the discriminating capacity of representations and handling the challenging issue of overtranslation.
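The core idea, refreshing the source representations with the previous decoder state through a GRU before attention is computed, can be sketched in miniature. Everything below (tiny dimensions, random weights, the plain dot-product scorer) is my simplification, not the paper's parameterisation:

```python
import math
import random

rng = random.Random(0)
D = 4  # toy hidden size

def mat(n, m):   # random weight matrix for the sketch
    return [[rng.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)]

def mv(W, x):    # matrix-vector product
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def add(a, b): return [x + y for x, y in zip(a, b)]
def mul(a, b): return [x * y for x, y in zip(a, b)]
def sigmoid(v): return [1 / (1 + math.exp(-x)) for x in v]
def tanh_v(v):  return [math.tanh(x) for x in v]

Wz, Uz, Wr, Ur, Wh, Uh = (mat(D, D) for _ in range(6))

def gru_update(src, dec_state):
    """One GRU step: update a source vector with the previous decoder
    state, making the source representation decoding-step sensitive."""
    z = sigmoid(add(mv(Wz, dec_state), mv(Uz, src)))
    r = sigmoid(add(mv(Wr, dec_state), mv(Ur, src)))
    h = tanh_v(add(mv(Wh, dec_state), mv(Uh, mul(r, src))))
    return [(1 - zi) * si + zi * hi for zi, si, hi in zip(z, src, h)]

def context_vector(sources, dec_state):
    """Dot-product attention over the GRU-updated source vectors."""
    updated = [gru_update(s, dec_state) for s in sources]
    scores = [sum(u * d for u, d in zip(vec, dec_state)) for vec in updated]
    exps = [math.exp(s - max(scores)) for s in scores]
    alphas = [e / sum(exps) for e in exps]
    ctx = [sum(a * vec[k] for a, vec in zip(alphas, updated)) for k in range(D)]
    return ctx, alphas

sources = [[rng.uniform(-1, 1) for _ in range(D)] for _ in range(3)]
dec     = [rng.uniform(-1, 1) for _ in range(D)]
ctx, alphas = context_vector(sources, dec)
```

Because the updated source vectors differ at every decoding step, the resulting context vectors are no longer near-identical across target words, which is exactly the discriminativeness the abstract aims for.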

4.
IEEE Trans Cybern ; 50(2): 503-513, 2020 Feb.
Article in English | MEDLINE | ID: mdl-30273171

ABSTRACT

Exploiting semantic interactions between source and target linguistic items at different levels of granularity is crucial for generating compact vector representations of bilingual phrases. To achieve this, we propose alignment-supervised bidimensional attention-based recursive autoencoders (ABattRAE) in this paper. ABattRAE first employs two recursive autoencoders to recover the hierarchical tree structures of a bilingual phrase pair, treating the subphrase covered by each node on a tree as a linguistic item. Unlike previous methods, ABattRAE introduces a bidimensional attention network to measure the semantic matching degree between linguistic items of different languages, which enables our model to integrate information from all nodes by dynamically assigning varying weights to their corresponding embeddings. To ensure the accuracy of the generated attention weights, ABattRAE incorporates word alignments as supervision signals to guide the learning procedure. Using standard stochastic gradient descent, we train our model end to end, maximizing the semantic similarity of translation equivalents while minimizing that of non-translation pairs. Finally, we incorporate a semantic feature based on the learned bilingual phrase representations into a machine translation system for better translation selection. Experimental results on NIST Chinese-English and WMT English-German test sets show that our model achieves substantial improvements of up to 2.86 and 1.09 BLEU points over the baseline, respectively. Extensive in-depth analyses demonstrate the superiority of our model in learning bilingual phrase embeddings.
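A bidimensional attention network of this flavour can be sketched as a score matrix over (source node, target node) pairs, pooled along each dimension to obtain one weight per node. The sum-pooling and dot-product scoring below are illustrative assumptions, not ABattRAE's exact formulation:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def bidimensional_attention(src_nodes, tgt_nodes):
    """Score every (source node, target node) pair, pool the score
    matrix along each dimension to get per-node weights, and return
    attention-weighted phrase embeddings for both sides."""
    B = [[sum(a * b for a, b in zip(s, t)) for t in tgt_nodes]
         for s in src_nodes]
    src_w = softmax([sum(row) for row in B])        # pool over targets
    tgt_w = softmax([sum(col) for col in zip(*B)])  # pool over sources
    dim = len(src_nodes[0])
    src_emb = [sum(w * n[k] for w, n in zip(src_w, src_nodes))
               for k in range(dim)]
    tgt_emb = [sum(w * n[k] for w, n in zip(tgt_w, tgt_nodes))
               for k in range(dim)]
    return src_emb, tgt_emb, src_w, tgt_w

src = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]  # node embeddings of one tree
tgt = [[0.9, 0.1], [0.2, 0.8]]
s_emb, t_emb, s_w, t_w = bidimensional_attention(src, tgt)
```

In the paper, the alignment supervision would push these weights toward word-aligned node pairs rather than leaving them to the raw scores.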

5.
IEEE Trans Pattern Anal Mach Intell ; 42(1): 154-163, 2020 01.
Article in English | MEDLINE | ID: mdl-30334781

ABSTRACT

Deepening neural models has proven very successful in improving model capacity on complex learning tasks such as machine translation. Previous efforts on deep neural machine translation mainly focus on the encoder and the decoder, with little work on the attention mechanism. However, the attention mechanism is of vital importance for inducing the translation correspondence between languages, where shallow networks are relatively insufficient, especially when the encoder and decoder are deep. In this paper, we propose a deep attention model (DeepAtt). Based on low-level attention information, DeepAtt automatically determines what should be passed or suppressed from the corresponding encoder layer so as to make the distributed representation appropriate for high-level attention and translation. We conduct experiments on NIST Chinese-English, WMT English-German, and WMT English-French translation tasks, where, with five attention layers, DeepAtt yields very competitive performance against state-of-the-art results. We empirically find that with an adequate increase in attention layers, DeepAtt tends to produce more accurate attention weights. An in-depth analysis of the translation of important context words further reveals that DeepAtt significantly improves the faithfulness of system translations.
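The pass-or-suppress behaviour can be sketched as an elementwise gate between an encoder-layer vector and the context carried up from the lower attention layer. The toy gate below is an assumption for illustration, far simpler than DeepAtt's learned layers:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def deep_attention_layer(enc_vec, low_ctx, w_gate):
    """One layer in the DeepAtt spirit: a gate decides, per dimension,
    how much of the encoder-layer representation to pass through versus
    the context carried up from the lower attention layer.
    (Toy form: an elementwise gate from a weight vector.)"""
    gates = [sigmoid(w * (e + c)) for w, e, c in zip(w_gate, enc_vec, low_ctx)]
    # Convex combination per dimension: gate -> pass encoder info,
    # 1 - gate -> keep the low-level attention context.
    return [g * e + (1 - g) * c for g, e, c in zip(gates, enc_vec, low_ctx)]

enc = [0.5, -1.0, 2.0]   # hypothetical encoder-layer vector
low = [0.1, 0.3, -0.2]   # hypothetical low-level attention context
w   = [1.0, 0.5, 2.0]    # hypothetical gate weights
out = deep_attention_layer(enc, low, w)
```

Stacking several such layers is what lets the model refine its attention weights level by level, as the abstract describes.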
