Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-38889025

ABSTRACT

In the field of drug discovery, a proliferation of pre-trained models has surfaced, exhibiting exceptional performance across a variety of tasks. However, the extensive size of these models, coupled with the limited interpretative capabilities of current fine-tuning methods, impedes the integration of pre-trained models into the drug discovery process. This paper pushes the boundaries of pre-trained models in drug discovery by designing a novel fine-tuning paradigm known as the Head Feature Parallel Adapter (HFPA), which is highly interpretable, high-performing, and has fewer parameters than other widely used methods. Specifically, this approach enables the model to consider diverse information across representation subspaces concurrently by strategically using Adapters, which operate directly within the model's feature space. Our tactic freezes the backbone model and forces the subspaces handled by the various small Adapters to focus on exploring different atomic and chemical-bond knowledge, thus keeping the number of trainable parameters small and enhancing the interpretability of the model. Moreover, we furnish a comprehensive interpretability analysis, imparting valuable insights into the chemical domain. HFPA outperforms existing methods on seven physiology and toxicity tasks and achieves state-of-the-art results on three physical chemistry tasks. We also test ten additional molecular datasets, demonstrating the robustness and broad applicability of HFPA.
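The core fine-tuning idea described in this record (a frozen backbone with small parallel adapters, each responsible for one subspace of the feature vector) can be sketched roughly as follows. This is a minimal PyTorch illustration under assumed module names and dimensions, not the authors' HFPA implementation.

```python
# Hedged sketch: frozen backbone + head-wise parallel bottleneck adapters.
# All names, sizes and the stand-in backbone are illustrative assumptions.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.up = nn.Linear(dim // reduction, dim)
        self.act = nn.ReLU()

    def forward(self, x):
        # residual bottleneck adapter: x + up(act(down(x)))
        return x + self.up(self.act(self.down(x)))

class HeadParallelAdapters(nn.Module):
    """One small adapter per feature subspace ("head"); outputs are re-concatenated."""
    def __init__(self, feat_dim=256, num_heads=4):
        super().__init__()
        assert feat_dim % num_heads == 0
        self.num_heads = num_heads
        self.adapters = nn.ModuleList(
            [BottleneckAdapter(feat_dim // num_heads) for _ in range(num_heads)]
        )

    def forward(self, feats):                          # feats: (batch, feat_dim)
        chunks = feats.chunk(self.num_heads, dim=-1)   # split into subspaces
        return torch.cat([a(c) for a, c in zip(self.adapters, chunks)], dim=-1)

backbone = nn.Sequential(nn.Linear(128, 256), nn.ReLU())  # stand-in pre-trained encoder
for p in backbone.parameters():
    p.requires_grad = False                            # backbone stays frozen
adapters = HeadParallelAdapters(feat_dim=256, num_heads=4)
task_head = nn.Linear(256, 1)                          # e.g. a toxicity prediction head
out = task_head(adapters(backbone(torch.randn(8, 128))))  # only adapters + head train
```

Only the adapters and the task head carry gradients, which is what keeps the number of trainable parameters small relative to full fine-tuning.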

2.
IEEE Trans Med Imaging ; PP, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38941198

ABSTRACT

Whole Slide Images (WSIs) are paramount in the medical field, with extensive applications in disease diagnosis and treatment. Recently, many deep-learning methods have been used to classify WSIs. However, these methods are inadequate for accurately analyzing WSIs as they treat regions in WSIs as isolated entities and ignore contextual information. To address this challenge, we propose a novel Dual-Granularity Cooperative Diffusion Model (DCDiff) for the precise classification of WSIs. Specifically, we first design a cooperative forward and reverse diffusion strategy, utilizing fine-granularity and coarse-granularity to regulate each diffusion step and gradually improve context awareness. To exchange information between granularities, we propose a coupled U-Net for dual-granularity denoising, which efficiently integrates dual-granularity consistency information using the designed Fine- and Coarse-granularity Cooperative Aware (FCCA) model. Ultimately, the cooperative diffusion features extracted by DCDiff can achieve cross-sample perception from the reconstructed distribution of training samples. Experiments on three public WSI datasets show that the proposed method can achieve superior performance over state-of-the-art methods. The code is available at https://github.com/hemo0826/DCDiff.
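A toy-level sketch of the cooperative dual-granularity diffusion idea from this record: fine- and coarse-granularity features are noised with a shared schedule, and a coupled denoiser sees both granularities when predicting the noise for each. All shapes, the DDPM-style schedule, and the tiny MLP denoiser are illustrative assumptions, not the DCDiff architecture.

```python
# Hedged sketch of joint forward diffusion over two granularities and a
# coupled denoiser; placeholder dimensions, not the authors' model.
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)          # standard DDPM schedule

def q_sample(x0, t, noise):
    """Forward diffusion: x_t = sqrt(a_bar)*x0 + sqrt(1 - a_bar)*noise."""
    a = alpha_bar[t].sqrt().view(-1, 1)
    return a * x0 + (1 - alpha_bar[t]).sqrt().view(-1, 1) * noise

class CoupledDenoiser(nn.Module):
    """Predicts the noise of each granularity from the concatenation of both."""
    def __init__(self, fine_dim=64, coarse_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(fine_dim + coarse_dim + 1, 128), nn.SiLU(),
            nn.Linear(128, fine_dim + coarse_dim),
        )
        self.fine_dim = fine_dim

    def forward(self, fine_t, coarse_t, t):
        h = torch.cat([fine_t, coarse_t, t.float().view(-1, 1) / T], dim=-1)
        out = self.net(h)
        return out[:, :self.fine_dim], out[:, self.fine_dim:]

fine, coarse = torch.randn(4, 64), torch.randn(4, 32)  # region / context features
t = torch.randint(0, T, (4,))
nf, nc = torch.randn_like(fine), torch.randn_like(coarse)
model = CoupledDenoiser()
pf, pc = model(q_sample(fine, t, nf), q_sample(coarse, t, nc), t)
loss = ((pf - nf) ** 2).mean() + ((pc - nc) ** 2).mean()  # joint denoising loss
```

The point of the coupling is that each granularity's denoising step is conditioned on the other, which is how context information is exchanged during diffusion.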

3.
J Cancer Res Clin Oncol ; 149(11): 9229-9241, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37199837

ABSTRACT

PURPOSE: Breast cancer patients typically have good prognoses, with a 5-year survival rate of more than 90%, but when the disease metastasizes to lymph nodes or distant sites, the prognosis declines drastically. It is therefore essential for subsequent treatment and patient survival to identify tumor metastasis quickly and accurately. An artificial intelligence system was developed to recognize lymph node and distant tumor metastases on whole-slide images (WSIs) of primary breast cancer. METHODS: In this study, a total of 832 WSIs from 520 patients without tumor metastases and 312 patients with breast cancer metastases (including lymph node, bone, lung, liver, and other sites) were gathered. The WSIs were randomly divided into training and testing cohorts, and a new artificial intelligence system called MEAI was built to identify lymph node and distant metastases in primary breast cancer. RESULTS: The final AI system attained an area under the receiver operating characteristic curve (AUROC) of 0.934 in a test set of 187 patients. In addition, in a retrospective pathologist evaluation the system achieved an AUROC higher than the average of six board-certified pathologists (AUROC 0.811), highlighting its potential to increase the precision, consistency, and effectiveness of tumor metastasis detection in patients with breast cancer. CONCLUSION: The proposed MEAI system can provide a non-invasive approach to assess the metastatic probability of patients with primary breast cancer.


Subjects
Breast Neoplasms , Humans , Female , Lymphatic Metastasis/pathology , Breast Neoplasms/pathology , Artificial Intelligence , Retrospective Studies , Lymph Nodes/pathology , Radiopharmaceuticals
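The headline metric in this record is the area under the ROC curve (AUROC) on the 187-patient test set, compared against the pathologists' average of 0.811. The sketch below shows how such an AUROC is computed from per-patient metastasis probabilities; the arrays are random placeholders, not the study's data.

```python
# Minimal AUROC evaluation sketch with dummy labels/probabilities.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=187)                  # 1 = metastasis present
y_prob = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=187), 0, 1)

auroc = roc_auc_score(y_true, y_prob)                  # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y_true, y_prob)       # operating points
print(f"AUROC = {auroc:.3f}")
```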
4.
Med Image Anal ; 82: 102572, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36055051

ABSTRACT

Automatically and accurately annotating tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), which provides a noninvasive in vivo method to evaluate tumor vasculature architecture based on contrast accumulation and washout, is a crucial step in computer-aided breast cancer diagnosis and treatment. However, it remains challenging due to the varying sizes, shapes, appearances, and densities of tumors caused by the high heterogeneity of breast cancer, and due to the high dimensionality and ill-posed artifacts of DCE-MRI. In this paper, we propose a hybrid hemodynamic knowledge-powered and feature reconstruction-guided scheme that integrates a pharmacokinetic prior and feature refinement to generate adequate features in DCE-MRI for breast cancer segmentation. The pharmacokinetic prior, expressed by the time-intensity curve (TIC), is incorporated into the scheme through an objective function called the dynamic contrast-enhanced prior (DCP) loss. It encodes prior knowledge of contrast-agent kinetic heterogeneity, which is important for optimizing the model parameters. In addition, we design a spatial fusion module (SFM) embedded in the scheme to exploit intra-slice spatial structural correlations, and deploy a spatial-kinetic fusion module (SKFM) to effectively leverage the complementary information extracted from the spatial-kinetic space. Furthermore, considering that low spatial resolution often leads to poor image quality in DCE-MRI, we integrate a reconstruction autoencoder into the scheme to refine feature maps in an unsupervised manner. We conduct extensive experiments to validate the proposed method and show that our approach can outperform recent state-of-the-art segmentation methods on a breast cancer DCE-MRI dataset. Moreover, to explore generalization to other segmentation tasks on dynamic imaging, we also extend the proposed method to brain segmentation in DSC-MRI sequences. Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/DCEDuDoFNet.


Subjects
Breast Neoplasms , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Contrast Media , Image Interpretation, Computer-Assisted/methods , Algorithms , Reproducibility of Results , Magnetic Resonance Imaging/methods , Hemodynamics
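A hedged sketch of a training objective in the spirit of this record: a Dice segmentation term, a TIC-driven pharmacokinetic-prior term, and an unsupervised reconstruction term from an autoencoder. The exact form of the DCP loss is not given in the abstract, so the TIC term below is a hypothetical stand-in, and the loss weights and tensor shapes are illustrative.

```python
# Hedged sketch of a segmentation + pharmacokinetic prior + reconstruction objective.
import torch

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def tic_prior_loss(tic, mask):
    """Encourage stronger contrast enhancement (signal change over time) inside
    the predicted tumor mask than outside -- a stand-in for the DCP idea."""
    enhancement = (tic[:, -1] - tic[:, 0]).abs()           # wash-in magnitude per voxel
    inside = (enhancement * mask).sum() / (mask.sum() + 1e-6)
    outside = (enhancement * (1 - mask)).sum() / ((1 - mask).sum() + 1e-6)
    return torch.relu(outside - inside)                     # penalize weak tumor contrast

# dummy tensors: 1 case, 6 DCE time points, one 64x64 slice
tic = torch.rand(1, 6, 64, 64)                              # time-intensity curves
pred_mask = torch.rand(1, 64, 64)                           # soft segmentation output
gt_mask = (torch.rand(1, 64, 64) > 0.9).float()             # ground-truth mask
recon, image = torch.rand(1, 64, 64), torch.rand(1, 64, 64) # autoencoder output / input

loss = (dice_loss(pred_mask, gt_mask)
        + 0.1 * tic_prior_loss(tic, pred_mask)
        + 0.1 * ((recon - image) ** 2).mean())              # unsupervised refinement term
```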
5.
IEEE J Biomed Health Inform ; 26(12): 5870-5882, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36074872

ABSTRACT

Chest X-ray (CXR) is commonly performed as an initial investigation in COVID-19, whose fast and accurate diagnosis is critical. Recently, deep learning has shown great potential in detecting people suspected of being infected with COVID-19. However, deep learning yields black-box models that often break down when forced to make predictions on data for which limited supervised information is available and that lack interpretability, which remains a major barrier to clinical integration. In this work, we propose a semantic-powered explainable model-free few-shot learning scheme to quickly and precisely diagnose COVID-19 with higher reliability and transparency. Specifically, we design a Report Image Explanation Cell (RIEC) that exploits clinical indicators derived from radiology reports as interpretable drivers to introduce prior knowledge during training. Meanwhile, a multi-task collaborative diagnosis strategy (MCDS) is developed to construct N-way K-shot tasks, adopting a cyclic and collaborative training approach to achieve better generalization on new tasks. Extensive experiments demonstrate that the proposed scheme achieves competitive results (accuracy of 98.91%, precision of 98.95%, recall of 97.94%, and F1-score of 98.57%) in diagnosing COVID-19 and other pneumonia categories, even with only 200 paired CXR images and radiology reports for training. Furthermore, statistical results of comparative experiments show that our scheme provides an interpretable window into COVID-19 diagnosis, improving performance under small sample sizes as well as the reliability and transparency of black-box deep learning models. Our source code will be released on https://github.com/AI-medical-diagnosis-team-of-JNU/SPEMFSL-Diagnosis-COVID-19.


Subjects
COVID-19 , Deep Learning , Humans , COVID-19/diagnostic imaging , SARS-CoV-2 , Neural Networks, Computer , COVID-19 Testing , Reproducibility of Results , Semantics , X-Rays , Radiography, Thoracic/methods
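The N-way K-shot task construction mentioned in this record can be illustrated with a minimal episode sampler: for each training episode, N classes are drawn, with K support and Q query examples per class. The dataset layout and class names below are placeholders, not the authors' MCDS pipeline.

```python
# Minimal N-way K-shot episode sampler with placeholder data.
import random
from typing import Dict, List, Tuple

def sample_episode(
    data_by_class: Dict[str, List[str]],     # class -> list of image ids
    n_way: int = 3, k_shot: int = 5, q_query: int = 5,
) -> Tuple[List[Tuple[str, str]], List[Tuple[str, str]]]:
    """Return (support, query) lists of (image_id, class) pairs."""
    classes = random.sample(list(data_by_class), n_way)
    support, query = [], []
    for c in classes:
        picks = random.sample(data_by_class[c], k_shot + q_query)
        support += [(x, c) for x in picks[:k_shot]]
        query += [(x, c) for x in picks[k_shot:]]
    return support, query

# toy example: three pneumonia-related categories, ids are placeholders
data = {c: [f"{c}_{i}" for i in range(40)]
        for c in ("covid19", "other_pneumonia", "normal")}
support_set, query_set = sample_episode(data, n_way=3, k_shot=5, q_query=5)
```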