Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-38900207

ABSTRACT

OBJECTIVE: Although supervised machine learning is popular for information extraction from clinical notes, creating large annotated datasets requires extensive domain expertise and is time-consuming. Meanwhile, large language models (LLMs) have demonstrated promising transfer-learning capability. In this study, we explored whether recent LLMs could reduce the need for large-scale data annotation.

MATERIALS AND METHODS: We curated a dataset of 769 breast cancer pathology reports, manually labeled with 12 categories, to compare the zero-shot classification capability of four LLMs (GPT-4, GPT-3.5, Starling, and ClinicalCamel) with the task-specific supervised classification performance of three models: random forests, long short-term memory networks with attention (LSTM-Att), and the UCSF-BERT model.

RESULTS: Across all 12 tasks, GPT-4 performed either significantly better than or as well as the best supervised model, LSTM-Att (average macro F1-score of 0.86 vs 0.75), with the advantage most pronounced on tasks with high label imbalance. The other LLMs performed poorly. Frequent GPT-4 error categories included incorrect inferences from multiple samples and from history, as well as complex task design; several LSTM-Att errors were related to poor generalization to the test set.

DISCUSSION: On tasks where large annotated datasets cannot be easily collected, LLMs can reduce the burden of data labeling. However, where the use of LLMs is prohibitive, simpler models trained on large annotated datasets can provide comparable results.

CONCLUSIONS: GPT-4 demonstrated the potential to speed up the execution of clinical NLP studies by reducing the need for large annotated datasets. This may increase the utilization of NLP-based variables and outcomes in clinical studies.
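The macro F1-score reported in this abstract averages per-label F1 without weighting by label frequency, which is why it is informative on tasks with high label imbalance: a rare label counts as much as a frequent one. A minimal sketch in plain Python (the function name and toy labels are illustrative, not from the study):

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro F1: compute F1 per label, then take the unweighted mean."""
    labels = set(y_true) | set(y_pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for label t
        else:
            fp[p] += 1          # predicted p, but true label was t
            fn[t] += 1          # missed the true label t
    f1_scores = []
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if (tp[lab] + fp[lab]) else 0.0
        rec = tp[lab] / (tp[lab] + fn[lab]) if (tp[lab] + fn[lab]) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        f1_scores.append(f1)
    # Unweighted mean: every label contributes equally, however rare.
    return sum(f1_scores) / len(f1_scores)
```

In contrast, micro or accuracy-style averaging would let a majority label dominate, masking poor performance on the rare categories where GPT-4's advantage was most pronounced.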

2.
Res Sq ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38405831

ABSTRACT

Although supervised machine learning is popular for information extraction from clinical notes, creating large, annotated datasets requires extensive domain expertise and is time-consuming. Meanwhile, large language models (LLMs) have demonstrated promising transfer-learning capability. In this study, we explored whether recent LLMs can reduce the need for large-scale data annotation. We curated a dataset of 769 breast cancer pathology reports, manually labeled with 13 categories, to compare the zero-shot classification capability of the GPT-4 and GPT-3.5 models with the supervised classification performance of three model architectures: a random forests classifier, long short-term memory networks with attention (LSTM-Att), and the UCSF-BERT model. Across all 13 tasks, the GPT-4 model performed either significantly better than or as well as the best supervised model, LSTM-Att (average macro F1 score of 0.83 vs. 0.75). On tasks with a high imbalance between labels, the differences were more prominent. Frequent sources of GPT-4 errors included inferences from multiple samples and complex task design. On complex tasks where large annotated datasets cannot be easily collected, LLMs can reduce the burden of large-scale data labeling. However, where the use of LLMs is prohibitive, simpler supervised models trained on large annotated datasets can provide comparable results. LLMs demonstrated the potential to speed up the execution of clinical NLP studies by reducing the need to curate large annotated datasets. This may increase the utilization of NLP-based variables and outcomes in observational clinical studies.

3.
J Med Chem ; 64(19): 14513-14525, 2021 10 14.
Article in English | MEDLINE | ID: mdl-34558909

ABSTRACT

Autophagy is upregulated in response to metabolic stress, a hypoxic tumor microenvironment, and therapeutic stress in various cancers, and it mediates tumor progression and resistance to cancer therapy. Herein, we identified a cinchona alkaloid derivative containing urea (C1), which exhibited potent cytotoxicity and inhibited autophagy in hepatocellular carcinoma (HCC) cells. We showed that C1 not only induced apoptosis but also blocked autophagy in HCC cells, as indicated by the increased expression of LC3-II and p62, inhibition of autophagosome-lysosome fusion, and suppression of the Akt/mTOR/S6k pathway. Finally, to improve its solubility and efficacy, we encapsulated C1 into PEGylated lipid-poly(lactic-co-glycolic acid) (PLGA) nanoscale drug carriers. Systemic administration of nanoscale C1 significantly suppressed primary tumor growth and prevented distant metastasis while maintaining a desirable safety profile. Our findings demonstrate that C1 combines autophagy modulation and apoptosis induction in a single molecule, making it a promising therapeutic option for HCC.


Subjects
Antineoplastic Agents/pharmacology , Autophagy/drug effects , Carcinoma, Hepatocellular/pathology , Cinchona Alkaloids/pharmacology , Liver Neoplasms/pathology , Urea/chemistry , Apoptosis/drug effects , Cell Line, Tumor , Cell Proliferation/drug effects , Humans , Tumor Microenvironment/drug effects