Results 1 - 2 of 2
1.
Leuk Lymphoma ; : 1-9, 2024 Aug 11.
Article in English | MEDLINE | ID: mdl-39129334

ABSTRACT

This study reports the characteristics and outcomes of adults who received Azacitidine-Venetoclax (AZA-VEN) compared with other regimens (NO-AZA-VEN) as first salvage therapy for acute myeloid leukemia (AML). Clinical data from 81 patients with relapsed or refractory (R/R) AML were analyzed. The overall response rate (ORR) was comparable between groups (55% vs 57%, p = 0.852). Median overall survival (OS; 6.8 vs 11.2 months, p = 0.053) and median relapse-free survival (RFS; 6.9 vs 11.2 months, p = 0.488) showed a trend favoring the NO-AZA-VEN group. OS was significantly longer with NO-AZA-VEN in the ELN 2022 risk-category subgroup, in patients under 60 years old, in primary AML, and in patients who underwent allogeneic hematopoietic stem cell transplant after salvage therapy. There was no statistically significant difference in treatment complications such as febrile neutropenia, intensive care unit stay, septic shock, or need for total parenteral nutrition. These results do not support the preferential use of AZA-VEN over other regimens in R/R AML.

2.
Neural Netw ; 164: 382-394, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37167751

ABSTRACT

We prove new generalization bounds for stochastic gradient descent when training classifiers with invariances. Our analysis is based on the stability framework and covers both the convex case of linear classifiers and the non-convex case of homogeneous neural networks. We analyze stability with respect to the normalized version of the loss function used for training, which leads us to investigate a form of angle-wise stability rather than Euclidean stability in the weights. For neural networks, the distance measure we consider is invariant to rescaling the weights of each layer. Furthermore, we exploit the notion of on-average stability to obtain a data-dependent quantity in the bound. In our numerical experiments, this data-dependent quantity is more favorable when training with larger learning rates, which may help shed light on why larger learning rates can lead to better generalization in some practical settings.
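The abstract's key construction is a distance between network parameters that is invariant to rescaling the weights of each layer. The following sketch illustrates one way such a rescaling-invariant ("angle-wise") distance can be defined; the function name and the layer-wise Frobenius normalization are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def layerwise_angular_distance(weights_a, weights_b):
    """Hypothetical sketch of a rescaling-invariant distance between two
    networks' parameter lists. Each layer's weight matrix is normalized to
    unit Frobenius norm before comparison, so multiplying any single layer
    by a positive constant leaves the distance unchanged."""
    total = 0.0
    for Wa, Wb in zip(weights_a, weights_b):
        na = Wa / np.linalg.norm(Wa)  # project onto the unit sphere
        nb = Wb / np.linalg.norm(Wb)
        total += np.linalg.norm(na - nb) ** 2
    return np.sqrt(total)

# Two small two-layer parameter lists for illustration.
A = [np.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[0.5, -1.0]])]
B = [np.array([[1.1, 1.9], [3.2, 3.8]]), np.array([[0.4, -1.2]])]

# Rescaling one layer of B (which leaves a homogeneous network's
# predictions unchanged up to overall scale) does not move the distance.
B_rescaled = [10.0 * B[0], B[1]]
d1 = layerwise_angular_distance(A, B)
d2 = layerwise_angular_distance(A, B_rescaled)
assert abs(d1 - d2) < 1e-12
```

Under such a distance, two parameter settings that differ only by per-layer positive rescalings are treated as identical, which is the property the abstract relies on for homogeneous networks.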


Subject(s)
Learning; Neural Networks, Computer; Generalization, Psychological