1.
Chinese Journal of Radiology; (12): 166-172, 2023.
Article in Chinese | WPRIM | ID: wpr-992949

ABSTRACT

Objective: To explore the value of mammography-based deep learning in differentiating Breast Imaging Reporting and Data System (BI-RADS) category 3 and 4 lesions.

Methods: The clinical and imaging data of 305 patients with 314 lesions assessed as BI-RADS category 3 or 4 on mammography at Shenzhen People's Hospital and Shenzhen Luohu People's Hospital from January to December 2020 were retrospectively analyzed. All 305 patients were female, aged 21 to 83 (47±12) years. Two general radiologists (general radiologists A and B, with 5 and 6 years of work experience) and two specialized breast imaging radiologists (professional radiologists A and B, with 21 years of work experience and dedicated breast imaging training) were randomly assigned at a 1:1 ratio to read the images independently, and then read them again in combination with the deep learning system; the breast lesions were then reclassified as BI-RADS category 3 or 4. The receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate diagnostic performance, and differences between AUCs were compared with the DeLong method.

Results: The AUC of general radiologist A combined with the deep learning system for reclassifying BI-RADS category 3 and 4 breast lesions was significantly higher than that of general radiologist A alone (AUC 0.79 vs 0.63; Z=2.82, P=0.005). Likewise, the AUC of general radiologist B combined with the deep learning system was significantly higher than that of general radiologist B alone (AUC 0.83 vs 0.64; Z=3.32, P=0.001). For the two professional radiologists, adding the deep learning system did not significantly change the AUCs (both P>0.05).

Conclusion: The mammography-based deep learning system is effective in helping general radiologists differentiate BI-RADS category 3 and 4 lesions.
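For readers interested in the statistics behind the Results section, the following is a minimal sketch (not the authors' code) of how two correlated AUCs from paired readings can be compared with the DeLong method; the labels and malignancy scores in the usage comment are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import norm

def placement_values(pos_scores, neg_scores):
    """Structural components of the Mann-Whitney AUC estimator for one reader."""
    pos = np.asarray(pos_scores, dtype=float)[:, None]
    neg = np.asarray(neg_scores, dtype=float)[None, :]
    psi = (pos > neg).astype(float) + 0.5 * (pos == neg)
    return psi.mean(axis=1), psi.mean(axis=0)   # per-positive V10, per-negative V01

def delong_test(y_true, scores_a, scores_b):
    """Two-sided DeLong test for the difference of two paired (correlated) AUCs."""
    y = np.asarray(y_true).astype(bool)
    sa, sb = np.asarray(scores_a, dtype=float), np.asarray(scores_b, dtype=float)

    v10a, v01a = placement_values(sa[y], sa[~y])
    v10b, v01b = placement_values(sb[y], sb[~y])
    auc_a, auc_b = v10a.mean(), v10b.mean()

    m, n = y.sum(), (~y).sum()                    # numbers of positives / negatives
    s10 = np.cov(np.vstack([v10a, v10b]))         # 2x2 covariance over positives
    s01 = np.cov(np.vstack([v01a, v01b]))         # 2x2 covariance over negatives
    cov = s10 / m + s01 / n                       # covariance of (AUC_a, AUC_b)

    z = (auc_a - auc_b) / np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
    p = 2 * norm.sf(abs(z))                       # two-sided p value
    return auc_a, auc_b, z, p

# Hypothetical usage with the 314 lesions: malignancy scores from a reader alone
# (scores_a) and the same reader combined with the deep learning system (scores_b).
# auc_a, auc_b, z, p = delong_test(labels, scores_a, scores_b)
```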

2.
Chinese Journal of Radiology; (12): 1215-1222, 2022.
Article in Chinese | WPRIM | ID: wpr-956778

ABSTRACT

Objective: To establish predictive models for the prognosis of ductal carcinoma in situ (DCIS) at different pathological stages and to evaluate their predictive performance.

Methods: Complete data of 273 patients with confirmed DCIS at different pathological stages who underwent mammography at Shenzhen People's Hospital, Peking University Shenzhen Hospital and Shenzhen Luohu People's Hospital from November 2014 to December 2020 were retrospectively collected, including 110 cases in the DCIS+ductal carcinoma in situ with microinvasion (DCIS-MI) group and 163 cases in the invasive ductal carcinoma (IDC)-DCIS group. Clinical, imaging and pathological features were analyzed. A breast Mammo AI fusion model and a deep learning-based natural language processing (NLP) structured diagnostic report model were used for image feature extraction. Patients in each group were randomly divided into training and validation sets at a 6:4 ratio, and predictors were screened by univariate and multivariate logistic regression analysis; the model with the lowest Akaike information criterion value in each group was selected as the final predictive model. The receiver operating characteristic (ROC) curve was drawn to evaluate the performance of each model.

Results: With estrogen receptor negativity or human epidermal growth factor receptor 2 (3+) as the reference for poor prognosis, 62 cases in the DCIS+DCIS-MI group were considered to have a poor prognosis and 48 a good prognosis; in the IDC-DCIS group, with the Nottingham prognostic index as the reference, 33 cases were considered to have a poor prognosis, 73 a moderate prognosis, and 57 a good prognosis. Four predictors were selected to construct the DCIS+DCIS-MI model: DCIS nuclear grade, calcification with suspicious morphology on mammography, DCIS pathological subtype, and DCIS with microinvasion. Five predictors were selected to construct the IDC-DCIS model: neural or vascular invasion, Ki67 level, DCIS subtype, proportion of the DCIS component, and associated features on mammography. The area under the curve (AUC) for predicting poor prognosis of DCIS+DCIS-MI was 0.92 (95%CI 0.84-1.00) in the training set and 0.90 (95%CI 0.82-0.99) in the validation set; the AUC for predicting poor prognosis of IDC-DCIS was 0.84 (95%CI 0.76-0.93) in the training set and 0.78 (95%CI 0.64-0.91) in the validation set.

Conclusion: The models developed from deep learning combined with NLP can effectively predict the prognosis of DCIS at different pathological stages, supporting risk stratification of patients with DCIS and providing a reference for clinical decision-making.
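As a rough illustration of the modelling workflow described above (univariate screening followed by a multivariate logistic regression chosen by the lowest Akaike information criterion), the following is a minimal sketch, not the authors' code; the target column name "poor_prognosis" and any candidate feature names are hypothetical.

```python
import statsmodels.api as sm
from itertools import combinations

def fit_logit(df, features, target="poor_prognosis"):
    """Fit a logistic regression of the binary target on the given features."""
    X = sm.add_constant(df[list(features)], has_constant="add")
    return sm.Logit(df[target], X).fit(disp=0)

def screen_and_select(train_df, candidates, target="poor_prognosis", p_enter=0.05):
    # Step 1: univariate screening, keeping candidates with p < p_enter.
    kept = [f for f in candidates
            if fit_logit(train_df, [f], target).pvalues[f] < p_enter]

    # Step 2: among subsets of the retained predictors, keep the multivariate
    # model with the lowest AIC (exhaustive search, fine for a few predictors).
    best_subset, best_model = None, None
    for r in range(1, len(kept) + 1):
        for subset in combinations(kept, r):
            model = fit_logit(train_df, subset, target)
            if best_model is None or model.aic < best_model.aic:
                best_subset, best_model = subset, model
    return best_subset, best_model

# Hypothetical usage on a 6:4 train/validation split of the DCIS+DCIS-MI group:
# subset, model = screen_and_select(train_df, ["nuclear_grade", "suspicious_calcification",
#                                              "pathologic_subtype", "microinvasion"])
# probs = model.predict(sm.add_constant(valid_df[list(subset)], has_constant="add"))
# The validation AUC can then be computed, e.g. with sklearn.metrics.roc_auc_score.
```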
