1.
Chinese Journal of Medical Imaging Technology; (12): 1226-1231, 2017.
Article in Chinese | WPRIM | ID: wpr-610598

ABSTRACT

Objective To investigate whether transforming the original mammography images and fusing the two resulting types of image features with a machine learning algorithm improves the accuracy of predicting near-term risk of developing breast cancer. Methods Craniocaudal (CC) full-field digital mammography (FFDM) images of 185 women were downloaded from the clinical database of the University of Pittsburgh Medical Center. First, the original gray-level images were segmented and transformed into virtual optical density images. Asymmetry features were then extracted separately from the original gray-level images and the virtual optical density images. Two first-stage decision tree classifiers were trained, one on the features from each image type, and the scores output by these two classifiers were used as input to train a single second-stage decision tree classifier. The leave-one-case-out method was used to validate the performance in predicting near-term breast cancer risk. Results With the two-stage decision tree fusion method, the area under the ROC curve (AUC) was 0.9612±0.0132, and the sensitivity, specificity, and prediction accuracy were 96.63% (86/89), 91.67% (88/96), and 94.05% (174/185), respectively. Conclusion Features extracted from virtual optical density images have higher discriminatory power for predicting breast cancer. Fusing the two kinds of image features in two stages with the two-stage decision tree method helps to improve the accuracy of predicting near-term breast cancer risk.
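The two-stage fusion described in the Methods can be sketched in Python as follows. This is a minimal illustration only: it assumes scikit-learn decision trees, substitutes random placeholder arrays for the study's asymmetry features extracted from the gray-level and virtual optical density images, and uses illustrative tree depths and variable names that are not taken from the paper.

# Minimal sketch of a two-stage decision-tree fusion with leave-one-case-out
# validation. Feature arrays and labels below are random placeholders, not
# the study's data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
n_cases = 185
X_gray = rng.normal(size=(n_cases, 10))      # placeholder: asymmetry features from gray-level images
X_density = rng.normal(size=(n_cases, 10))   # placeholder: asymmetry features from virtual optical density images
y = rng.integers(0, 2, size=n_cases)         # placeholder: near-term risk labels

loo = LeaveOneOut()
predictions = np.zeros(n_cases)

for train_idx, test_idx in loo.split(X_gray):
    # Stage 1: one decision tree per image type, each trained on its own feature set.
    tree_gray = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_gray[train_idx], y[train_idx])
    tree_density = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_density[train_idx], y[train_idx])

    # The class-1 scores from the two stage-1 trees form the two-dimensional
    # input for the stage-2 tree.
    scores_train = np.column_stack([
        tree_gray.predict_proba(X_gray[train_idx])[:, 1],
        tree_density.predict_proba(X_density[train_idx])[:, 1],
    ])
    scores_test = np.column_stack([
        tree_gray.predict_proba(X_gray[test_idx])[:, 1],
        tree_density.predict_proba(X_density[test_idx])[:, 1],
    ])

    # Stage 2: a single decision tree fuses the two scores into the final prediction.
    tree_fusion = DecisionTreeClassifier(max_depth=2, random_state=0).fit(scores_train, y[train_idx])
    predictions[test_idx] = tree_fusion.predict(scores_test)

accuracy = np.mean(predictions == y)
print(f"Leave-one-case-out accuracy: {accuracy:.4f}")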
