Results 1 - 2 of 2
1.
Radiología (Madr., Ed. impr.); 62(6): 481-486, Nov.-Dec. 2020. tab, graf
Article in Spanish | IBECS | ID: ibc-200115

ABSTRACT

OBJECTIVE: To determine intra- and interobserver agreement in the categorization of mammographic density among a group of professionals according to the fifth edition of the ACR BI-RADS® Atlas, and to analyze the concordance between the experts' categorization and a digital mammography unit's commercial software for automatic categorization. METHODS: Six physicians categorized mammographic density on 451 mammograms on two occasions one month apart. We calculated linearly weighted kappa coefficients for inter- and intraobserver agreement in the group of physicians, and the concordance between the commercial software and the majority report. We analyzed the results for the four breast density categories and for the dichotomous outcome of dense versus non-dense breast. RESULTS: Interobserver agreement among the specialists and with the majority report ranged from moderate to nearly perfect for the analysis by category (kappa = 0.64 to 0.84) and for the dichotomous analysis (kappa = 0.63 to 0.84). Intraobserver agreement ranged from substantial to nearly perfect (kappa = 0.68 to 0.85 for the four categories and kappa = 0.70 to 0.87 for the dichotomous analysis). Agreement between the majority report and the commercial software was moderate both by category (kappa = 0.43) and in the dichotomous analysis (kappa = 0.51). CONCLUSION: We observed inter- and intraobserver agreement among the radiologists ranging from moderate to nearly perfect, according to the criteria established in the fifth edition of the BI-RADS® Atlas. The level of agreement between the specialists' report and a commercially available software package was moderate.


OBJECTIVE: To determine the level of agreement within and between observers in the categorization of breast density on mammograms in a group of professionals using the fifth edition of the American College of Radiology's BI-RADS® Atlas and to analyze the concordance between experts' categorization and automatic categorization by commercial software on digital mammograms. METHODS: Six radiologists categorized breast density on 451 mammograms on two occasions one month apart. We calculated the linear weighted kappa coefficients for inter- and intra-observer agreement for the group of radiologists and between the commercial software and the majority report. We analyzed the results for the four categories of breast density and for dichotomous classification as dense versus not dense. RESULTS: The interobserver agreement among radiologists and the majority report was between moderate and nearly perfect for the analysis by category (Kappa = 0.64 to 0.84) and for the dichotomous classification (Kappa = 0.63 to 0.84). The intraobserver agreement was between substantial and nearly perfect (Kappa = 0.68 to 0.85 for the four categories and Kappa = 0.70 to 0.87 for the dichotomous classification). The agreement between the majority report and the commercial software was moderate both for the four categories (Kappa = 0.43) and for the dichotomous classification (Kappa = 0.51). CONCLUSION: Agreement on breast density within and between radiologists using the criteria established in the fifth edition of the BI-RADS® Atlas was between moderate and nearly perfect. The level of agreement between the specialists and the commercial software was moderate.
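As a minimal sketch of the agreement statistic the abstract refers to, the snippet below computes a linearly weighted Cohen's kappa for two hypothetical readers over the four BI-RADS density categories (a-d, coded 0-3) and for the dichotomous dense/not-dense grouping. It assumes scikit-learn is available; the ratings are invented for illustration and are not data from the study.

```python
# Linearly weighted Cohen's kappa, the agreement measure named in the abstract.
# Reader ratings below are hypothetical examples, not data from the study.
from sklearn.metrics import cohen_kappa_score

# BI-RADS density categories a, b, c, d coded as 0, 1, 2, 3 for two readers.
reader_1 = [0, 1, 1, 2, 3, 2, 1, 0, 3, 2]
reader_2 = [0, 1, 2, 2, 3, 2, 1, 1, 3, 3]

# Four-category agreement with linear weights: disagreements between adjacent
# categories are penalized less than disagreements two or three categories apart.
kappa_4cat = cohen_kappa_score(reader_1, reader_2, weights="linear")

# Dichotomous grouping: dense (c, d -> 1) versus not dense (a, b -> 0).
# With only two classes, the linearly weighted kappa equals the unweighted kappa.
dense_1 = [int(x >= 2) for x in reader_1]
dense_2 = [int(x >= 2) for x in reader_2]
kappa_dense = cohen_kappa_score(dense_1, dense_2, weights="linear")

print(f"Four-category linear weighted kappa: {kappa_4cat:.2f}")
print(f"Dichotomous kappa: {kappa_dense:.2f}")
```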


Subject(s)
Humans , Female , Observer Variation , Professional Competence , Breast/diagnostic imaging , Breast Density , Mammography , Cross-Sectional Studies
2.
Radiologia (Engl Ed); 62(6): 481-486, 2020.
Article in English, Spanish | MEDLINE | ID: mdl-32493654

ABSTRACT

OBJECTIVE: To determine the level of agreement within and between observers in the categorization of breast density on mammograms in a group of professionals using the fifth edition of the American College of Radiology's BI-RADS® Atlas and to analyze the concordance between experts' categorization and automatic categorization by commercial software on digital mammograms. METHODS: Six radiologists categorized breast density on 451 mammograms on two occasions one month apart. We calculated the linear weighted kappa coefficients for inter- and intra-observer agreement for the group of radiologists and between the commercial software and the majority report. We analyzed the results for the four categories of breast density and for dichotomous classification as dense versus not dense. RESULTS: The interobserver agreement among radiologists and the majority report was between moderate and nearly perfect for the analysis by category (κ=0.64 to 0.84) and for the dichotomous classification (κ=0.63 to 0.84). The intraobserver agreement was between substantial and nearly perfect (κ=0.68 to 0.85 for the four categories and κ=0.70 to 0.87 for the dichotomous classification). The agreement between the majority report and the commercial software was moderate both for the four categories (κ=0.43) and for the dichotomous classification (κ=0.51). CONCLUSION: Agreement on breast density within and between radiologists using the criteria established in the fifth edition of the BI-RADS® Atlas was between moderate and nearly perfect. The level of agreement between the specialists and the commercial software was moderate.


Subject(s)
Breast Density , Breast Neoplasms , Mammography , Breast Neoplasms/diagnostic imaging , Humans , Mammography/methods , Observer Variation , Radiologists , Software