1.
Breast; 15(1): 44-51, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16076556

ABSTRACT

The inter- and intraobserver agreement (kappa statistic) in reporting according to BI-RADS assessment categories was tested on 12 dedicated breast radiologists, with little prior working knowledge of BI-RADS, reading a set of 50 lesions (29 malignant, 21 benign). With four categories (R2, R3, R4, R5), intraobserver agreement was fair (0.21-0.40), moderate (0.41-0.60), substantial (0.61-0.80), or almost perfect (>0.80) for one, two, five, and four radiologists, respectively; with six categories (R2, R3, R4a, R4b, R4c, R5), it was fair, moderate, substantial, or almost perfect for three radiologists each. Interobserver agreement with four categories was fair, moderate, or substantial for three, six, and three radiologists, respectively; with six categories it was slight, fair, or moderate for one, six, and five radiologists. Major disagreement occurred for the intermediate categories (kappa: R3=0.12, R4=0.25, R4a=0.08, R4b=0.07, R4c=0.10). We found insufficient intra- and interobserver consistency among breast radiologists in reporting BI-RADS assessment categories. Although training may improve these results, simpler alternative reporting systems, focused on clinical decision-making, should be explored.


Subject(s)
Breast Neoplasms/diagnostic imaging, Mammography/statistics & numerical data, Mammography/standards, Female, Humans, Observer Variation, Predictive Value of Tests, Reproducibility of Results
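
The kappa values quoted in the abstract above follow the usual Landis-Koch reading (fair 0.21-0.40, moderate 0.41-0.60, substantial 0.61-0.80, almost perfect >0.80). The sketch below shows how an unweighted Cohen's kappa is computed for one reader's two readings of the same lesions and mapped to those labels; the BI-RADS ratings in it are hypothetical, not data from this study.

```python
# Unweighted Cohen's kappa plus the agreement labels cited in the abstract.
# The ratings below are hypothetical examples, not the study's data.
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa for two sets of ratings of the same cases."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Chance agreement expected from the two raters' marginal frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n**2
    return (observed - expected) / (1 - expected)

def agreement_label(kappa):
    """Map a kappa value to the bands used in the abstract."""
    if kappa > 0.80:
        return "almost perfect"
    if kappa > 0.60:
        return "substantial"
    if kappa > 0.40:
        return "moderate"
    if kappa > 0.20:
        return "fair"
    return "slight or poor"

# Hypothetical assessment categories assigned to 10 lesions by one reader
# on two occasions (the intraobserver case).
first_read  = ["R2", "R3", "R4", "R4", "R5", "R2", "R3", "R4", "R5", "R5"]
second_read = ["R2", "R3", "R4", "R3", "R5", "R2", "R4", "R4", "R5", "R5"]

k = cohen_kappa(first_read, second_read)
print(f"kappa = {k:.2f} ({agreement_label(k)})")  # kappa = 0.73 (substantial)
```

For an ordinal scale such as BI-RADS, a weighted kappa (e.g. weights="quadratic" in scikit-learn's cohen_kappa_score) is a common variant, since it penalizes an R2-versus-R5 disagreement more heavily than a disagreement between adjacent categories.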
2.
Breast; 14(4): 269-75, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16085233

ABSTRACT

The inter- and intraobserver agreement (kappa statistic) in reporting according to Breast Imaging Reporting and Data System (BI-RADS®) breast density categories was tested in 12 dedicated breast radiologists reading a digitized set of 100 two-view mammograms. Average intraobserver agreement was substantial (kappa=0.71, range 0.32-0.88) on a four-grade scale (D1/D2/D3/D4) and almost perfect (kappa=0.81, range 0.62-1.00) on a two-grade scale (D1-2/D3-4). Average interobserver agreement was moderate (kappa=0.54, range 0.02-0.77) on the four-grade scale and substantial (kappa=0.71, range 0.31-0.88) on the two-grade scale. Major disagreement was found for the intermediate categories (kappa: D2=0.25, D3=0.28). Categorization of breast density according to BI-RADS is feasible; consistency is good within readers and reasonable between readers. Interobserver inconsistency does occur, and checking the adoption of proper criteria through a proficiency test and appropriate training might be useful. As inconsistency is probably due to misinterpretation of the classification criteria, standard sets of reference images should be made available for training.


Subject(s)
Breast Neoplasms/diagnostic imaging, Mammography/standards, Female, Humans, Observer Variation, Reproducibility of Results
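
The drop from four density grades to two is the step in the abstract above that is easiest to reproduce: collapsing D1/D2 and D3/D4 removes most adjacent-grade disputes. A minimal sketch, assuming scikit-learn is available and using made-up ratings from two hypothetical readers rather than the study's data:

```python
# Compare kappa on the four-grade density scale (D1-D4) with kappa after
# merging to the two-grade scale (D1-2 vs D3-4). Ratings are hypothetical.
from sklearn.metrics import cohen_kappa_score

reader_1 = ["D1", "D2", "D2", "D3", "D3", "D4", "D1", "D2", "D3", "D4", "D2", "D3"]
reader_2 = ["D2", "D2", "D1", "D3", "D4", "D4", "D1", "D3", "D4", "D3", "D2", "D3"]

def collapse(grade):
    """Merge the four density grades into the two-grade scale."""
    return "D1-2" if grade in ("D1", "D2") else "D3-4"

kappa_four = cohen_kappa_score(reader_1, reader_2)
kappa_two = cohen_kappa_score([collapse(g) for g in reader_1],
                              [collapse(g) for g in reader_2])

print(f"four-grade kappa: {kappa_four:.2f}")  # ~0.32 with these made-up ratings
print(f"two-grade kappa:  {kappa_two:.2f}")   # ~0.83
```

With these invented ratings the two-grade kappa comes out well above the four-grade one, mirroring the direction of the study's interobserver result (0.54 versus 0.71), because most of the disagreements sit between adjacent grades on the same side of the D2/D3 boundary.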