1.
Diagnosis (Berl) ; 10(2): 158-163, 2023 05 01.
Article in English | MEDLINE | ID: mdl-36797838

ABSTRACT

OBJECTIVES: Collective intelligence, the "wisdom of the crowd," seeks to improve the quality of judgments by aggregating multiple individual inputs. Here, we evaluate the success of collective intelligence strategies applied to probabilistic diagnostic judgments.

METHODS: We compared the performance of individual and collective intelligence judgments on two series of clinical cases requiring probabilistic diagnostic assessments, or "forecasts." We assessed the quality of forecasts using Brier scores, which compare forecasts to observed outcomes.

RESULTS: On both sets of cases, the collective intelligence answers outperformed nearly every individual forecaster or team. The improved performance of collective intelligence was mediated by both improved resolution and improved calibration of the probabilistic assessments. In a secondary analysis examining the effect of the number of individual inputs on collective intelligence answers from two different data sources, nearly identical curves were found in the two data sets: an 11-12% improvement when averaging two independent inputs, a 15% improvement when averaging four independent inputs, and small incremental improvements with further increases in the number of individual inputs.

CONCLUSIONS: Our results suggest that applying collective intelligence strategies to probabilistic diagnostic forecasts is a promising approach to improving diagnostic accuracy and reducing diagnostic error.
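As a rough illustration of the methods described above (the function names and toy numbers are hypothetical, not taken from the study), a Brier score is the mean squared difference between probability forecasts and observed 0/1 outcomes, and a simple collective-intelligence aggregate averages the individual forecasts per case before scoring:

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between probability forecasts and observed
    0/1 outcomes; lower scores indicate better forecasts."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def average_forecasts(forecast_sets):
    """Aggregate several forecasters' probabilities for the same cases by
    taking the simple per-case mean ("wisdom of the crowd")."""
    return [sum(case) / len(case) for case in zip(*forecast_sets)]

# Toy example: two forecasters, three cases, true outcomes 1/0/1.
outcomes = [1, 0, 1]
a = [0.9, 0.2, 0.7]
b = [0.6, 0.4, 0.9]
crowd = average_forecasts([a, b])

# Because squared error is convex, the averaged forecast never scores worse
# than the mean of the individual Brier scores (Jensen's inequality).
assert brier_score(crowd, outcomes) <= (brier_score(a, outcomes) + brier_score(b, outcomes)) / 2
```

The guarantee in the final assertion follows from the convexity of the quadratic loss; the 11-15% improvements reported in the abstract are empirical findings, not something this sketch implies.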


Subject(s)
Intelligence; Judgment; Humans; Diagnostic Errors
2.
Acad Med ; 91(1): 94-100, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26726864

ABSTRACT

PURPOSE: The ability to create a concise summary statement can be assessed as a marker for clinical reasoning. The authors describe the development and preliminary validation of a rubric to assess such summary statements.

METHOD: Between November 2011 and June 2014, four researchers independently coded 50 summary statements, randomly selected from a large database of medical students' summary statements in virtual patient cases, to each create a draft assessment rubric. Through an iterative process, they combined these into a consensus assessment rubric and applied it to 60 additional summary statements. Cronbach alpha calculations determined the internal consistency of the rubric components, intraclass correlation coefficient (ICC) calculations determined interrater agreement, and Spearman rank-order correlations determined the correlations between rubric components. The researchers' comments describing their individual rating approaches were analyzed using content analysis.

RESULTS: The final rubric included five components: factual accuracy, appropriate narrowing of the differential diagnosis, transformation of information, use of semantic qualifiers, and a global rating. Internal consistency was acceptable (Cronbach alpha 0.771), as was interrater reliability for the entire rubric (ICC 0.891; 95% confidence interval 0.859-0.917). Spearman calculations revealed a range of correlations across cases. Content analysis of the researchers' comments indicated differences in how they applied the assessment rubric.

CONCLUSIONS: This rubric has potential as a tool for feedback and assessment. Opportunities for future study include establishing interrater reliability with other raters and on different cases, designing training for raters to use the tool, and assessing how feedback using this rubric affects students' clinical reasoning skills.
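For reference, the internal-consistency statistic reported above follows the standard formula alpha = k/(k-1) * (1 - sum of component variances / variance of total scores). A minimal sketch (the function name and toy scores are illustrative, not the study's data):

```python
import statistics

def cronbach_alpha(component_scores):
    """Cronbach's alpha for a rubric: component_scores holds one list per
    rubric component, each containing that component's ratings across the
    same set of summary statements."""
    k = len(component_scores)
    item_variance_sum = sum(statistics.variance(c) for c in component_scores)
    totals = [sum(vals) for vals in zip(*component_scores)]  # total score per statement
    return k / (k - 1) * (1 - item_variance_sum / statistics.variance(totals))

# Toy example: two rubric components rated on four statements.
scores = [[1, 2, 3, 4], [2, 1, 4, 3]]
print(round(cronbach_alpha(scores), 3))  # → 0.75
```

The ratio of variances is unchanged whether sample or population variance is used, as long as the same estimator is applied to both the components and the totals.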


Subject(s)
Education, Medical, Undergraduate/methods; Educational Measurement/methods; Problem-Based Learning; Students, Medical; Writing; Databases, Factual; Humans; Reproducibility of Results