Results 1 - 2 of 2
1.
J Nurs Educ; 51(2): 66-73, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22132718

ABSTRACT

The purpose of this article is to summarize the methods and findings from three different approaches examining the reliability and validity of data from the Lasater Clinical Judgment Rubric (LCJR) using human patient simulation. The first study, by Adamson, assessed the interrater reliability of data produced with the LCJR using intraclass correlation (2,1). Interrater reliability was calculated to be 0.889. The second study, by Gubrud-Howe, used the percent agreement strategy for assessing interrater reliability. Results ranged from 92% to 96%. The third study, by Sideras, used level of agreement for reliability analyses. Results ranged from 57% to 100%. Findings from each of these studies provided evidence supporting the validity of the LCJR for assessing clinical judgment during simulated patient care scenarios. This article provides extensive information about psychometrics and appropriate use of the LCJR and concludes with recommendations for further psychometric assessment and use of the LCJR.


Subject(s)
Clinical Competence , Education, Nursing , Educational Measurement/methods , Judgment , Patient Simulation , Humans , Observer Variation , Psychometrics , Reproducibility of Results , United States
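Note: The intraclass correlation (2,1) reported in the abstract above corresponds to the Shrout and Fleiss two-way random-effects, absolute-agreement, single-rater form. The sketch below is purely illustrative and is not the study's code; the function name and the rating data are hypothetical, assuming two raters scoring LCJR totals for five simulated performances.

    # Minimal sketch of ICC(2,1), the interrater reliability statistic named above.
    # Hypothetical data; not the authors' analysis.
    import numpy as np

    def icc_2_1(ratings):
        """ratings: (n_subjects, k_raters) array of scores."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand_mean = ratings.mean()
        row_means = ratings.mean(axis=1)   # per-subject means
        col_means = ratings.mean(axis=0)   # per-rater means

        # Two-way ANOVA mean squares
        ss_rows = k * np.sum((row_means - grand_mean) ** 2)
        ss_cols = n * np.sum((col_means - grand_mean) ** 2)
        ss_total = np.sum((ratings - grand_mean) ** 2)
        ss_error = ss_total - ss_rows - ss_cols
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_error = ss_error / ((n - 1) * (k - 1))

        # ICC(2,1): two-way random effects, absolute agreement, single rater
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n
        )

    # Hypothetical LCJR totals from 2 raters scoring 5 simulation performances
    scores = [[28, 27], [33, 34], [22, 24], [30, 30], [25, 26]]
    print(round(icc_2_1(scores), 3))
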
2.
J Nurs Educ; 50(10): 583-6, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21751763

ABSTRACT

Human patient simulation (HPS) is increasingly being used as both a teaching and an evaluation strategy in nursing education. To meaningfully evaluate student performance in HPS activities, nurse educators must be equipped with valid and reliable instruments for measuring student performance. This study used a novel method, including leveled, video-archived simulation scenarios, a virtual classroom, and webinar and e-mail communication, to assess the reliability and internal consistency of data produced using the Creighton Simulation Evaluation Instrument. The interrater reliability, calculated using intraclass correlation (2,1) and 95% confidence interval, was 0.952 (0.697, 0.993). The intrarater reliability, calculated using intraclass correlation (3,1) and 95% confidence interval, was 0.883 (-0.001, 0.992), and the internal consistency, calculated using Cronbach's alpha, was α = 0.979. This article includes a sample of the instrument and provides valuable resources and reliability data for nurse educators and researchers interested in measuring student performance in HPS activities.


Subject(s)
Education, Nursing, Baccalaureate , Educational Measurement/methods , Patient Simulation , Humans , Psychometrics , Reproducibility of Results , United States , Video Recording , Webcasts as Topic
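Note: The internal-consistency figure reported in the abstract above is Cronbach's alpha, defined as (k/(k-1)) * (1 - sum of item variances / variance of total scores) for k instrument items. The sketch below is illustrative only and is not the study's code; the function name and the item scores are hypothetical.

    # Minimal sketch of Cronbach's alpha for an instrument with several items.
    # Hypothetical data; not the study's analysis.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: (n_students, n_items) array of instrument item scores."""
        x = np.asarray(item_scores, dtype=float)
        n_items = x.shape[1]
        item_variances = x.var(axis=0, ddof=1)       # variance of each item
        total_variance = x.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

    # Hypothetical scores: 6 students rated on 4 instrument items (0-2 scale)
    scores = [[2, 2, 1, 2],
              [1, 1, 1, 1],
              [2, 2, 2, 2],
              [0, 1, 0, 1],
              [2, 1, 2, 2],
              [1, 2, 1, 1]]
    print(round(cronbach_alpha(scores), 3))
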