1.
Med Educ ; 36(10): 931-5, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12390460

ABSTRACT

Practice inevitably narrows over time. Testing of established doctors therefore requires that their assessment be tailored to a far narrower scope of practice than is appropriate for new doctors who have not yet differentiated. In this paper, we address the conceptual challenges of tailoring physician assessment to individual practice. Assessment of established doctors needs to reflect the fact that physicians specialise, often in idiosyncratic ways; otherwise, the testing will lack credibility among established doctors and will not reflect the realities of their practice. Despite the importance of these goals, the conceptual and methodological challenges of creating tailored assessments remain daunting.


Subject(s)
Clinical Competence/standards , Education, Medical, Continuing/standards , Physicians, Family/standards , Educational Measurement , Humans , Quality of Health Care/standards
2.
Med Educ ; 36(10): 949-58, 2002 Oct.
Article in English | MEDLINE | ID: mdl-12390463

ABSTRACT

BACKGROUND: If continuing professional development is to be effective and sensible, an understanding of clinical practice is needed, grounded in the daily experiences of doctors and in the multiple factors that determine the nature and quality of practice. Moreover, there must be a way to link performance and assessment so that ongoing learning and continuing competence are, in reality, connected. Current understanding of learning no longer holds that a doctor enters practice thoroughly trained, with a lifetime's storehouse of knowledge. Rather, a doctor's ongoing learning is a 'journey' across a practice lifetime, one that involves the doctor as a person interacting with patients, other health professionals, and larger societal and community issues. OBJECTIVES: In this paper, we describe a model of learning and practice that proposes how change occurs and how assessment links practice performance and learning. We describe how doctors define desired performance, compare actual with desired performance, define educational need and initiate educational action. METHOD: To illustrate the model, we describe how doctor performance varies over time for any one condition and across conditions. We discuss how doctors perceive and respond to these variations in their performance. The model is also used to illustrate different formative and summative approaches to assessment and to highlight the aspects of performance each can assess. CONCLUSIONS: We conclude by exploring the implications of this model for integrated medical services, highlighting the actions and directions required of doctors, medical and professional organisations, universities and other continuing education providers, credentialling bodies and governments.


Subject(s)
Clinical Competence/standards , Credentialing/standards , Education, Medical, Continuing/standards , Learning , Physicians, Family/standards , Quality of Health Care/standards , Humans
3.
Article in English | MEDLINE | ID: mdl-11435765

ABSTRACT

Clinical skills assessments have traditionally been scored via experts' ratings of examinee performance. This approach may be impractical in a large-scale context, however, owing to logistical and cost considerations as well as the increased probability of rater error. The purpose of this investigation was therefore to identify, using discriminant analysis, weighted score-based models that maximize the accuracy with which mastery level can be estimated for examinees taking a nationally administered standardized patient test. The accuracy with which the resulting classification functions predict mastery level for a cross-validation sample of examinees was also examined. Results suggest that it might be feasible to implement an automated scoring procedure in a cost-effective manner while still retaining the important facets of the decision-making process of expert raters. The cost-benefit, test-development and psychometric implications of these results are discussed in the full paper.
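A minimal sketch of the kind of weighted-score classification this abstract describes: linear discriminant analysis fitted on one sample and checked against a held-out cross-validation sample. The score components, mastery labels and data below are synthetic stand-ins, not the study's actual checklist data, weights or procedure.

```python
# Illustrative only: synthetic stand-in for the discriminant-analysis
# approach described in the abstract, not the study's actual data or model.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical component scores per examinee (e.g. history-taking, physical
# exam, communication), with "mastery" examinees scoring higher on average.
n = 400
mastery = rng.integers(0, 2, size=n)                 # 1 = master, 0 = non-master
scores = rng.normal(loc=60 + 15 * mastery[:, None],  # group shifts component means
                    scale=10, size=(n, 3))

# Fit on a training sample, then check classification accuracy on a held-out
# cross-validation sample, as the abstract describes.
X_train, X_test, y_train, y_test = train_test_split(
    scores, mastery, test_size=0.3, random_state=0)
lda = LinearDiscriminantAnalysis()
lda.fit(X_train, y_train)
print("cross-validation accuracy:", lda.score(X_test, y_test))
print("component weights:", lda.coef_)               # the 'weighted score' model
```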


Subject(s)
Clinical Competence , Educational Measurement/methods , Models, Educational , Discriminant Analysis , Educational Measurement/standards , Humans , Multivariate Analysis , Psychometrics , United States
9.
Med Educ ; 33(6): 439-46, 1999 Jun.
Article in English | MEDLINE | ID: mdl-10354321

ABSTRACT

OBJECTIVES: The purpose of the study was to explore foreign medical graduates' (FMGs) performance on a clinical skills (SPX) examination. The National Board of Medical Examiners (NBME) is in the process of developing an SPX for potential use in the United States Medical Licensing Examination (USMLE). The Educational Commission for Foreign Medical Graduates (ECFMG) is developing the Clinical Skills Assessment (CSA) as an additional requirement for FMGs who wish to be certified by the ECFMG. DESIGN: Thirty-three FMGs and 151 United States medical students (USMSs) took the SPX during the winter of 1996 as part of the ongoing pilot studies conducted by the NBME. Four clinical skill areas were assessed: history-taking, physical examination, communication and interpersonal skills. The examination used in this research consisted of 12 cases and utilized standardized patients (SPs) trained to document examinee behaviours and to evaluate the communication component of the test. The SPs were also trained to evaluate the English proficiency of the candidates, and candidates were additionally administered the Test of Spoken English developed by the Educational Testing Service (ETS). SETTING: The examination was conducted in one medical school, which served as an SPX centre for NBME pilot studies. SUBJECTS: Thirty-three foreign medical graduates and 151 US medical students. RESULTS: The majority of candidates in both groups felt the examination was moderately fair; however, 78% of FMGs felt moderately pressed for time, whereas 80% of the USMSs did not feel pressed for time. Reliabilities obtained for the various SPX components were somewhat higher for the FMGs, reflecting the heterogeneity of this group. CONCLUSIONS: The NBME-ECFMG collaborative study yielded important information regarding the NBME SPX prototype as a performance measure for FMGs.
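A sketch of one common way such component reliabilities are computed, Cronbach's alpha calculated separately per examinee group. The abstract does not state which reliability coefficient was used, so alpha is an assumption here, and the score matrices are synthetic; the sketch only illustrates why a more heterogeneous group tends to show higher reliability.

```python
# Illustrative sketch: Cronbach's alpha per examinee group. The coefficient
# is an assumption (the abstract names no method); all data are synthetic.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: examinees x items matrix of component scores."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_vars = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
# A more heterogeneous group (wider spread of true ability) tends to yield
# higher reliability, as the abstract notes for the FMG group.
ability_fmg = rng.normal(0, 1.5, size=(33, 1))    # 33 FMGs, wider spread
ability_usms = rng.normal(0, 0.8, size=(151, 1))  # 151 USMSs, narrower spread
cases_fmg = ability_fmg + rng.normal(0, 1, size=(33, 12))    # 12 cases
cases_usms = ability_usms + rng.normal(0, 1, size=(151, 12))
print("alpha, FMG :", round(cronbach_alpha(cases_fmg), 2))
print("alpha, USMS:", round(cronbach_alpha(cases_usms), 2))
```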


Subject(s)
Clinical Competence , Foreign Medical Graduates , Communication , Educational Measurement , Humans , Medical History Taking , Physical Examination , Physician-Patient Relations , United States
13.
Acad Med ; 72(11): 1008-11, 1997 Nov.
Article in English | MEDLINE | ID: mdl-9387827

ABSTRACT

PURPOSE: To test whether medical residents could simulate students in a standardized-patient (SP) examination and perform consistently to specified levels under test conditions, as a first step in testing the utility of trained "standardized examinees" (SEs) as a quality-assurance measure for the scoring process in an SP examination. METHOD: Fourth-year students from the Baltimore-Washington Consortium for SPs participated in a National Board of Medical Examiners Prototype Examination of clinical skills, consisting of twelve 15-minute student-patient encounters, in 1994-95. For this examination, internal medicine residents were trained to act as ordinary candidates and to achieve target scores by performing to a set level on specific checklist items used by the SPs to record interviewing, physical-examination, and communication skills. The "strong" SEs were trained to score 80% correct on six of the examination's 12 cases (the study cases), and the "weak" SEs were trained to score 40% correct on the same six cases. The strong and weak SEs' checklist scores on the study cases were compared with independent, two-tailed t-tests. When agreement between the SE training and the SP recording on specific checklist items in a case was below 85%, videotapes of the encounter were reviewed; in such cases the SE's final score was the one agreed upon after review. RESULTS: Seven SEs took the SP examination without being detected by the SPs. There were 84 discrepancies between predicted and recorded checklist scores across 659 checklist items in 40 encounters scored by the SPs. After the discrepancies were corrected on the basis of videotape review, the estimated actual mean score was 77.3% for the strong SEs and 44.0% for the weak SEs, and it was higher for the strong SEs in every study case. The overall fidelity of the SEs to their training was estimated to be 97%, and the overall SP recording accuracy was estimated to be 91%. The videotape review revealed 47 training-scoring discrepancies, most in the area of communication skills. CONCLUSION: This study suggests that SEs can be trained to specific performance levels and may be an effective internal control for a high-stakes SP examination. They may also provide a mechanism for refining scoring checklists and for exploring the validity of SP examinations.
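A sketch of the two analyses this abstract names: an independent, two-tailed t-test comparing strong and weak SE checklist scores, and a per-case percent-agreement screen with the 85% threshold flagging cases for videotape review. The scores, item counts and noise rate below are invented for illustration.

```python
# Illustrative only: the t-test comparison and the 85%-agreement screen
# described in the abstract, applied to invented checklist data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical per-case checklist scores (% correct) for SEs trained to
# perform at 80% ("strong") and 40% ("weak") on the six study cases.
strong_scores = rng.normal(80, 5, size=24)  # e.g. 4 strong SEs x 6 cases
weak_scores = rng.normal(40, 5, size=18)    # e.g. 3 weak SEs x 6 cases

# Independent, two-tailed t-test, as in the abstract.
t, p = stats.ttest_ind(strong_scores, weak_scores)
print(f"t = {t:.2f}, two-tailed p = {p:.4f}")

# Per-case agreement between trained (predicted) and SP-recorded checklist
# items; cases under 85% agreement would go to videotape review.
predicted = rng.integers(0, 2, size=(6, 20))     # 6 cases x 20 items
recorded = predicted.copy()
flip = rng.random(predicted.shape) < 0.1         # ~10% recording noise
recorded[flip] = 1 - recorded[flip]
agreement = (predicted == recorded).mean(axis=1)
for case, agree in enumerate(agreement, start=1):
    flag = "review videotape" if agree < 0.85 else "ok"
    print(f"case {case}: {agree:.0%} agreement -> {flag}")
```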


Subject(s)
Clinical Competence/standards , Students, Medical/statistics & numerical data , Educational Measurement , Female , Humans , Internal Medicine/standards , Male , Maryland , Physician-Patient Relations , Videotape Recording
19.
Med Educ ; 25(2): 100-9, 1991 Mar.
Article in English | MEDLINE | ID: mdl-2023551

ABSTRACT

The accuracy of standardized patients' clinical problem presentation was evaluated by videotape rating of a random sample of 839 student-patient encounters, representing 88 patients, 27 cases and two university test sites. The encounters were sampled from a collaborative inter-university final-year clinical examination of fourth-year medical students conducted at the University of Manitoba and Southern Illinois University in 1987 and 1988. The accuracy, replicability and portability of standardized patient cases were evaluated. The average accuracy of patient presentation was 90.2% in 1987 and 93.4% in 1988. Fifteen patients obtained perfect accuracy scores; however, 11 patients had average scores below 80%, with accuracy of presentation in some encounters as low as 30%. Significant differences in accuracy scores between patients trained together for the same case were found in 6 of 35 possible comparisons. There was also a systematic trend for patients trained at Southern Illinois University to present more accurately than patients trained at the University of Manitoba; these differences were significant in 5 of the 15 cases used in the examination.
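A sketch of the kind of accuracy summary and same-case comparison this abstract reports: per-patient presentation accuracy over checklist items, with a chi-square test between two patients trained together. The encounter-level data are invented, and the chi-square test is one plausible choice; the abstract does not name the procedure actually used.

```python
# Illustrative sketch: per-patient presentation accuracy and a comparison of
# two SPs trained for the same case. Data are invented; the chi-square test
# is an assumed choice, as the abstract does not name the test used.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(3)

# Hypothetical checklist outcomes (True = item presented accurately) for two
# standardized patients trained together on the same case.
patient_a = rng.random(120) < 0.93   # ~93% accurate, cf. the 1988 average
patient_b = rng.random(120) < 0.80   # a weaker portrayal

print(f"patient A accuracy: {patient_a.mean():.1%}")
print(f"patient B accuracy: {patient_b.mean():.1%}")

# 2x2 table of accurate / inaccurate items per patient.
table = np.array([
    [patient_a.sum(), (~patient_a).sum()],
    [patient_b.sum(), (~patient_b).sum()],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")
```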


Subject(s)
Clinical Clerkship , Clinical Competence , Teaching/methods , Evaluation Studies as Topic , Humans , Physician-Patient Relations , Reproducibility of Results
20.
Res Med Educ ; 27: 38-43, 1988.
Article in English | MEDLINE | ID: mdl-3218874

ABSTRACT

This study explores the feasibility of using data arising from simulated patient encounters to determine the nature of the relationship between patient satisfaction and examinee performance during a clinical examination. Satisfaction (on the dimensions of sensitivity, participation and thoroughness) is shown to be related to aspects of the physical examination and of history-taking.
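A sketch of one way to examine the relationship this abstract reports, correlating satisfaction dimensions with history-taking and physical-examination scores. The data are synthetic, and Pearson correlation is an assumed choice of method; the abstract does not state how the relationship was analysed.

```python
# Illustrative sketch: correlating simulated-patient satisfaction dimensions
# with examinee performance scores. Data are synthetic and the use of Pearson
# correlation is an assumption; the abstract does not state the method.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n = 100

# Hypothetical examinee scores (% of checklist items completed).
history = rng.uniform(40, 100, n)
physical = rng.uniform(40, 100, n)

# Hypothetical satisfaction ratings, partly driven by performance plus noise.
sensitivity = 0.4 * history + rng.normal(0, 10, n)
thoroughness = 0.5 * physical + rng.normal(0, 10, n)

for name, rating, score in [("sensitivity vs history-taking", sensitivity, history),
                            ("thoroughness vs physical exam", thoroughness, physical)]:
    r, p = pearsonr(rating, score)
    print(f"{name}: r = {r:.2f}, p = {p:.4f}")
```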


Subject(s)
Education, Medical, Undergraduate , Physician-Patient Relations , Clinical Competence , Curriculum , Educational Measurement , Humans , Physician's Role , Referral and Consultation