1.
Med Educ ; 41(12): 1178-84, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18045370

ABSTRACT

CONTEXT: Each clinical encounter represents a remarkable series of psychological events: perceiving the features of the situation; quickly accessing relevant hypotheses; checking for signs and symptoms that confirm or rule out competing hypotheses; and using related knowledge to guide appropriate investigations and treatment.

OBJECTIVE: Script theory, drawn from cognitive psychology, explains how these events are mentally processed. This essay is aimed at clinical teachers who are interested in the basic sciences of education. It describes the script concept and how it applies in medicine via the concept of the 'illness script'.

METHODS: Script theory asserts that, to give meaning to a new situation in our environment, we use goal-directed knowledge structures adapted to perform tasks efficiently. These integrated networks of prior knowledge lead to expectations, as well as to inferences and actions. Expectations and actions embedded in scripts allow subjects to make predictions about features that may or may not be encountered in a situation, to check these features in order to interpret (classify) the situation adequately, and to act appropriately.

CONCLUSIONS: The theory raises questions about how illness scripts develop and are refined with clinical experience. It also provides a framework to assist their acquisition.


Subject(s)
Clinical Competence , Decision Making , Diagnosis , Education, Medical, Undergraduate/methods , Teaching/methods , Adult , Humans , Knowledge
2.
Teach Learn Med ; 15(1): 2-5, 2003.
Article in English | MEDLINE | ID: mdl-12632701

ABSTRACT

All in all, the evidence is not convincing. Only four of the nine randomized studies used the conventional small-group learning paradigm and thus qualify as studies of small-group learning relevant to medical education. The results of one of the four are impossible to interpret because of the investigator's involvement in both teaching and test construction. The three remaining studies showed no effect, a negative effect, and a positive effect, respectively. The nonrandomized studies failed to establish the comparability of the groups. The evidence does not support the authors' call for "more widespread implementation of small-group learning in undergraduate SMET". Small-group learning has not been shown to support the acquisition of content any better [or worse] than large-group learning. In medical education, small groups are employed in large part to develop teamwork skills, communication skills, and peer- and self-assessment skills, but these outcomes are not addressed in this meta-analysis.

More seriously, our rereading of these studies raises general concerns about meta-analysis in education, with important implications for evidence-based medical education. The meta-analysis under discussion at first appeared to be just the kind needed to guide an evidence-based educational enterprise. However, a closer look revealed both what is lacking in the meta-analysis and some of the ways educational research and reporting need to change if anything like evidence-based education is ever to become a reality. At the least, study design must be clearly described. In addition, if the design is nonrandomized, the groups should be described in sufficient detail to allow a meaningful interpretation of the role of preexisting differences in the outcome measures. (This is why we limited our discussion here to the randomized studies.)

Also, effect-size measures should be reported for all comparisons that bear on the impact of the intervention, including preexisting differences. Reporting statistical significance is not enough: it shows only whether sampling error can be ruled out (with a low probability of error, p < .05) as a possible explanation of the connection between the intervention and the outcome. The effect can still be trivial and the comparisons confounded. In addition, descriptions of the actual educational interventions employed need to be more comprehensive and precise. For the most part, the papers would have been strengthened by providing more information for replicating the studies and for deciding which should be included in a given meta-analysis.

Perhaps most seriously, our rereading of these studies makes us wonder about the possibility of meaningfully synthesizing the results of educational studies, given their idiosyncrasies and their many extraneous, uncontrolled factors. The conclusions from most educational studies, then, whether randomized or not, must be highly qualified, with explicit warnings about preexisting differences and other confounding factors that plausibly account for the study results. However, these narrative qualifications do nothing to adjust the effect-size measures, which are typically pooled or synthesized across studies, confounds and all. The idiosyncrasies of the studies seem to preclude a blanket qualification that could be applied conceptually across the collection of studies to arrive at a sound conclusion from the synthesis.

In brief, the meta-analysis considered here does not support the application of small-group learning in medical education, and it raises questions about meta-analysis in education with implications for evidence-based education.


Subject(s)
Education, Medical, Undergraduate/methods , Group Processes , Learning , Teaching/methods , Curriculum , Evidence-Based Medicine/methods , Humans , Meta-Analysis as Topic , Program Evaluation/methods , Random Allocation