Results 1 - 13 of 13
1.
Chiropr Man Therap ; 27: 38, 2019.
Article in English | MEDLINE | ID: mdl-31321028

ABSTRACT

Background: Clinical education forms a substantial component of health professional education. Increased cohorts in Australian osteopathic education have led to consideration of alternatives to traditional placements to ensure adequate clinical exposure and learning opportunities. Simulated learning offers a new avenue for sustainable clinical education. The aim of the study was to explore whether directed observation of simulated scenarios, as part replacement of clinical hours, could provide an equivalent learning experience as measured by performance in an objective structured clinical examination (OSCE). Methods: Students in the year 3 osteopathy cohort were invited to replace 50% of their clinical placement hours with online facilitated, video-based simulation exercises (intervention). Competency was assessed by an OSCE at the end of the teaching period. Inferential statistics were used to explore differences between the control and intervention groups in a post-test-only control group design. Results: The funding model allowed ten learners to participate in the intervention, with sixty-six in the control group. Only one OSCE item differed significantly between groups, technique selection (p = 0.038, d = 0.72), in favour of the intervention group, although this may be a type I error. Grade point average was moderately positively correlated with the manual therapy technique station total score (r = 0.35, p < 0.01) and showed only a trivial relationship with the treatment reasoning station total score (r = 0.17, p = 0.132). Conclusions: The current study provides support for further investigation into part replacement of clinical placements with directed observation of simulated scenarios in osteopathy.
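
As a rough illustration of the analysis described above, the sketch below (Python, with hypothetical scores rather than the study's data) shows a two-group comparison of an OSCE item with Cohen's d as the effect size, followed by a Pearson correlation between grade point average and a station total score.

```python
# Minimal sketch, not the authors' code: intervention (n = 10) vs control (n = 66)
# on one OSCE item, plus a GPA-to-station-score correlation. Data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
intervention = rng.normal(7.5, 1.0, 10)   # hypothetical OSCE item scores
control = rng.normal(6.8, 1.0, 66)

# Two-sample comparison (Welch's t-test) and Cohen's d as the effect size
t, p = stats.ttest_ind(intervention, control, equal_var=False)
pooled_sd = np.sqrt(((len(intervention) - 1) * intervention.var(ddof=1)
                     + (len(control) - 1) * control.var(ddof=1))
                    / (len(intervention) + len(control) - 2))
d = (intervention.mean() - control.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")

# Correlation between GPA and a station total score (hypothetical values)
gpa = rng.normal(5.0, 0.8, 66)
station_total = 2.0 * gpa + rng.normal(0, 2.5, 66)
r, p_r = stats.pearsonr(gpa, station_total)
print(f"r = {r:.2f}, p = {p_r:.3f}")
```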


Subject(s)
Computer Simulation , Health Personnel/education , Osteopathic Medicine/education , Adult , Australia , Clinical Competence , Cohort Studies , Education, Distance , Female , Health Personnel/psychology , Humans , Male , Problem-Based Learning
2.
Adv Simul (Lond) ; 3: 21, 2018.
Article in English | MEDLINE | ID: mdl-30455991

ABSTRACT

INTRODUCTION: There is no standard approach to determining the realism of a simulator, valuable information when planning simulation training. The aim of this research was to design a generic simulator realism questionnaire and investigate the contributions of different elements of simulator design to a user's impression of simulator realism and performance. METHODS: A questionnaire was designed with procedure-specific and non-procedure-specific (global) questions, grouped in subscales related to simulator structure and function. Three intrauterine contraceptive device (IUCD) simulators were selected for comparison. Participants were doctors of varying experience, who performed an IUCD insertion on each of the three models and used the questionnaire to rate the realism and importance of each aspect of the simulators. The questionnaire was evaluated by correlating procedure-specific and global items, and by correlating these items with overall realism scores. Realism scores for each simulator were compared with the Kruskal-Wallis test, followed by between-simulator comparisons using Dunn's test. RESULTS: Global question scores were highly related to procedure-specific scores. Global item subscale scores differed significantly across models on each of the nine subscales (P < 0.001). Function items were rated of higher importance than structure items (mean function item importance 5.36 versus mean structure item importance 5.02; P = 0.009). CONCLUSIONS: The designed questionnaire was able to discriminate between the models for perceived simulator realism. Findings from this study may assist simulator design and inform future development of a generic questionnaire for assessing user perceptions of simulator realism.
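
The following sketch illustrates, with hypothetical subscale scores, the kind of analysis the abstract names: a Kruskal-Wallis test across the three simulators followed by Dunn's pairwise test (here via the scikit-posthocs package, which may differ from the tooling the authors used).

```python
# Minimal sketch (hypothetical data, not the study's ratings): compare realism
# subscale scores across three IUCD simulators, then run Dunn's post hoc test.
import numpy as np
from scipy import stats
import scikit_posthocs as sp  # provides Dunn's test

rng = np.random.default_rng(1)
sim_a = rng.integers(4, 8, 30)   # hypothetical 1-7 subscale ratings per participant
sim_b = rng.integers(2, 6, 30)
sim_c = rng.integers(3, 7, 30)

h, p = stats.kruskal(sim_a, sim_b, sim_c)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")

# Pairwise Dunn's test with a multiple-comparison correction
pairwise = sp.posthoc_dunn([sim_a, sim_b, sim_c], p_adjust="bonferroni")
print(pairwise)
```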

3.
Med J Aust ; 207(10): 453, 2017 Nov 20.
Article in English | MEDLINE | ID: mdl-29129176

ABSTRACT

OBJECTIVE: The fitness to practise of international medical graduates (IMGs) is usually evaluated with standardised assessment tests. However, it is the performance rather than the competency of practising doctors that should be assessed, for which reason workplace-based assessment (WBA) has gained increasing attention. Our aim was to assess the composite reliability of WBA instruments for assessing IMGs. DESIGN AND SETTING: Between June 2010 and April 2015, 142 IMGs were assessed by 99 calibrated assessors; each was assessed in the workplace over 6 months. The IMGs completed 970 case-based discussions (CBDs), 1741 mini-clinical examination exercises (mini-CEX), and 1020 multi-source feedback (MSF) assessments. PARTICIPANTS: 103 male and 39 female candidates from 28 countries (Africa, Asia, Europe, South America, South Pacific) in urban and rural hospitals of the Hunter New England Health region. MAIN OUTCOME MEASURES: The composite reliability across the three WBA tools, expressed as the standard error of measurement (SEM). RESULTS: In our WBA program, a combination of five CBD and 12 mini-CEX assessments achieved an SEM of 0.33, greater than the threshold of 0.26 of a scale point. Adding six MSF results to the assessment package reduced the SEM to 0.24, which is adequately precise. CONCLUSIONS: Combining data from different WBA instruments achieves acceptable reliability for assessing IMGs, provided that the panel of WBA assessment types is carefully selected and the assessors are calibrated.
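
The abstract does not give the underlying variance components, but the arithmetic behind an SEM figure like 0.33 or 0.24 can be illustrated generically: project the reliability of a composite of repeated assessments with the Spearman-Brown formula, then convert it to an SEM on the rating scale. The inputs below are hypothetical.

```python
# Illustrative arithmetic only: SEM = SD * sqrt(1 - reliability), with the
# composite reliability of n repeated assessments projected by Spearman-Brown.
import math

def spearman_brown(r_single: float, n: int) -> float:
    """Reliability of the mean of n assessments, given single-assessment reliability."""
    return (n * r_single) / (1 + (n - 1) * r_single)

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement, in the rating scale's units."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical inputs: single mini-CEX reliability 0.35, scale SD 0.8
r_composite = spearman_brown(0.35, n=12)
print(f"composite reliability ~ {r_composite:.2f}, SEM ~ {sem(0.8, r_composite):.2f}")
```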


Subject(s)
Clinical Competence , Employee Performance Appraisal/methods , Foreign Medical Graduates/standards , Australia , Female , Humans , Male , Reproducibility of Results
4.
Simul Healthc ; 12(5): 304-307, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28609316

ABSTRACT

INTRODUCTION: Large loop excision of the transformation zone (LLETZ) is a common gynecological treatment for cervical dysplasia but can be challenging to teach. There is no widely adopted simulator for this procedure in Australia, so a new low-fidelity simulator was designed and evaluated. METHOD: A simulator for the LLETZ procedure was developed. Doctors (N = 29) of varying experience levels in gynecology at a tertiary hospital performed a LLETZ procedure using the simulator. The procedures were filmed, and two independent assessors rated the deidentified videos. The assessment involved a checklist (of crucial procedural steps) and a global rating scale to evaluate whether the simulator facilitated the demonstration of LLETZ procedure skills. Participants completed a questionnaire evaluating the performance and utility of the simulator, to determine their perceptions of its realism and acceptability. RESULTS: The participant questionnaire revealed positive evaluations of the realism and acceptability of the simulator. Performance scores differed significantly across experience levels (P < 0.001), with post hoc pairwise comparisons confirming significant differences between groups in assessed simulator performance for both the global rating scale and overall performance scores. The interrater reliability of the assessors was high (0.84). CONCLUSIONS: A low-fidelity simulator for the LLETZ procedure appears adequate for demonstrating procedural performance that reflects doctor experience level. Participant questionnaire responses were positive, supporting further evaluation of the simulator for use in training.
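
The abstract does not specify which interrater reliability statistic was used, so the sketch below (hypothetical ratings) illustrates one common choice, an intraclass correlation between the two assessors via pingouin, together with a Kruskal-Wallis comparison of performance across experience groups.

```python
# Minimal sketch with made-up ratings, not the study's data: ICC between two
# assessors' scores for 29 filmed procedures, and a comparison across three
# hypothetical experience groups.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(2)
n_videos = 29
true_skill = rng.normal(20, 4, n_videos)
ratings = pd.DataFrame({
    "video": np.repeat(np.arange(n_videos), 2),
    "rater": np.tile(["A", "B"], n_videos),
    "score": np.repeat(true_skill, 2) + rng.normal(0, 1.5, n_videos * 2),
})

icc = pg.intraclass_corr(data=ratings, targets="video", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])

# Hypothetical experience groups formed from the sorted scores (junior / registrar / consultant)
junior, registrar, consultant = np.split(true_skill[np.argsort(true_skill)], [10, 20])
print(stats.kruskal(junior, registrar, consultant))
```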


Subject(s)
Gynecologic Surgical Procedures/education , Simulation Training/methods , Australia , Clinical Competence , Female , Gynecologic Surgical Procedures/methods , Humans , Reproducibility of Results , Uterine Cervical Dysplasia/surgery
5.
Med Teach ; 37(2): 146-52, 2015 Feb.
Article in English | MEDLINE | ID: mdl-24989363

ABSTRACT

BACKGROUND: Benchmarking among medical schools is essential, but may result in unwanted effects. AIM: To apply a conceptual framework to selected benchmarking activities of medical schools. METHODS: We present an analogy between the effects of assessment on student learning and the effects of benchmarking on medical school educational activities. A framework by which benchmarking can be evaluated was developed and applied to key current benchmarking activities in Australia and New Zealand. RESULTS: The analogy generated a conceptual framework that tested five questions to be considered in relation to benchmarking: what is the purpose? what are the attributes of value? what are the best tools to assess the attributes of value? what happens to the results? and, what is the likely "institutional impact" of the results? If the activities were compared against a blueprint of desirable medical graduate outcomes, notable omissions would emerge. CONCLUSION: Medical schools should benchmark their performance on a range of educational activities to ensure quality improvement and to assure stakeholders that standards are being met. Although benchmarking potentially has positive benefits, it could also result in perverse incentives with unforeseen and detrimental effects on learning if it is undertaken using only a few selected assessment tools.


Subject(s)
Benchmarking/organization & administration , Educational Measurement/standards , Schools, Medical/standards , Australia , Humans , Learning , New Zealand , Quality Improvement/organization & administration
7.
J Adv Nurs ; 68(10): 2331-40, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22332974

ABSTRACT

AIM: This article reports a longitudinal study examining how nursing students learn on clinical placements in three cohorts of undergraduates at a large Australian university. BACKGROUND: Preceptorship models of clinical learning are increasing in popularity as a strategy to maximize collaboration between university and healthcare organizations. A clinical education model, underpinned by preceptorship, was offered by an Australian university in partnership with a tertiary healthcare organization to some students. DESIGN: The study utilized a mixed-methods approach of surveys and interviews. METHOD: It was hypothesized that students participating in the preceptorship partnership model would have more positive perceptions of the clinical learning environment than students participating in other models of clinical education. Data were collected over 3 years, from 2006 to 2008, using a modified Clinical Learning Environment Inventory from second (n = 396) and third (n = 263) year nursing students. Students were classified into three groups based on which educational model they received. RESULTS: On the inventory factor 'Student centredness', a Welch test indicated an important difference between the responses of students in the three groups. The Games-Howell post hoc test indicated that students in the clinical preceptorship partnership model responded more positively than students who had both a clinical teacher and a preceptor in a non-preceptorship partnership model. CONCLUSION: Developing sustainable approaches to enhance the clinical learning environment experience for student nurses is an international concern. The contribution of continuity of clinical teachers to student centredness is an important aspect to consider.
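
As a sketch of the named analysis, the code below runs a Welch ANOVA and Games-Howell post hoc comparisons on a 'student centredness' score across three hypothetical clinical education models, using the pingouin package (the authors' software is not stated).

```python
# Minimal sketch (hypothetical scores): Welch ANOVA across three education
# models, then Games-Howell pairwise comparisons, as named in the abstract.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "model": (["preceptorship_partnership"] * 120
              + ["clinical_teacher_plus_preceptor"] * 120
              + ["other"] * 120),
    "student_centredness": np.concatenate([
        rng.normal(4.1, 0.6, 120),
        rng.normal(3.7, 0.8, 120),
        rng.normal(3.8, 0.7, 120),
    ]),
})

print(pg.welch_anova(data=df, dv="student_centredness", between="model"))
print(pg.pairwise_gameshowell(data=df, dv="student_centredness", between="model"))
```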


Subject(s)
Education, Nursing/methods , Preceptorship/methods , Teaching/methods , Humans , Learning , Longitudinal Studies , Models, Educational , Nursing Education Research , Victoria
8.
J Adv Nurs ; 66(6): 1371-81, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20546367

ABSTRACT

AIM: This paper is a report of the psychometric testing of the Clinical Learning Environment Inventory. BACKGROUND: The clinical learning environment is a complex socio-cultural entity that offers a variety of opportunities to engage or disengage in learning. The Clinical Learning Environment Inventory is a self-report instrument consisting of 42 items classified into six scales: personalization, student involvement, task orientation, innovation, satisfaction and individualization. It was developed to examine undergraduate nursing students' perceptions of the learning environment whilst on placement in clinical settings. METHOD: As a component of a longitudinal project, Bachelor of Nursing students (n = 659) from two campuses of a university in Australia completed the Clinical Learning Environment Inventory from 2006 to 2008. Principal components analysis using varimax rotation was conducted to explore the factor structure of the inventory. RESULTS: Data for 513 students (77%) were eligible for inclusion. Constraining the data to a 6-factor solution explained 51% of the variance. The factors identified were: student-centredness, affordances and engagement, individualization, fostering workplace learning, valuing nurses' work, and innovative and adaptive workplace culture. These factors were reviewed against recent theoretical developments in the literature. CONCLUSION: The study offers an empirically based and theoretically informed extension of the original Clinical Learning Environment Inventory, which had previously relied on ad hoc clustering of items and the internal reliability of its sub-scales. Further research is required to establish the consistency of these new factors.
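
The extraction step described above can be sketched with the factor_analyzer package, fitting six principal-component factors with varimax rotation to a 42-item response matrix; the data here are random placeholders, not the CLEI responses.

```python
# Minimal sketch (random placeholder data): six-factor principal-components
# extraction with varimax rotation on a 513 x 42 response matrix.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(4)
responses = rng.integers(1, 6, size=(513, 42)).astype(float)  # 513 students x 42 items

fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(responses)

loadings = fa.loadings_                        # 42 x 6 rotated loading matrix
variance, prop, cumulative = fa.get_factor_variance()
print(f"variance explained by 6 factors: {cumulative[-1]:.0%}")
```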


Subject(s)
Education, Nursing, Baccalaureate/standards , Nursing Education Research , Students, Nursing/psychology , Australia , Education, Nursing, Baccalaureate/methods , Factor Analysis, Statistical , Humans , Psychometrics/methods , Workplace
9.
Aust Health Rev ; 32(2): 292-300, 2008 May.
Article in English | MEDLINE | ID: mdl-18447816

ABSTRACT

To determine perceived barriers to continuing education for Australian hospital-based prevocational doctors, a cross-sectional cohort survey was distributed to medical administrators for secondary redistribution to 2607 prevocational doctors from August 2003 to October 2004. Four hundred and seventy valid questionnaires (18.1%) were returned. Only seven per cent (33/470) did not identify any barriers to continuing education. The most frequently identified barriers were lack of time (85% [371/437]), clinical commitment (65% [284/437]), resistance from registrars (13% [57/437]) and resistance from consultant staff (10% [44/437]). Other barriers included workload issues (27% [27/98]), teaching program inadequacies (26% [25/98]), lack of protected time for education (17% [17/98]), motivational issues (11% [10/98]) and geographic remoteness (10% [10/98]). Australian graduates (87%) identified lack of time more frequently than international medical graduates (77%) (P = 0.036). Perceived barriers did not differ significantly between doctors of differing postgraduate years.
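
The Australian-graduate versus IMG comparison (87% vs 77%, P = 0.036) is a two-proportion comparison; since the subgroup denominators are not reported, the sketch below uses hypothetical counts and a chi-squared test on a 2x2 table as one standard way of making it.

```python
# Illustrative only: the counts are hypothetical (chosen to give roughly 87% vs 77%),
# because the abstract does not report subgroup denominators.
from scipy.stats import chi2_contingency

# rows: Australian graduates, IMGs; columns: cited lack of time, did not
table = [[287, 43],   # hypothetical: ~87% of 330
         [84, 25]]    # hypothetical: ~77% of 109
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```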


Subject(s)
Education, Medical, Continuing , Health Facility Administrators , Medical Staff, Hospital/education , Attitude of Health Personnel , Australia , Cohort Studies , Cross-Sectional Studies , Humans , Internship and Residency , Surveys and Questionnaires , Workload
10.
Med J Aust ; 186(S7): S33-6, 2007 04 02.
Article in English | MEDLINE | ID: mdl-17407421

ABSTRACT

The new curriculum framework for doctors in postgraduate years 1 and 2 is a step towards seamless medical education. The framework will need additional components to make "the curriculum" deliverable. Assessment is an essential element of most curricula, and assessment systems should be carefully planned. Diligent observation and rating in the workplace may provide a suitable approach. In the future, Australia must also thoroughly engage with the debate on continuing validation of competence.


Subject(s)
Clinical Competence/standards , Medical Staff, Hospital/education , Medical Staff, Hospital/standards , Australia , Curriculum , Forecasting , Humans , Internal Medicine/education , Models, Theoretical , Peer Review, Health Care , United Kingdom , United States
11.
Med J Aust ; 184(9): 436-40, 2006 May 01.
Article in English | MEDLINE | ID: mdl-16646742

ABSTRACT

OBJECTIVE: To survey prevocational doctors working in Australian hospitals on aspects of postgraduate learning. PARTICIPANTS AND SETTING: 470 prevocational doctors in 36 health services in Australia, August 2003 to October 2004. DESIGN: Cross-sectional cohort survey with a mix of ordinal multicategory questions and free text. MAIN OUTCOME MEASURES: Perceived preparedness for aspects of clinical practice; perceptions of the quantity and usefulness of current teaching and learning methods and desired future exposure to learning methods. RESULTS: 64% (299/467) of responding doctors felt generally prepared for their job, 91% (425/469) felt prepared for dealing with patients, and 70% (325/467) for dealing with relatives. A minority felt prepared for medicolegal problems (23%, 106/468), clinical emergencies (31%, 146/469), choosing a career (40%, 188/468), or performing procedures (45%, 213/469). Adequate contact with registrars was reported by 90% (418/465) and adequate contact with consultants by 56% (257/466); 20% (94/467) reported exposure to clinical skills training and 11% (38/356) to high-fidelity simulation. Informal registrar contact was described as useful or very useful by 94% (433/463), and high-fidelity simulation by 83% (179/216). Most prevocational doctors would prefer more formal instruction from their registrars (84%, 383/456) and consultants (81%, 362/447); 84% (265/316) want increased exposure to high-fidelity simulation and 81% (283/350) to professional college tutorials. CONCLUSION: Our findings should assist planning and development of training programs for prevocational doctors in Australian hospitals.


Subject(s)
Attitude of Health Personnel , Education, Medical, Graduate/statistics & numerical data , Hospitalists/education , Hospitalists/statistics & numerical data , Australia , Career Choice , Clinical Competence , Cohort Studies , Cross-Sectional Studies , Education, Medical, Graduate/methods , Health Care Surveys , Health Knowledge, Attitudes, Practice , Humans , Internship and Residency/methods , Interprofessional Relations , Learning , Needs Assessment
12.
Med J Aust ; 184(7): 346-8, 2006 Apr 03.
Article in English | MEDLINE | ID: mdl-16584370

ABSTRACT

The lack of cohesion across the health and education sectors and across national and state jurisdictions is counterproductive to effective national policies in medical education and training. Existing systems in Australia for medical education and training lack coordination, and are under-resourced and under pressure. There is a need for a coordinated national approach to assessment of international medical graduates, and for meeting their education and training needs. The links between prevocational and vocational training must be improved. Tensions between workforce planning, education and training can only be resolved if workforce and training agencies work collaboratively. All prevocational positions should be designed and structured to ensure that service, training, teaching and research are appropriately balanced. There is a need for more health education research in Australia.


Subject(s)
Education, Medical/organization & administration , Education, Medical/trends , Models, Organizational , Needs Assessment , Organizational Innovation , Australia , Clinical Competence , Curriculum , Educational Measurement/methods , Foreign Medical Graduates , Humans , Internship and Residency/organization & administration
13.
Ann R Coll Surg Engl ; 87(4): 242-7, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16053681

ABSTRACT

INTRODUCTION: The objectives were to: (i) establish how 'typical' consultant surgeons perform on 'generic' (non-specialist) surgical simulations before their use in the General Medical Council's Performance Procedures (PPs); (ii) measure any differences in performance between specialties; and (iii) compare the performance of a group of surgeons in the PPs with the 'typical' group. VOLUNTEERS AND METHODS: Seventy-four consultant volunteers in gastrointestinal surgery (n=21), vascular surgery (n=11), urology (n=10), orthopaedics (n=15), cardiothoracic surgery (n=10) and plastic surgery (n=7), plus 9 surgeons undergoing phase 2 of the PPs, undertook 7 simple simulations in the skills laboratory. The scores of the volunteers were analysed by simulation and specialty using ANOVA. The scores of the volunteers were then compared with the scores of the surgeons in the PPs. RESULTS: There were significant differences between simulations, but most volunteers achieved scores of 75-100%. There was a significant simulation by specialty interaction, indicating that the scores of some specialties differed on some simulations. The scores of the group of surgeons in the PPs were significantly lower than those of the reference group for most simulations. CONCLUSIONS: Simple simulations can be used to assess the basic technical skills of consultant surgeons. The simulation by specialty interaction suggests that whilst some skills may be generic, others are not. The lower scores of the surgeons in the PPs suggest that these tests possess criterion validity, i.e. they may help to determine when poor performance is due to lack of technical competence.
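
A two-way ANOVA with a simulation-by-specialty interaction, as reported in the results, could be set up as in the sketch below (statsmodels, with hypothetical scores and a balanced design the study did not have).

```python
# Minimal sketch (hypothetical scores): score by simulation task and specialty,
# including the simulation-by-specialty interaction term.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(5)
simulations = [f"sim{i}" for i in range(1, 8)]
specialties = ["GI", "vascular", "urology", "orthopaedics", "cardiothoracic", "plastics"]

rows = []
for spec in specialties:
    for sim in simulations:
        for _ in range(10):  # hypothetical volunteers per cell
            rows.append({"specialty": spec, "simulation": sim,
                         "score": rng.normal(85, 8)})
df = pd.DataFrame(rows)

model = ols("score ~ C(simulation) * C(specialty)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```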


Subject(s)
Educational Measurement/methods , Specialties, Surgical/standards , Adult , Analysis of Variance , Clinical Competence/standards , Female , Humans , Male , Middle Aged , United Kingdom