Results 1 - 20 of 33
2.
Colorectal Dis ; 22(12): 2232-2242, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32663361

ABSTRACT

AIM: The aim was to develop and operationally define 'performance metrics' that characterize a reference approach to robotic-assisted low anterior resection (RA-LAR) and to obtain face and content validity through a consensus meeting. METHOD: Three senior colorectal surgeons with robotic experience and a senior behavioural scientist formed the Metrics Group. We used published guidelines, training materials, manufacturers' instructions and unedited videos of RA-LAR to deconstruct the operation into defined, measurable components, i.e. performance metrics (procedure phases, steps, errors and critical errors). The performance metrics were then subjected to detailed critique by 18 expert colorectal surgeons in a modified Delphi process. RESULTS: The initial performance metrics for RA-LAR comprised 15 procedure phases, 128 steps, 89 errors and 117 critical errors in women, and 88 errors and 118 critical errors in men. After the modified Delphi process, the final performance metrics consisted of 14 procedure phases, 129 steps, 88 errors and 115 critical errors in women, and 87 errors and 116 critical errors in men. After discussion by the Delphi panel, all procedure phases received unanimous consensus apart from phase I (patient positioning and preparation, 83%) and phase IV (docking, 94%). CONCLUSION: A robotic-assisted rectal operation can be broken down into procedure phases and steps, with associated errors and critical errors, collectively known as performance metrics. The face and content validity of these metrics have been established by a large group of expert robotic colorectal surgeons from Europe. We consider these metrics essential for the development of a structured training curriculum and standardized procedural assessment for RA-LAR.
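To make the consensus figures above concrete, the short sketch below tallies hypothetical Delphi-panel votes per procedure phase and reports percentage agreement; the ballots and the 80% threshold are assumptions for illustration, not data or rules taken from the study.

```python
# Tally Delphi-panel agreement per procedure phase (illustrative ballots only).
votes = {
    "Phase I: patient positioning and preparation": [True] * 15 + [False] * 3,  # 15/18 agree -> 83%
    "Phase IV: docking": [True] * 17 + [False] * 1,                             # 17/18 agree -> 94%
}

CONSENSUS_THRESHOLD = 0.80  # assumed cut-off; the study does not state one explicitly

for phase, ballots in votes.items():
    agreement = sum(ballots) / len(ballots)
    status = "consensus" if agreement >= CONSENSUS_THRESHOLD else "revise and re-vote"
    print(f"{phase}: {agreement:.0%} agreement -> {status}")
```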


Subject(s)
Robotic Surgical Procedures , Benchmarking , Clinical Competence , Consensus , Delphi Technique , Female , Humans , Male
4.
Anaesthesia ; 72(9): 1117-1124, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28741649

ABSTRACT

The objective of this study was to examine the effect of metrics-based vs. non-metrics-based feedback on novices learning predefined competencies for acquisition and interpretation of sonographic images relevant to performance of ultrasound-guided axillary brachial plexus block. Twelve anaesthetic trainees were randomly assigned to either a metrics-based feedback or a non-metrics-based feedback group. After a common learning phase, all participants attempted to perform a predefined task that involved scanning the left axilla of a single volunteer. Following completion of the task, all participants in each group received feedback from a different expert in regional blocks (consultant anaesthetist) and were allowed to practise the predefined task for up to 1 h. Those in the metrics-based feedback group received feedback based on previously validated metrics, and they practised each metric item until it was performed satisfactorily, as assessed by the supervising consultant. Subsequently, each participant attempted to perform ultrasonography of the left axilla on the same volunteer. Two trained consultant anaesthetists independently scored the video recordings of the pre- and post-feedback scans using the validated metrics list. Both groups showed improvement from pre-feedback to post-feedback scores. Compared with participants in the non-metrics-based feedback group, those in the metrics-based feedback group completed more steps post-feedback: median (IQR [range]) 18.8 (1.5 [17-20]) vs. 14.3 (4.5 [11-18.5]), p = 0.009, and made fewer errors: 0.5 (1 [0-1.5]) vs. 1.5 (2 [1-6]), p = 0.041. In this study, novices' sonographic skills showed greater improvement when feedback was combined with validated metrics.
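A hedged sketch of the kind of between-group comparison reported above: medians with IQR and range for steps completed, plus a rank-based test for the group difference. The score arrays are invented, and scipy's Mann-Whitney U test stands in for whatever exact procedure the authors used.

```python
import numpy as np
from scipy import stats

# Hypothetical post-feedback step counts for the two groups (not the study data).
metrics_group     = np.array([17.0, 18.5, 19.0, 20.0, 18.0, 19.5])
non_metrics_group = np.array([11.0, 13.5, 14.5, 16.0, 18.5, 12.0])

for name, steps in [("metrics-based", metrics_group), ("non-metrics-based", non_metrics_group)]:
    q1, med, q3 = np.percentile(steps, [25, 50, 75])
    print(f"{name}: median {med:.1f} (IQR {q3 - q1:.1f} [range {steps.min()}-{steps.max()}])")

# Rank-based comparison of the two independent groups.
u, p = stats.mannwhitneyu(metrics_group, non_metrics_group, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```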


Subject(s)
Anesthesiology/education , Brachial Plexus/diagnostic imaging , Clinical Competence , Nerve Block/methods , Ultrasonography, Interventional , Adult , Axilla/diagnostic imaging , Feedback , Female , Hospitals, Teaching , Humans , Internship and Residency , Male , Observer Variation , Young Adult
5.
Anaesthesia ; 71(11): 1324-1331, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27634361

ABSTRACT

The purpose of this study was to examine the construct validity and reliability of a novel metrics-based assessment tool, previously developed for ultrasound-guided axillary brachial plexus block. Five expert and eight novice anaesthetists performed a total of 18 ultrasound-guided axillary brachial plexus blocks on the same number of patients. A trained investigator video-taped procedures according to a pre-defined protocol. Two trained consultant anaesthetists independently scored the videos using the assessment tool. Compared with novices, experts completed more steps (mean 41.0 vs. 33.1, p = 0.001), had fewer procedural errors (2.8 vs. 7.9, p < 0.0001), had fewer critical errors (0.8 vs. 1.3, p = 0.030), and fewer total errors (3.5 vs. 9.1, p < 0.0001). The mean inter-rater reliability for scoring of experts' performance was 0.91, for novices' performance was 0.84, and for all performance combined (n = 18) was 0.88. This assessment tool is valid, and discriminates reliably between expert and novice performance for placement of ultrasound-guided axillary brachial plexus blocks.
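As a rough illustration of the inter-rater reliability figures above, the sketch below correlates the scores that two independent raters might assign to the same videos; the abstract does not state which coefficient was used, so the Pearson correlation and the scores are assumptions.

```python
import numpy as np

# Hypothetical step counts awarded by two independent raters to the same six videos.
rater_a = np.array([41, 38, 33, 29, 40, 35])
rater_b = np.array([40, 37, 34, 30, 41, 33])

# Pearson correlation between raters as a simple inter-rater reliability index.
irr = np.corrcoef(rater_a, rater_b)[0, 1]
print(f"inter-rater reliability (Pearson r) = {irr:.2f}")
```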


Subject(s)
Brachial Plexus Block/standards , Brachial Plexus/diagnostic imaging , Clinical Competence , Ultrasonography, Interventional/standards , Adult , Anesthesiology/education , Brachial Plexus Block/methods , Education, Medical, Graduate , Educational Measurement/methods , Female , Humans , Ireland , Male , Middle Aged , Observer Variation , Reproducibility of Results , Ultrasonography, Interventional/methods , Videotape Recording
7.
J Electrocardiol ; 47(6): 895-906, 2014.
Article in English | MEDLINE | ID: mdl-25110276

ABSTRACT

INTRODUCTION: It is well known that accurate interpretation of the 12-lead electrocardiogram (ECG) requires a high degree of skill. There is also a moderate degree of variability among those who interpret the ECG. Despite this, there are no best-practice guidelines for the actual ECG interpretation process. Hence, this study adopts computerized eye tracking technology to investigate whether eye gaze can be used to gain a deeper insight into how expert annotators interpret the ECG. Annotators were recruited in San Jose, California at the 2013 conference of the International Society of Computerised Electrocardiology (ISCE). METHODS: Each annotator interpreted a number of 12-lead ECGs (N=12) while their eye gaze was recorded using a Tobii X60 eye tracker. The device is based on corneal reflection and is non-intrusive. With a sampling rate of 60 Hz, eye gaze coordinates were acquired every 16.7 ms. Fixations were determined using a predefined computerized classification algorithm and were then used to generate heat maps of where the annotators looked. The ECGs used in this study formed four groups (3=ST elevation myocardial infarction [STEMI], 3=hypertrophy, 3=arrhythmias and 3=exhibiting unique artefacts). There was also an equal distribution of difficulty levels (3=easy to interpret, 3=average and 3=difficult). ECGs were displayed using the 4x3+1 display format and computerized annotations were concealed. RESULTS: Precisely 252 expert ECG interpretations (21 annotators × 12 ECGs) were recorded. The average duration of ECG interpretation was 58 s (SD=23). Fleiss' generalized kappa coefficient (Pa=0.56) indicated a moderate inter-rater reliability among the annotators. There was 79% inter-rater agreement for the STEMI cases, 71% agreement for the arrhythmia cases, 65% for the lead misplacement and dextrocardia cases and only 37% agreement for the hypertrophy cases. In analyzing the total fixation duration, it was found that on average annotators studied lead V1 the most (4.29 s), followed by leads V2 (3.83 s), the rhythm strip (3.47 s), II (2.74 s), V3 (2.63 s), I (2.53 s), aVL (2.45 s), V5 (2.27 s), aVF (1.74 s), aVR (1.63 s), V6 (1.39 s), III (1.32 s) and V4 (1.19 s). It was also found that on average the annotators spent an equal amount of time studying leads in the frontal plane (15.89 s) and leads in the transverse plane (15.70 s). On average the annotators fixated on lead I first, followed by leads V2, aVL, V1, II, aVR, V3, the rhythm strip, III, aVF, V5, V4 and V6. We found a strong correlation (r=0.67) between time to first fixation on a lead and the total fixation duration on that lead, indicating that leads studied first are studied the longest. There was a weak negative correlation between duration and accuracy (r=-0.2) and a strong correlation between age and accuracy (r=0.67). CONCLUSIONS: Eye tracking facilitated a deeper insight into how expert annotators interpret the 12-lead ECG. As a result, the authors recommend that ECG annotators adopt an initial first impression/pattern recognition approach followed by a conventional systematic protocol for ECG interpretation. This recommendation is based on observed misdiagnoses that arose when interpretation relied on first impression alone. In summary, this research presents eye gaze results from expert ECG annotators and provides scope for future work that exploits computerized eye tracking technology to further the science of ECG interpretation.
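To make the fixation pipeline concrete: at 60 Hz a gaze sample arrives roughly every 16.7 ms, and samples must be grouped into fixations before dwell time per lead can be summed. The dispersion-threshold (I-DT) sketch below is a standard textbook approach, not necessarily the proprietary algorithm used by the Tobii software; the thresholds and the toy gaze trace are assumptions.

```python
# Dispersion-threshold (I-DT) fixation detection over 60 Hz gaze samples.
SAMPLE_MS = 1000 / 60          # ~16.7 ms between samples
MIN_DURATION_MS = 100          # assumed minimum fixation duration
MAX_DISPERSION_PX = 50         # assumed maximum dispersion

def detect_fixations(samples):
    """samples: list of (x, y) gaze points; returns (start_idx, end_idx, duration_ms) tuples."""
    fixations, start = [], 0
    min_len = int(MIN_DURATION_MS / SAMPLE_MS)
    while start + min_len <= len(samples):
        end = start + min_len
        xs, ys = zip(*samples[start:end])
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= MAX_DISPERSION_PX:
            # Grow the window while the dispersion stays under the threshold.
            while end < len(samples):
                xs, ys = zip(*samples[start:end + 1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > MAX_DISPERSION_PX:
                    break
                end += 1
            fixations.append((start, end, (end - start) * SAMPLE_MS))
            start = end
        else:
            start += 1
    return fixations

# Toy trace: a stable dwell on one lead followed by a saccade elsewhere.
trace = [(200 + i % 3, 300 + i % 2) for i in range(30)] + [(600, 500), (610, 510)]
print(detect_fixations(trace))  # one fixation of ~500 ms
```

Summing the durations of fixations whose coordinates fall inside each lead's on-screen region would then give per-lead dwell times like those listed above.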


Subject(s)
Arrhythmias, Cardiac/diagnosis , Artificial Intelligence , Electrocardiography/methods , Eye Movements/physiology , Fixation, Ocular/physiology , Visual Perception/physiology , Adult , Clinical Competence , Female , Humans , Male , Reading
9.
Surg Endosc ; 21(2): 220-4, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17200909

ABSTRACT

BACKGROUND: In the acquisition of new skills that are difficult to master, such as those required for laparoscopy, feedback is a crucial component of the learning experience. Optimally, feedback should accurately reflect the task performance to be improved and be proximal to the training experience. In surgery, however, feedback typically is given in vivo. The development of virtual reality training systems now offers new training options. This study investigated the effect of feedback type and quality on laparoscopic skills acquisition. METHODS: For this study, 32 laparoscopic novices were prospectively randomized into four training conditions, with 8 in each group. Group 1 (control) had no feedback. Group 2 (buzzer) had audio feedback when the edges were touched. Group 3 (voiced error) had an examiner voicing the word "error" each time the walls were touched. Group 4 (both) received both the audio buzzer and "error" voiced by the examiner. All the subjects performed a maze-tracking task with a laparoscopic stylus inserted through a 5-mm port to simulate the fulcrum effect in minimally invasive surgery (MIS). A computer connected to the stylus scored an error each time the edge of the maze was touched, and the subjects were made aware of the error in the aforementioned manner. Ten 2-min trials were performed by the subjects while viewing a monitor. At the conclusion of training, all the subjects completed a 2-min trial of a simple laparoscopic cutting task, with the number of correct and incorrect incisions recorded. RESULTS: Group 4 (both) made significantly more correct incisions than the other three groups (F = 12.13; df = 3, 28; p < 0.001), and also made significantly fewer errors or incorrect incisions (F = 14.4; p < 0.0001). Group 4 also made three times more correct incisions and 7.4 times fewer incorrect incisions than group 1 (control). CONCLUSIONS: The type and quality of feedback during psychomotor skill acquisition for MIS have a large effect on how strongly skills generalize to a simple MIS task and should be given serious consideration in curriculum design for surgical training using simulation tasks.
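A minimal sketch of the between-group analysis implied by the F statistics above: a one-way ANOVA comparing correct incisions across the four feedback conditions. The incision counts are fabricated for illustration.

```python
from scipy import stats

# Hypothetical correct-incision counts per subject in each feedback condition.
control      = [3, 4, 2, 5, 3, 4, 3, 2]
buzzer       = [5, 4, 6, 5, 4, 5, 6, 4]
voiced_error = [5, 6, 5, 4, 6, 5, 5, 6]
both         = [12, 11, 13, 12, 10, 13, 11, 12]

# One-way ANOVA across the four groups (df between = 3, df within = 28).
f_stat, p_value = stats.f_oneway(control, buzzer, voiced_error, both)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
```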


Subject(s)
Education, Medical, Undergraduate/methods , Feedback, Psychological , Laparoscopy , Minimally Invasive Surgical Procedures/education , Psychomotor Performance/physiology , Adult , Analysis of Variance , Clinical Competence , Computer Simulation , Educational Technology/methods , Female , Humans , Male , Probability , Prospective Studies , Students, Medical , User-Computer Interface
10.
Surg Endosc ; 21(1): 5-10, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17111280

ABSTRACT

BACKGROUND: The Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) has been well validated as a training device for laparoscopic skills. It has been demonstrated that training to a level of proficiency on the simulator significantly improves operating room performance of laparoscopic cholecystectomy. The purpose of this project was to obtain a national standard of proficiency on the MIST-VR based on the performance of experienced laparoscopic surgeons. METHODS: Surgeons attending the Society of American Gastrointestinal Endoscopic Surgeons (SAGES) 2004 Annual Scientific Meeting who had performed more than 100 laparoscopic procedures volunteered to participate. All the subjects completed a demographic questionnaire assessing laparoscopic and MIST-VR experience in the learning center of the SAGES 2004 meeting. Each subject performed two consecutive trials of the MIST-VR Core Skills 1 program at the medium setting. Each trial involved six basic tasks of increasing difficulty: acquire place (AP), transfer place (TP), traversal (TV), withdrawal insert (WI), diathermy task (DT), and manipulate diathermy (MD). Trial 1 was considered a "warm-up," and trial 2 functioned as the test trial proper. Subject performance was scored for time, errors, and economy of instrument movement for each task, and a cumulative total score was calculated. RESULTS: Trial 2 data are expressed as mean time in seconds in Table 2. CONCLUSION: Proficiency levels for laparoscopic skills have now been established on a national scale by experienced laparoscopic surgeons using the MIST-VR simulator. Residency programs, training centers, and practicing surgeons can now use these data as performance criteria during MIST-VR skills training.
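As a hedged illustration of how per-task measures might be rolled up into the cumulative total score mentioned above: the abstract does not give the MIST-VR scoring formula, so the equal-weight penalty below and all the task values are assumptions.

```python
# Illustrative trial-2 results for the six MIST-VR Core Skills 1 tasks.
tasks = {
    "AP": {"time_s": 32.0, "errors": 1, "economy": 1.4},
    "TP": {"time_s": 45.0, "errors": 2, "economy": 1.6},
    "TV": {"time_s": 51.0, "errors": 1, "economy": 1.5},
    "WI": {"time_s": 38.0, "errors": 0, "economy": 1.3},
    "DT": {"time_s": 60.0, "errors": 3, "economy": 1.8},
    "MD": {"time_s": 72.0, "errors": 2, "economy": 1.7},
}

def task_score(t):
    # Assumed penalty score (lower is better); NOT the simulator's own formula.
    return t["time_s"] + 10 * t["errors"] + 20 * (t["economy"] - 1.0)

total = sum(task_score(t) for t in tasks.values())
print(f"cumulative benchmark score: {total:.1f}")
```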


Subject(s)
Clinical Competence , Computer Simulation , Educational Measurement , Laparoscopy , Minimally Invasive Surgical Procedures/education , User-Computer Interface , Adult , Humans , Middle Aged , Surveys and Questionnaires
11.
Surg Innov ; 12(3): 233-7, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16224644

ABSTRACT

OBJECTIVE: Laparoscopic intracorporeal knot tying is a difficult skill to acquire. Currently, time to complete a knot is the most commonly used metric to assess the acquisition of this skill; however, without a measure of knot quality, time is a poor indicator of skills mastery. Others have shown that knot quality can be accurately assessed with a tensiometer, but obtaining this type of assessment has typically been cumbersome. We investigated a new method of real-time assessment of knot quality that allows for more practical use of knot quality as a performance metric. METHODS: Eleven experienced endoscopic surgeons tied 100 intracorporeal knots in a standard box trainer. Each knot was immediately tested using the InSpec 2200 benchtop tensiometer (INSTRON, Canton MA), which generates a knot quality score (KQS) based on the load-handling properties of the knotted suture. The execution time was also recorded for each knot. RESULTS: The assessment of each knot ended with one of two end points: the knot slipped (n=48) or the knot held until the suture broke (n=52). Knots that slipped were generally of poorer quality than those that held. Execution time did not correlate with knot quality score (r=0.009, P=.9), and the mean execution time did not differ significantly between slipped and held knots (65 vs 68 seconds, P=.8). No completion-time criterion was able to accurately predict slipped versus held knots. The mean KQS difference between held and slipped knots was highly significant (24 vs 12, P<.0001). A knot with a KQS exceeding 20 was nearly 10 times more likely to hold than slip. CONCLUSION: Time to complete a knot is a poor metric for the objective assessment of intracorporeal knot-tying performance in the absence of a measure of knot quality. Real-time evaluation of knot quality can accurately distinguish well-tied knots from poorly tied knots. This mode of assessment should be incorporated into training curricula for surgical knot tying.
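A short sketch of the threshold analysis behind the statement that a knot with a KQS above 20 was nearly ten times more likely to hold than slip; the knot records are invented, and the simple odds computation is one plausible reading of that result.

```python
# Hypothetical (kqs, held) records from tensiometer testing;
# held=True means the suture broke before the knot slipped.
knots = [(24, True), (28, True), (22, True), (19, False), (12, False),
         (25, True), (21, False), (30, True), (11, False), (23, True)]

above = [held for kqs, held in knots if kqs > 20]
held_n = sum(above)
slipped_n = len(above) - held_n
odds = held_n / slipped_n if slipped_n else float("inf")
print(f"knots with KQS > 20: {held_n} held vs {slipped_n} slipped (odds {odds:.1f}:1)")
```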


Subject(s)
Clinical Competence , Laparoscopy/methods , Suture Techniques , Biomechanical Phenomena , Evaluation Studies as Topic , Female , Humans , Male , Manometry , Sensitivity and Specificity , Sutures , Task Performance and Analysis , Tensile Strength , Time Factors
12.
Surg Endosc ; 19(9): 1227-31, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16025195

ABSTRACT

BACKGROUND: The use of simulation for minimally invasive surgery (MIS) skills training has many advantages over current traditional methods. One advantage of simulation is that it enables an objective assessment of technical performance. The purpose of this study was to determine whether the ProMIS augmented reality simulator could objectively distinguish between levels of performance on a complex laparoscopic suturing task. METHODS: Ten subjects (five laparoscopic experts and five laparoscopic novices) were assessed for baseline perceptual, visuo-spatial, and psychomotor abilities using validated tests. After three trials of a novel laparoscopic suturing task were performed on the simulator, measures for time, smoothness of movement, and path distance were analyzed for each trial. Accuracy and errors were evaluated separately by two blinded reviewers to an interrater reliability of >0.8. Comparisons of mean performance measures were made between the two groups using a Mann-Whitney U test. Internal consistency of ProMIS measures was assessed with coefficient alpha. RESULTS: The psychomotor performance of the experts was superior at baseline assessment (p < 0.001). On the laparoscopic suturing task, the experts performed significantly better than the novices across all three trials (p < 0.001). They performed the tasks between three and four times faster (p < 0.0001), had three times shorter instrument path length (p < 0.0001), and had four times greater smoothness of instrument movement (p < 0.009). Experts also showed greater consistency in their performance, as demonstrated by SDs across all measures that were four times smaller than those of the novice group. Observed internal consistency of ProMIS measures was high (alpha = 0.95, p < 0.00001). CONCLUSIONS: Preliminary construct validation of the ProMIS simulator shows that it can distinguish between experts and novices and has promising psychometric properties. An attractive feature of ProMIS is that a wide variety of MIS tasks can be used to train and assess technical skills.
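To illustrate the motion metrics named above, the sketch below computes path distance and a smoothness proxy from a sampled instrument-tip trajectory. The exact definitions used by ProMIS are not given in the abstract, so total segment length and a negative mean-squared-jerk measure are assumptions.

```python
import numpy as np

def path_length(points):
    """Total distance travelled by the instrument tip (sum of segment lengths)."""
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def smoothness(points, dt):
    """Negative mean squared jerk: values closer to zero indicate smoother motion."""
    jerk = np.diff(points, n=3, axis=0) / dt**3
    return -np.mean(np.sum(jerk**2, axis=1))

# Toy 3-D trajectory sampled at 30 Hz (metres).
t = np.linspace(0, 2, 60)
trajectory = np.column_stack([0.1 * np.sin(t), 0.1 * np.cos(t), 0.02 * t])
print(f"path length: {path_length(trajectory):.3f} m")
print(f"smoothness (negative mean squared jerk): {smoothness(trajectory, dt=1/30):.2f}")
```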


Subject(s)
Clinical Competence , Computer Simulation , Laparoscopy/standards , Suture Techniques/standards , Computers , Equipment Design , Laparoscopes
13.
Am Surg ; 71(1): 13-20; discussion 20-1, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15757051

ABSTRACT

Given the dynamic nature of modern surgical education, determining factors that may improve the efficiency of laparoscopic training is warranted. The objective of this study was to analyze whether perceptual, visuo-spatial, or psychomotor aptitude is related to the amount of training required to reach specific performance-based goals on a virtual reality surgical simulator. Sixteen fourth-year (MS4) medical students participated in an elective skills course intended to train laparoscopic skills. All were tested for perceptual, visuo-spatial, and psychomotor aptitude using previously validated psychological tests. Training involved as many instructor-guided 1-hour sessions as needed to reach performance goals on a custom-designed MIST-VR manipulation-diathermy task (Mentice AB, Gothenburg, Sweden). Thirteen subjects reached the performance goals by the end of the course. Two were excluded from analysis because of previous experience with the MIST-VR (total n = 11). Perceptual ability (r = -0.76, P = 0.007) and psychomotor skills (r = 0.62, P = 0.04) correlated significantly with the number of trials required. Visuo-spatial ability did not significantly correlate with training duration. The number of trials required to train subjects to the performance goals on the MIST-VR manipulation-diathermy task is significantly related to perceptual and psychomotor aptitude.
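A minimal sketch of the correlation analysis described above, pairing each trainee's aptitude scores with the number of training trials needed to reach the performance goals; the values are invented, and Pearson's r is assumed to be the statistic the authors used.

```python
from scipy import stats

# Hypothetical data for 11 trainees: aptitude scores and trials needed to reach the goals.
perceptual  = [0.82, 0.75, 0.91, 0.60, 0.88, 0.70, 0.79, 0.85, 0.66, 0.73, 0.95]
psychomotor = [55, 62, 48, 75, 50, 68, 60, 52, 72, 64, 45]  # e.g. task time, higher = worse
trials      = [6, 8, 4, 12, 5, 10, 7, 6, 11, 9, 3]

for name, scores in [("perceptual", perceptual), ("psychomotor", psychomotor)]:
    r, p = stats.pearsonr(scores, trials)
    print(f"{name} aptitude vs. trials to criterion: r = {r:.2f}, p = {p:.3f}")
```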


Subject(s)
Laparoscopy , Psychomotor Performance , Students, Medical/psychology , Surgical Procedures, Operative/education , User-Computer Interface , Adult , Aptitude , Clinical Competence , Computer Simulation , Education, Medical, Graduate/methods , Educational Measurement , Educational Technology/methods , Female , Humans , Male , Spatial Behavior , Time Factors
14.
Qual Saf Health Care ; 13 Suppl 1: i19-26, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15465950

ABSTRACT

The major determinant of a patient's safety and outcome is the skill and judgment of the surgeon. While knowledge base and decision processing are evaluated during residency, technical skills, which are at the core of the profession, are not evaluated. Innovative, state-of-the-art simulation devices that train both surgical tasks and skills, without risk to patients, should allow for the detection and analysis of errors and "near misses". Studies have validated the use of a sophisticated endoscopic sinus surgery simulator (ES3) for training residents on a procedural basis. Assessments are proceeding as to whether the integration of a comprehensive ES3 training programme into the residency curriculum will have long-term effects on surgical performance and patient outcomes. Across several otolaryngology residency programmes, subjects are exposed to mentored training on the ES3 as well as to minimally invasive trainers such as the MIST-VR. Technical errors are identified and quantified on the simulator and intraoperatively. Through a web-based database, individual performance can be compared against a national standard. An upgraded version of the ES3 will be developed which will support patient-specific anatomical models. This advance will allow study of the effects of simulated rehearsal of patient-specific procedures (mission rehearsal) on patient outcomes and surgical errors during the actual procedure. The information gained from these studies will help usher in the next generation of surgical simulators, which are anticipated to have a significant impact on patient safety.


Subject(s)
Computer-Assisted Instruction , Education, Medical/methods , Medical Errors/prevention & control , Patient Simulation , Quality Assurance, Health Care , Curriculum , Humans , Professional Competence , United States
15.
Surg Endosc ; 18(4): 592-5, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15026914

ABSTRACT

BACKGROUND: Determining a laparoscopic surgeon's ability is essential for training in error avoidance. The present study describes a practical method of surgical error analysis. METHODS: After review of practice videotapes of the excisional phase of laparoscopic cholecystectomy, consensus on the identification of eight errors was achieved. Interrater agreement at the end of this phase was 84-96%. Fourteen study videotapes of gallbladder excision were then observed independently by expert reviewers blinded to surgical team identity. Procedures were assessed using a scoring matrix of 1-min segments, with each error reported for each minute. RESULTS: Interrater agreement was 84-100% for all error categories. CONCLUSIONS: The present study demonstrates that excellent interrater agreement on procedural errors can be achieved by carefully defining targeted events and training reviewers to recognize them. Extension of this simple and reliable analysis tool to other procedures should be feasible to define behaviors leading to adverse clinical outcomes.
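A minimal sketch of the scoring matrix described above: each reviewer marks, for each 1-min segment, whether each defined error occurred, and agreement is the proportion of matching cells. The matrices here are invented for illustration.

```python
import numpy as np

# Rows = 1-min segments, columns = the eight defined error categories.
# 1 means the reviewer recorded that error in that segment (hypothetical data).
reviewer_1 = np.array([[0, 1, 0, 0, 0, 0, 1, 0],
                       [0, 0, 0, 1, 0, 0, 0, 0],
                       [1, 0, 0, 0, 0, 1, 0, 0]])
reviewer_2 = np.array([[0, 1, 0, 0, 0, 0, 1, 0],
                       [0, 0, 0, 1, 0, 0, 0, 1],
                       [1, 0, 0, 0, 0, 1, 0, 0]])

# Interrater agreement as the proportion of cells on which the reviewers match.
agreement = (reviewer_1 == reviewer_2).mean()
print(f"interrater agreement: {agreement:.0%}")
```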


Subject(s)
Cholecystectomy, Laparoscopic/statistics & numerical data , Medical Errors , Burns/etiology , Cholecystectomy, Laparoscopic/adverse effects , Clinical Competence , Dissection/adverse effects , Electrocoagulation/adverse effects , Feasibility Studies , General Surgery/education , Humans , Internship and Residency , Intraoperative Complications/etiology , Liver/injuries , Medical Errors/statistics & numerical data , Observer Variation , Reproducibility of Results , Retrospective Studies , Single-Blind Method , Videotape Recording
16.
Surg Endosc ; 18(4): 660-5, 2004 Apr.
Article in English | MEDLINE | ID: mdl-15026925

ABSTRACT

BACKGROUND: Increasing constraints on the time and resources needed to train surgeons have led to a new emphasis on finding innovative ways to teach surgical skills outside the operating room. Virtual reality training has been proposed as a method to both instruct surgical students and evaluate the psychomotor components of minimally invasive surgery ex vivo. METHODS: The performance of 100 laparoscopic novices was compared to that of 12 experienced (>50 minimally invasive procedures) and 12 inexperienced (<10 minimally invasive procedures) laparoscopic surgeons. The values of the experienced surgeons' performance were used as benchmark comparators (or criterion measures). Each subject completed six tasks on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR) three times. The outcome measures were time to complete the task, number of errors, economy of instrument movement, and economy of diathermy. RESULTS: After three trials, the mean performance of the medical students approached that of the experienced surgeons. However, 7-27% of the scores of the students fell more than two SD below the mean scores of the experienced surgeons (the criterion level). CONCLUSIONS: The MIST-VR system is capable of evaluating the psychomotor skills necessary in laparoscopic surgery and discriminating between experts and novices. Furthermore, although some novices improved their skills quickly, a subset had difficulty acquiring the psychomotor skills. The MIST-VR may be useful in identifying that subset of novices.
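A short sketch of the criterion check described above: a trainee's score is flagged if it falls more than two standard deviations below the experienced surgeons' mean. The scores are illustrative, and "higher is better" is an assumption.

```python
import numpy as np

# Hypothetical composite MIST-VR scores (higher is better).
expert_scores = np.array([82, 88, 79, 85, 90, 84, 86, 81, 87, 83, 89, 80])
novice_scores = np.array([78, 80, 81, 69, 79, 84, 62, 83])

criterion = expert_scores.mean() - 2 * expert_scores.std(ddof=1)
below = novice_scores < criterion
print(f"criterion level (expert mean - 2 SD): {criterion:.1f}")
print(f"{below.sum()} of {len(novice_scores)} novices fell below the criterion")
```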


Subject(s)
Clinical Competence , Computer Simulation , Minimally Invasive Surgical Procedures/education , Models, Anatomic , User-Computer Interface , Adult , Benchmarking , Diathermy , Female , Humans , Male , Middle Aged , Minimally Invasive Surgical Procedures/instrumentation , Physicians/psychology , Psychomotor Performance , Students/psychology , Students, Medical/psychology , Task Performance and Analysis
19.
Surg Endosc ; 17(9): 1468-71, 2003 Sep.
Article in English | MEDLINE | ID: mdl-12802664

ABSTRACT

BACKGROUND: Laparoscopic surgery requires surgeons to infer the shape of 3-D structures, such as the internal organs of patients, from 2-D displays on a video monitor. Recent evidence indicates that the issue is not resolved by the use of contemporary 3-D camera systems. It is therefore crucial to find ways of measuring differences in aptitude for recovering 3-D structure from 2-D images, and of assessing their impact on performance. Our aim was to test empirically for a relationship between laparoscopic ability and the perceptual skill of recovering information about 3-D structures from 2-D monitor displays. METHODS: Participants in three studies completed a simulated laparoscopic cutting task as well as the Pictorial Surface Orientation (PicSOr) test. In studies 1 (n = 48) and 2 (n = 32) all participants were laparoscopic novices, and in study 3 (n = 34) 18 of the participants were experienced laparoscopic surgeons. FINDINGS: All three studies showed that PicSOr consistently predicted participants' performance on the laparoscopic cutting task (study 1, r = 0.5, p < 0.0003; study 2, r = 0.5, p < 0.004; and study 3, r = 0.42, p = 0.017). Furthermore, it was also a significant predictor of the experienced laparoscopic surgeons' performance (r = 0.54, p = 0.047). INTERPRETATIONS: This is the first objective perceptual psychometric test shown to reliably predict laparoscopic technical skills. PicSOr provides a tool for assessing which trainees have the potential to learn minimal access surgery.
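As a rough illustration of the predictive relationship reported above, the sketch below correlates PicSOr scores with cutting-task performance for a small invented sample; Pearson's r is assumed to match the authors' statistic.

```python
from scipy import stats

# Hypothetical paired scores: PicSOr test score vs. laparoscopic cutting-task performance.
picsor  = [0.91, 0.85, 0.78, 0.66, 0.95, 0.72, 0.88, 0.60]
cutting = [42, 39, 35, 28, 45, 33, 40, 25]

r, p = stats.pearsonr(picsor, cutting)
print(f"PicSOr vs. cutting-task performance: r = {r:.2f}, p = {p:.4f}")
```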


Subject(s)
Clinical Competence , Depth Perception , Laparoscopy , Man-Machine Systems , Neuropsychological Tests , Physicians/psychology , Psychomotor Performance , Adult , Cholecystectomy, Laparoscopic , Data Display , Female , Functional Laterality , Humans , Male , Models, Anatomic
20.
Surg Endosc ; 16(12): 1746-52, 2002 Dec.
Article in English | MEDLINE | ID: mdl-12140641

ABSTRACT

BACKGROUND: The objective assessment of the psychomotor skills of surgeons is now a priority; however, this is a difficult task because of the measurement difficulties associated with assessing surgery in vivo. In this study, virtual reality (VR) was used to overcome these problems. METHODS: Twelve experienced laparoscopic surgeons (>50 minimal-access procedures), 12 inexperienced laparoscopic surgeons (<10 minimal-access procedures), and 12 laparoscopic novices participated in the study. Each subject completed 10 trials on the Minimally Invasive Surgical Trainer-Virtual Reality (MIST-VR). RESULTS: Experienced laparoscopic surgeons performed the tasks significantly (p < 0.01) faster, with less error, more economy in the movement of instruments and the use of diathermy, and with greater consistency in performance. The standardized coefficient alpha for the performance measures ranged from α = 0.89 to 0.98, showing high internal measurement consistency. Test-retest reliability ranged from r = 0.5 to r = 0.96. CONCLUSION: VR is a useful tool for evaluating the psychomotor skills needed to perform laparoscopic surgery.
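To make the internal-consistency figure concrete, the sketch below computes a standardized Cronbach's alpha across several MIST-VR performance measures; the score matrix is invented, and this generic formula is not necessarily the authors' exact computation.

```python
import numpy as np

def standardized_alpha(scores):
    """Standardized Cronbach's alpha: z-score each measure, then apply the usual formula."""
    x = np.asarray(scores, dtype=float)
    z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=1)
    k = z.shape[1]
    item_vars = z.var(axis=0, ddof=1)        # all equal to 1 after standardization
    total_var = z.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical matrix: 6 subjects x 4 measures (time, errors, economy of movement, diathermy use).
scores = [[60, 3, 12.1, 2.0],
          [55, 2, 11.4, 1.8],
          [90, 8, 19.6, 4.1],
          [85, 7, 18.2, 3.9],
          [70, 5, 15.0, 3.0],
          [65, 4, 13.3, 2.4]]
print(f"standardized coefficient alpha = {standardized_alpha(scores):.2f}")
```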


Subject(s)
Clinical Competence , Laparoscopy/methods , Learning , Minimally Invasive Surgical Procedures/education , Psychomotor Performance , User-Computer Interface , Adult , Clinical Competence/statistics & numerical data , Equipment and Supplies , Humans , Laparoscopy/statistics & numerical data , Middle Aged , Minimally Invasive Surgical Procedures/statistics & numerical data , Reference Standards , Statistics, Nonparametric