Results 1 - 4 of 4
1.
World J Urol ; 41(12): 3745-3751, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37882808

ABSTRACT

BACKGROUND: Feedback is important for surgical trainees, but it can be biased and time-consuming. We examined crowd-sourced assessment as an alternative to assessment of robot-assisted radical prostatectomy (RARP) by experienced surgeons.
METHODS: In a blinded comparative assessment study, we used video recordings (n = 45) of three RARP modules on the RobotiX (Simbionix) simulator from a previous study. A group of crowd workers (CWs) and two experienced RARP surgeons (ESs) evaluated all videos with the modified Global Evaluative Assessment of Robotic Surgery (mGEARS).
RESULTS: One hundred forty-nine CWs performed 1490 video ratings. Internal consistency reliability was high (0.94). Inter-rater reliability and test-retest reliability were low for CWs (0.29 and 0.39) and moderate for ESs (0.61 and 0.68). In analysis of variance (ANOVA) tests, CWs could not discriminate between the surgeons' skill levels (p = 0.03-0.89), whereas ESs could (p = 0.034).
CONCLUSION: We found very low agreement between the assessments of CWs and ESs of robot-assisted radical prostatectomies. Unlike ESs, CWs could not discriminate between levels of surgical experience, either by their mGEARS ratings or when asked whether they would want the surgeons to perform their own robotic surgery.
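
The abstract reports internal consistency reliability of the mGEARS ratings as Cronbach's alpha. The sketch below is illustrative only and is not the authors' code; the item count, rating scale, and data are hypothetical placeholders.

```python
# Minimal sketch (not from the cited study): Cronbach's alpha for the internal
# consistency of item ratings such as mGEARS. Array shape and values are
# hypothetical; the study reports alpha = 0.94 for crowd-worker ratings.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: 2-D array, rows = rated videos, columns = rating items."""
    k = ratings.shape[1]                          # number of items
    item_var = ratings.var(axis=0, ddof=1)        # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of total scores
    return k / (k - 1) * (1 - item_var.sum() / total_var)

# Hypothetical example: 10 videos rated on 5 items (1-5 scale)
rng = np.random.default_rng(0)
example = rng.integers(1, 6, size=(10, 5)).astype(float)
print(f"Cronbach's alpha = {cronbach_alpha(example):.2f}")
```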


Subject(s)
Robotic Surgical Procedures , Robotics , Surgeons , Male , Humans , Reproducibility of Results , Prostatectomy
2.
J Endourol ; 35(8): 1265-1272, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33530867

ABSTRACT

Purpose: To investigate validity evidence for a simulator-based test in robot-assisted radical prostatectomy (RARP).
Materials and Methods: The test consisted of three modules on the RobotiX Mentor VR simulator: Bladder Neck Dissection, Neurovascular Bundle Dissection, and Ureterovesical Anastomosis. Validity evidence was investigated using Messick's framework, including doctors with different levels of RARP experience: novices (who had assisted in RARP), intermediates (robotic surgeons, but not RARP surgeons), and experienced surgeons (RARP surgeons). The simulator metrics were analyzed, and Cronbach's alpha and generalizability theory were used to explore reliability. Intergroup comparisons were made with mixed-model, repeated-measurement analysis of variance, and the correlation between the number of robotic procedures and the mean test score was examined. A pass/fail score was established using the contrasting groups' method.
Results: Ten novices, 11 intermediates, and 6 experienced RARP surgeons were included. Six metrics could discriminate between groups and showed acceptable internal consistency reliability (Cronbach's alpha = 0.49, p < 0.001). Test-retest reliability was 0.75, 0.85, and 0.90 for one, two, and three repetitions of the test, respectively. The six metrics were combined into a simulator score that could discriminate between all three groups (p = 0.002, p < 0.001, and p = 0.029 for novices vs intermediates, novices vs experienced, and intermediates vs experienced, respectively). The total number of robotic operations and the mean score of the three repetitions were significantly correlated (Pearson's r = 0.74, p < 0.001).
Conclusion: This study provides validity evidence for a simulator-based test in RARP. We determined a pass/fail level that can be used to ensure competency before proceeding to supervised clinical training.
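
The pass/fail score here was set with the contrasting groups' method, in which the cutoff is placed where the score distributions of the non-competent and competent groups cross. The sketch below is one common way to approximate that crossing under a normality assumption; it is not the authors' implementation, and all score values are hypothetical.

```python
# Minimal sketch (not the authors' code): contrasting groups' method, locating
# the point where normal densities fitted to the novice and experienced groups
# intersect. Score values below are hypothetical.
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def contrasting_groups_cutoff(novice_scores, expert_scores, grid_size=10_000):
    """Return the score between the group means where the fitted densities cross."""
    mu_n, sd_n = np.mean(novice_scores), np.std(novice_scores, ddof=1)
    mu_e, sd_e = np.mean(expert_scores), np.std(expert_scores, ddof=1)
    grid = np.linspace(min(mu_n, mu_e), max(mu_n, mu_e), grid_size)
    diff = normal_pdf(grid, mu_n, sd_n) - normal_pdf(grid, mu_e, sd_e)
    return grid[np.argmin(np.abs(diff))]   # grid point closest to the crossing

# Hypothetical simulator scores
novices = [41, 47, 52, 44, 50, 39, 55, 48, 46, 43]
experts = [68, 74, 71, 79, 66, 72]
print(f"Pass/fail score ~ {contrasting_groups_cutoff(novices, experts):.1f}")
```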


Subject(s)
Robotic Surgical Procedures , Robotics , Virtual Reality , Clinical Competence , Humans , Male , Prostatectomy , Reproducibility of Results
3.
Scand J Urol ; 53(5): 319-324, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31538510

ABSTRACT

Objectives: A prospective observational trial to develop and gather validity evidence, using Messick's framework, for a simulator-based test in transurethral resection of bladder tumours (TURB).
Methods: Forty-nine doctors were recruited from urology departments (Herlev/Gentofte University Hospital, Rigshospitalet Copenhagen University Hospital, and Zealand University Hospital Roskilde) and enrolled from April to September 2018. The TURB Mentor™ virtual reality (VR) simulator was assessed at an expert meeting, where clinically relevant cases and metrics were selected. Test sessions were run on identical simulators at two university hospitals in Denmark. All participants performed three TURB procedures on the VR simulator. Simulator metrics were analysed with analysis of variance (ANOVA), and metrics able to discriminate between groups were combined into a total simulator score. Finally, a pass/fail score was identified using the contrasting groups' method.
Results: Eleven simulator metrics were found eligible, and four discriminated significantly between competency levels: resected pathology (%) (p = 0.008); cutting in the bladder wall (n) (p = 0.004); time (s) (p = 0.034); and inspection of the bladder wall (%) (p = 0.002). The internal structure of the total simulator score [(resected pathology × inspection of the bladder wall)/time] was high (intraclass correlation coefficient, Cronbach's alpha = 0.85). The mean total simulator score was significantly lower in the novice group (15.9) than in the intermediate group (25.6; mean difference = 9.7, p = 0.011) and the experienced group (30.6; mean difference = 14.7, p < 0.001). A pass/fail score of 22 was identified.
Conclusion: We found validity evidence for a newly developed VR simulator-based test and established a pass/fail score identifying surgical skills in TURB. The TURBEST test can be used in a proficiency-based TURB simulator-training programme for accreditation prior to supervised procedures on patients.
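
The abstract gives the composite formula for the total simulator score, (resected pathology × inspection of the bladder wall)/time, and a pass/fail cutoff of 22. The sketch below only illustrates applying that formula; the metric values, field names, and unit scaling are assumptions, not data from the study.

```python
# Minimal sketch (illustrative only): the total simulator score from the
# abstract, (resected pathology x inspection of the bladder wall) / time,
# checked against the reported pass/fail score of 22. Metric values are
# hypothetical and the unit scaling is an assumption.
from dataclasses import dataclass

PASS_FAIL_SCORE = 22  # cutoff reported in the abstract

@dataclass
class TurbMetrics:
    resected_pathology_pct: float   # % of pathology resected
    inspected_wall_pct: float       # % of bladder wall inspected
    time_s: float                   # procedure time in seconds

def total_simulator_score(m: TurbMetrics) -> float:
    return (m.resected_pathology_pct * m.inspected_wall_pct) / m.time_s

attempt = TurbMetrics(resected_pathology_pct=95.0, inspected_wall_pct=80.0, time_s=300.0)
score = total_simulator_score(attempt)
print(f"score = {score:.1f}, pass = {score >= PASS_FAIL_SCORE}")
```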


Subject(s)
Clinical Competence , Cystectomy/education , Simulation Training , Urinary Bladder Neoplasms/surgery , Adult , Cystectomy/methods , Female , Humans , Male , Middle Aged , Prospective Studies , Urethra , Virtual Reality
4.
JMIR Mhealth Uhealth ; 3(3): e79, 2015 Jul 27.
Article in English | MEDLINE | ID: mdl-26215371

ABSTRACT

BACKGROUND: Both clinicians and patients use medical mobile phone apps. Anyone can publish a medical app, which leads to content of variable quality that may have a serious impact on human lives. We herein provide an overview of the prevalence of expert involvement in app development and of whether app content adheres to current medical evidence.
OBJECTIVE: To systematically review studies evaluating expert involvement or adherence of app content to medical evidence in medical mobile phone apps.
METHODS: We systematically searched 3 databases (PubMed, The Cochrane Library, and EMBASE) and included studies evaluating expert involvement or adherence of app content to medical evidence in medical mobile phone apps. Two authors performed data extraction independently. A qualitative analysis of the included studies was performed.
RESULTS: Based on the inclusion criteria, 52 studies were included in this review, assessing a total of 6520 apps across a variety of medical specialties and topics. Twenty-eight studies assessed expert involvement, which was found in 9-67% of the assessed apps. Thirty studies (including 6 that also assessed expert involvement) assessed adherence of app content to current medical evidence. Thirteen studies found that 10-87% of the assessed apps adhered fully to the compared evidence (published studies, recommendations, and guidelines); 17 studies found that none of the assessed apps (n = 2237) adhered fully to the compared evidence.
CONCLUSIONS: Most medical mobile phone apps lack expert involvement and do not adhere to relevant medical evidence.
