Results 1 - 4 of 4
1.
J Appl Clin Med Phys ; 24(8): e13995, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37073484

ABSTRACT

PURPOSE: Hazard scenarios were created to assess and reduce the risk of planning errors in automated planning workflows. This was accomplished through iterative testing and improvement of the user interfaces under examination.

METHODS: Automated planning requires three user inputs: a computed tomography (CT) scan, a prescription document known as the service request, and contours. Guided by a failure modes and effects analysis (FMEA), we investigated the ability of users to catch errors that were intentionally introduced at each of these three stages. Five radiation therapists each reviewed 15 patient CTs containing three errors: inappropriate field of view, incorrect superior border, and incorrect identification of isocenter. Four radiation oncology residents reviewed 10 service requests containing two errors: incorrect prescription and incorrect treatment site. Four physicists reviewed 10 contour sets containing two errors: missing contour slices and an inaccurate target contour. Reviewers underwent video training before reviewing and providing feedback on the various mock plans.

RESULTS: Initially, 75% of hazard scenarios were detected at the service request approval step. The visual display of prescription information was then updated, based on user feedback, to make errors easier to detect; the change was validated with five new radiation oncology residents, who detected 100% of the errors present. In the CT approval portion of the workflow, 83% of the hazard scenarios were detected. In the contour approval portion of the workflow, none of the errors were detected by physicists, indicating that this step will not be used for quality assurance of contours. To mitigate the risk of errors that could occur at this step, radiation oncologists must perform a thorough review of contour quality prior to final plan approval.

CONCLUSIONS: Hazard testing was used to pinpoint the weaknesses of an automated planning tool, and subsequent improvements were made as a result. This study identified that not all workflow steps should be used for quality assurance and demonstrated the importance of performing hazard testing to identify points of risk in automated planning tools.
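The per-step results above reduce to a simple proportion: seeded errors caught out of seeded errors introduced. A minimal sketch of that computation, using hypothetical tallies chosen only for illustration (these are not the study's raw counts):

```python
def detection_rate(detected: int, introduced: int) -> float:
    """Percentage of deliberately seeded errors that reviewers caught."""
    if introduced <= 0:
        raise ValueError("at least one error must be introduced")
    return 100.0 * detected / introduced

# Hypothetical (detected, introduced) tallies per workflow step:
step_tallies = {
    "service request approval": (15, 20),
    "CT approval": (25, 30),
    "contour approval": (0, 8),
}
rates = {step: detection_rate(d, n) for step, (d, n) in step_tallies.items()}
```

A 0% rate at a review step, as reported for contour approval, is what justifies excluding that step from quality assurance rather than relying on it.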


Subject(s)
Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed; Humans; Radiotherapy Planning, Computer-Assisted/methods; Tomography, X-Ray Computed/methods
2.
J Appl Clin Med Phys ; 23(8): e13704, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35791594

ABSTRACT

PURPOSE: Knowledge-based planning (KBP) has been shown to be an effective tool for quality control of intensity-modulated radiation therapy treatment planning and for generating high-quality plans. Previous studies have evaluated its ability to create consistent plans across institutions and between planners within the same institution, as well as its use as a teaching tool for inexperienced planners. This study evaluates whether plan quality is consistent when a KBP model is used to plan across different treatment machines.

MATERIALS AND METHODS: This study used a RapidPlan model (Varian Medical Systems) provided by the vendor, to which we added planning objectives, maximum dose limits, and planning structures such that a clinically acceptable plan is achieved in a single optimization. This model was used to generate and optimize volumetric-modulated arc therapy plans for a cohort of 50 patients treated for head and neck cancer. Plans were generated on the following treatment machines: Varian 2100, Elekta Versa HD, and Varian Halcyon. A noninferiority testing methodology was used to evaluate the hypothesis that normal-tissue and target metrics in our autoplans were no worse than those of a set of clinically acceptable baseline plans, by a margin of 1.8 Gy or 3% dose-volume. The quality of these plans was also compared using common clinical dose-volume histogram criteria.

RESULTS: The Versa HD met our noninferiority criteria for 23 of 34 normal-tissue and target metrics, while the Halcyon and Varian 2100 machines met our criteria for 24 of 34 and 26 of 34 metrics, respectively. The experimental plans tended to have less prescription-dose coverage of the planning target volume and larger hotspot volumes. However, comparable plans were generated across the different treatment machines.

CONCLUSIONS: These results support the use of a head-and-neck RapidPlan model in centralized planning workflows that serve clinics with different linac models and vendors, although some fine-tuning for targets may be necessary.
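The per-metric comparison above can be framed as a one-sided paired noninferiority test against the stated margin. A minimal sketch for a dose metric where lower is better, with made-up plan values; the function name and data are hypothetical, and the study's actual statistical procedure may differ in detail:

```python
from math import sqrt
from statistics import mean, stdev

def noninferiority_t(auto: list, base: list, margin: float) -> float:
    """Paired one-sided noninferiority t statistic.

    For a metric where lower is better, tests H0: mean(auto - base) >= margin
    against H1: mean(auto - base) < margin. A t statistic below the one-sided
    critical value for n - 1 degrees of freedom supports noninferiority,
    i.e. the autoplans are no worse than baseline by more than `margin`.
    """
    diffs = [a - b for a, b in zip(auto, base)]
    n = len(diffs)
    se = stdev(diffs) / sqrt(n)  # standard error of the mean difference
    return (mean(diffs) - margin) / se

# Hypothetical organ-at-risk mean doses (Gy) for five plans, 1.8 Gy margin:
auto = [20.1, 19.5, 21.0, 18.9, 20.4]
base = [20.0, 19.8, 20.5, 19.2, 20.1]
t = noninferiority_t(auto, base, margin=1.8)
```

Here t falls far below the one-sided 5% critical value for 4 degrees of freedom (about -2.13), so these invented autoplan doses would meet the 1.8 Gy margin.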


Subject(s)
Head and Neck Neoplasms; Radiotherapy, Intensity-Modulated; Head and Neck Neoplasms/radiotherapy; Humans; Knowledge Bases; Organs at Risk; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
3.
J Appl Clin Med Phys ; 23(9): e13694, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35775105

ABSTRACT

PURPOSE: To develop a checklist that improves the rate of error detection during plan review of automatically generated radiotherapy plans.

METHODS: A custom checklist was developed using guidance from American Association of Physicists in Medicine Task Groups 275 and 315 and the results of a failure modes and effects analysis of the Radiation Planning Assistant (RPA), an automated contouring and treatment planning tool. The preliminary checklist contained 90 review items for each automatically generated plan. In the first study, eight physicists familiar with the RPA were recruited from our institution. Each physicist reviewed 10 artificial intelligence-generated treatment plans from the RPA for safety and plan quality, five of which contained errors. Physicists performed plan checks, recorded errors, and rated each plan's clinical acceptability. Following a 2-week break, the physicists reviewed 10 additional plans with a similar distribution of errors, this time using our customized checklist. Participants then provided feedback on the usability of the checklist, and it was modified accordingly. In a second study, this process was repeated with 14 senior medical physics residents who were randomly assigned to use the checklist or no checklist for their reviews. Each resident reviewed 10 plans, five of which contained errors, and completed the corresponding survey.

RESULTS: In the first study, the checklist significantly improved error detection, from 3.4 ± 1.1 to 4.4 ± 0.74 errors per participant without and with the checklist, respectively (p = 0.02); error detection increased by 20% when the custom checklist was used. In the second study, 2.9 ± 0.84 and 3.5 ± 0.84 errors per participant were detected without and with the revised checklist, respectively (p = 0.08). Although the difference was not statistically significant for this cohort, error detection increased by 18% when the checklist was used.

CONCLUSION: Our results indicate that the use of a customized checklist when reviewing automated treatment plans will result in improved patient safety.
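The reported 20% gain is a difference in mean detection percentage, with five seeded errors per review set. A small sketch of that arithmetic, using hypothetical per-physicist tallies chosen to land near the abstract's 3.4 and 4.4 errors per participant (the study's raw counts are not given):

```python
def mean_detection_pct(errors_found: list, errors_present: int) -> float:
    """Mean fraction of seeded errors found across participants, as a percentage."""
    mean_found = sum(errors_found) / len(errors_found)
    return 100.0 * mean_found / errors_present

# Hypothetical per-physicist error counts out of 5 seeded errors:
without_checklist = mean_detection_pct([3, 4, 3, 4, 3, 3, 4, 3], errors_present=5)
with_checklist = mean_detection_pct([4, 5, 4, 5, 4, 4, 5, 4], errors_present=5)
improvement = with_checklist - without_checklist  # percentage points
```

With these invented tallies the means are 3.375 and 4.375 errors per participant, giving a 20-percentage-point improvement, comparable to the first cohort's result.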


Subject(s)
Radiotherapy Planning, Computer-Assisted; Radiotherapy, Intensity-Modulated; Artificial Intelligence; Checklist; Humans; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Radiotherapy, Intensity-Modulated/methods
4.
Pract Radiat Oncol ; 12(4): e344-e353, 2022.
Article in English | MEDLINE | ID: mdl-35305941

ABSTRACT

PURPOSE: In this study, we applied the failure mode and effects analysis (FMEA) approach to an automated radiation therapy contouring and treatment planning tool to assess, and subsequently limit, the risk of deploying automated tools.

METHODS AND MATERIALS: Using an FMEA, we quantified the risks associated with the Radiation Planning Assistant (RPA), an automated contouring and treatment planning tool currently under development. A multidisciplinary team identified and scored each failure mode, using a combination of RPA plan data and experience for guidance. Following American Association of Physicists in Medicine Task Group 100 recommendations, a 1-to-10 scale was used for the severity, occurrence, and detectability of potential errors. High-risk failure modes were further explored to determine how the workflow could be improved to reduce the associated risk.

RESULTS: Of 290 possible failure modes, we identified 126 errors unique to the RPA workflow, with a mean risk priority number (RPN) of 56.3 and a maximum RPN of 486. The top 10 failure modes were caused by automation bias, operator error, and software error. Twenty-one failure modes were above the action threshold of RPN = 125, leading to corrective actions: the workflow was modified to simplify the user interface, and better training resources were developed that highlight the importance of thoroughly reviewing the output of automated systems. After these changes, we rescored the high-risk errors, resulting in a final mean and maximum RPN of 33.7 and 288, respectively.

CONCLUSIONS: We identified 126 errors specific to the automated workflow, most of which were caused by automation bias or operator error, emphasizing the need to simplify the user interface and ensure adequate user training. As a result of the changes made to the software and the enhancement of training resources, the RPNs decreased, showing that FMEA is an effective way to assess and reduce the risk associated with deploying automated planning tools.
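The RPN scoring described above is simply the product of the three 1-to-10 scores, compared against the action threshold of 125. A minimal sketch; the failure-mode names and scores below are invented for illustration (9 × 6 × 9 = 486 happens to match the reported maximum, but is not the study's actual scoring):

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One FMEA entry, with severity, occurrence, and detectability each scored 1-10."""
    name: str
    severity: int
    occurrence: int
    detectability: int

    @property
    def rpn(self) -> int:
        # Risk priority number: product of the three scores (1 to 1000)
        return self.severity * self.occurrence * self.detectability

ACTION_THRESHOLD = 125  # modes scoring above this trigger corrective action

# Hypothetical failure modes in the spirit of the abstract's top causes:
modes = [
    FailureMode("automation bias: unreviewed contour accepted", 9, 6, 9),
    FailureMode("operator error: wrong CT selected", 8, 3, 4),
]
needs_action = [m.name for m in modes if m.rpn > ACTION_THRESHOLD]
```

Rescoring after a mitigation (e.g. lowering occurrence or detectability through interface changes and training) shrinks the RPN multiplicatively, which is how the mean dropped from 56.3 to 33.7 in the study.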


Subject(s)
Healthcare Failure Mode and Effect Analysis; Automation; Humans; Software