Results 1 - 6 of 6
1.
J Appl Clin Med Phys ; 25(5): e14313, 2024 May.
Article in English | MEDLINE | ID: mdl-38650177

ABSTRACT

BACKGROUND: This study uses interviews of clinical medical physicists to investigate self-reported shortcomings of the current weekly chart check workflow and opportunities for improvement.

METHODS: Nineteen medical physicists were recruited for a 30-minute semi-structured interview, with a particular focus on image review and the use of automated tools for image review in weekly checks. Survey-type questions were used to gather quantitative information about chart check practices and the importance placed on reducing chart check workloads versus increasing chart check effectiveness. Open-ended questions were used to probe respondents about their current weekly chart check workflow, their opinions on the value of weekly chart checks and perceived shortcomings, and barriers and facilitators to the implementation of automated chart check tools. Thematic analysis was used to develop common themes across the interviews.

RESULTS: Physicists placed high value on reducing the time spent on weekly chart checks (average rating 6.3 on a 1-10 scale), but placed even more value on increasing the effectiveness of checks (average 9.2 on the same scale). Four major themes were identified: (1) weekly chart checks need to adapt to an electronic record-and-verify chart environment, (2) physicists could add value to patient care by analyzing images without duplicating the work done by physicians, (3) greater support for trending analysis is needed in weekly checks, and (4) automation has the potential to increase the value of physics checks.

CONCLUSION: This study identified several key shortcomings of the current weekly chart check process from the perspective of the clinical medical physicist. Our results show strong support for automating components of the weekly check workflow to allow for more effective checks that emphasize follow-up, trending, and failure modes and effects analysis, and to free time for other higher-value tasks that improve patient safety.


Subject(s)
Workflow , Humans , Health Physics , Surveys and Questionnaires , Image Processing, Computer-Assisted/methods , Automation , Quality Assurance, Health Care/standards , Interviews as Topic/methods
2.
Article in English | MEDLINE | ID: mdl-38485098

ABSTRACT

PURPOSE: Present knowledge of patient setup and alignment errors in image guided radiation therapy (IGRT) relies on voluntary reporting, which is thought to underestimate error frequencies. A manual retrospective search for patient-setup misalignment errors is infeasible owing to the volume of cases to be reviewed. We applied a deep learning-based misalignment error detection algorithm (EDA) to perform a fully automated retrospective error search of clinical IGRT databases and determine an absolute gross patient misalignment error rate.

METHODS AND MATERIALS: The EDA was developed to analyze the registration between planning scans and pretreatment cone beam computed tomography (CBCT) scans, outputting a misalignment score ranging from 0 (most unlikely) to 1 (most likely). The algorithm was trained using simulated translational errors on a dataset obtained from 680 patients treated at 2 radiation therapy clinics between 2017 and 2022. A receiver operating characteristic (ROC) analysis was performed to obtain target thresholds. DICOM Query and Retrieve software was integrated with the EDA to interact with the clinical database and fully automate data retrieval and analysis during a retrospective error search covering 2016 to 2017 and 2021 to 2022 for the 2 institutions, respectively. Registrations were flagged for human review using both a hard-thresholding method and a prediction trending analysis over each individual patient's treatment course. Flagged registrations were manually reviewed and categorized as errors (>1 cm misalignment at the target) or nonerrors.

RESULTS: A total of 17,612 registrations were analyzed by the EDA, with 7.7% flagged for review. Three previously reported errors were successfully flagged by the EDA, and 4 previously unreported vertebral-body misalignment errors were discovered during case reviews. False positive cases often displayed substantial image artifacts, patient rotation, or soft tissue anatomy changes.

CONCLUSIONS: Our results validated the clinical utility of the EDA for bulk image review and highlighted the reliability and safety of IGRT, with an absolute gross patient misalignment error rate of 0.04% ± 0.02% per delivered fraction.
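The flagging approach described above pairs a hard score threshold with a per-patient prediction trending check. A minimal Python sketch of that two-stage logic follows; the record format, threshold, and trend margin are illustrative assumptions, not the study's ROC-derived operating points.

```python
from collections import defaultdict

# Hypothetical per-fraction records: (patient_id, fraction_number, eda_score).
# The threshold values below are illustrative placeholders only.
HARD_THRESHOLD = 0.9   # flag any single registration scoring above this
TREND_DELTA = 0.3      # flag a jump relative to the patient's running mean

def flag_registrations(records):
    """Return the set of (patient_id, fraction) pairs flagged for human review."""
    flagged = set()
    history = defaultdict(list)  # per-patient score history, in treatment order

    for patient_id, fraction, score in sorted(records, key=lambda r: (r[0], r[1])):
        past = history[patient_id]
        # Stage 1: hard threshold on the individual registration score.
        if score >= HARD_THRESHOLD:
            flagged.add((patient_id, fraction))
        # Stage 2: trending analysis against the patient's own baseline.
        elif past and score - sum(past) / len(past) >= TREND_DELTA:
            flagged.add((patient_id, fraction))
        past.append(score)

    return flagged

if __name__ == "__main__":
    demo = [("pt01", 1, 0.05), ("pt01", 2, 0.08), ("pt01", 3, 0.55),
            ("pt02", 1, 0.95)]
    print(flag_registrations(demo))  # {('pt01', 3), ('pt02', 1)}
```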

3.
J Appl Clin Med Phys ; 24(9): e14016, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37165761

ABSTRACT

PURPOSE: Automation and computer assistance can support quality assurance tasks in radiotherapy. Retrospective image review requires significant human resources, and automation of image review remains a notable gap in previous work. Here, we present initial findings from a proof-of-concept clinical implementation of an AI-assisted review of CBCT registrations used for patient setup.

METHODS: An automated pipeline was developed and executed nightly, using Python scripts to interact with the clinical database through the DICOM networking protocol and automate data retrieval and analysis. A previously developed artificial intelligence (AI) algorithm scored CBCT setup registrations based on misalignment likelihood, on a scale from 0 (most unlikely) to 1 (most likely). Over a 45-day period, 1357 pre-treatment CBCT registrations from 197 patients were retrieved and analyzed by the pipeline. Daily summary reports of the previous day's registrations were produced. Initial action levels targeted 10% of cases to highlight for in-depth physics review. A validation subset of 100 cases was scored by three independent observers to characterize AI model performance.

RESULTS: Following an ROC analysis, a global threshold for model predictions of 0.87 was determined, with a sensitivity of 100% and a specificity of 82%. Inspection of the observer scores for the stratified validation dataset showed a statistically significant correlation between observer scores and model predictions.

CONCLUSION: In this work, we describe the implementation of an automated AI analysis pipeline for daily quantitative analysis of CBCT-guided patient setup registrations. The AI model was validated against independent expert observers, and appropriate action levels were determined to minimize false positives without sacrificing sensitivity. Case studies demonstrate the potential benefits of such a pipeline for bolstering quality and safety programs in radiotherapy. To the authors' knowledge, no previous work has performed AI-assisted assessment of pre-treatment CBCT-based patient alignment.
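As a rough illustration of how a global action threshold such as the 0.87 value above might be selected from an observer-scored validation subset, the following sketch uses scikit-learn's ROC utilities; the function name, synthetic scores, and sensitivity target are assumptions for illustration rather than the authors' pipeline code.

```python
import numpy as np
from sklearn.metrics import roc_curve

def pick_threshold(y_true, y_score, min_sensitivity=1.0):
    """Pick the highest-specificity threshold whose sensitivity meets the target.

    y_true: 1 for registrations judged misaligned by observers, 0 otherwise.
    y_score: model predictions in [0, 1].
    """
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    ok = tpr >= min_sensitivity          # operating points meeting the sensitivity target
    best = np.argmin(fpr[ok])            # among those, lowest false-positive rate
    return thresholds[ok][best], tpr[ok][best], 1.0 - fpr[ok][best]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=100)                       # fake observer labels
    scores = np.clip(labels * 0.6 + rng.normal(0.3, 0.2, 100), 0, 1)  # fake model scores
    thr, sens, spec = pick_threshold(labels, scores)
    print(f"threshold={thr:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```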


Subject(s)
Radiotherapy, Image-Guided , Spiral Cone-Beam Computed Tomography , Humans , Radiotherapy Planning, Computer-Assisted/methods , Artificial Intelligence , Cone-Beam Computed Tomography/methods , Retrospective Studies , Radiotherapy, Image-Guided/methods
4.
Phys Imaging Radiat Oncol ; 25: 100427, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36937493

ABSTRACT

Background and purpose: Currently, there is no robust indicator within the Cone-Beam Computed Tomography (CBCT) DICOM headers as to which anatomical region is present on the scan. This poses a problem for CBCT-based algorithms trained on specific body regions, such as auto-segmentation and radiomics tools used in the radiotherapy workflow. We propose an anatomical region labeling (ARL) algorithm to classify CBCT scans into four distinct regions: head & neck, thoracic-abdominal, pelvis, and extremity.

Materials and methods: Algorithm training and testing were performed on 3,802 CBCT scans from 596 patients treated at our radiotherapy center. The ARL model, which consists of a convolutional neural network, uses a single CBCT coronal slice to output a probability of occurrence for each of the four classes. ARL was evaluated on a test dataset composed of 1,090 scans and compared to a support vector machine (SVM) model. ARL was also used to label CBCT treatment scans for 22 consecutive days as part of a proof-of-concept implementation. A validation study was performed on the first 100 unique patient scans to evaluate the functionality of the tool in the clinical setting.

Results: ARL achieved an overall accuracy of 99.2% on the test dataset, outperforming the SVM (91.5% accuracy). Our validation study showed strong agreement between the human annotations and the ARL predictions, with accuracies of 99.0% for all four regions.

Conclusion: The high classification accuracy demonstrated by ARL suggests that it may be employed as a pre-processing step for site-specific, CBCT-based radiotherapy tools.
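For readers less familiar with this kind of classifier, a toy PyTorch sketch of a CNN that maps a single coronal slice to probabilities for the four regions is given below; the architecture, layer sizes, and input resolution are placeholders and do not reproduce the published ARL model.

```python
import torch
import torch.nn as nn

REGIONS = ["head_neck", "thoracic_abdominal", "pelvis", "extremity"]

class CoronalSliceClassifier(nn.Module):
    """Toy CNN mapping one CBCT coronal slice to four region probabilities."""

    def __init__(self, num_classes=len(REGIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        # x: (batch, 1, H, W) coronal slice intensities
        h = self.features(x).flatten(1)
        return torch.softmax(self.classifier(h), dim=1)

if __name__ == "__main__":
    model = CoronalSliceClassifier()
    slice_batch = torch.randn(1, 1, 128, 128)   # one fake coronal slice
    probs = model(slice_batch)[0]
    print(dict(zip(REGIONS, probs.tolist())))
```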

5.
Med Phys ; 49(10): 6410-6423, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35962982

ABSTRACT

BACKGROUND: In cone-beam computed tomography (CBCT)-guided radiotherapy, off-by-one vertebral-body misalignments are rare but serious errors that lead to wrong-site treatments.

PURPOSE: An automatic error detection algorithm was developed that uses a three-branch convolutional neural network error detection model (EDM) to detect off-by-one vertebral-body misalignments using planning computed tomography (CT) images and setup CBCT images.

METHODS: Algorithm training and test data consisted of planning CTs and CBCTs from 480 patients undergoing radiotherapy treatment in the thoracic and abdominal regions at two radiotherapy clinics. The clinically applied registration was used to derive true-negative (no error) data. The setup and planning images were then misaligned by one vertebral body in both the superior and inferior directions, simulating the most likely misalignment scenarios. For each of the aligned and misaligned 3D image pairs, 2D slice pairs were automatically extracted in each anatomical plane about a point within the vertebral column. The three slice pairs obtained were then input to the EDM, which returned a probability of vertebral misalignment. One model (EDM1) was trained solely on data from institution 1. EDM1 was further trained using a lower learning rate on a dataset from institution 2 to produce a fine-tuned model, EDM2. Another model, EDM3, was trained from scratch using a training dataset composed of data from both institutions. These three models were validated on a randomly selected, previously unseen dataset composed of images from both institutions, for a total of 303 image pairs. Model performance was quantified using a receiver operating characteristic (ROC) analysis. Owing to the rarity of vertebral-body misalignments in the clinic, a minimum threshold value yielding a specificity of at least 99% was selected. Using this threshold, the sensitivity was calculated for each model on each institution's test set separately.

RESULTS: When applied to the combined test set, EDM1, EDM2, and EDM3 achieved areas under the curve of 99.5%, 99.4%, and 99.5%, respectively. EDM1 achieved sensitivities of 96% and 88% on the Institution 1 and Institution 2 test sets, respectively. EDM2 obtained a sensitivity of 95% on each institution's test set. EDM3 achieved sensitivities of 95% and 88% on the Institution 1 and Institution 2 test sets, respectively.

CONCLUSION: The proposed algorithm demonstrated accuracy in identifying off-by-one vertebral-body misalignments in CBCT-guided radiotherapy that was sufficiently high to allow for practical implementation. Fine-tuning the model on a multi-facility dataset can further enhance the generalizability of the algorithm.
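The slice-pair inputs described in the methods can be illustrated with a short NumPy sketch that extracts matching axial, coronal, and sagittal slices from a planning CT and a setup CBCT about a chosen vertebral point; it assumes both volumes have already been resampled onto a common grid, which is a simplification of the clinical registration step.

```python
import numpy as np

def extract_slice_pairs(planning_ct, setup_cbct, point):
    """Extract matching 2D slice pairs in the three anatomical planes.

    planning_ct, setup_cbct: 3D arrays indexed (z, y, x), assumed resampled
    onto a common grid (an assumption of this sketch).
    point: (z, y, x) voxel index within the vertebral column.
    Returns one (planning_slice, cbct_slice) pair per plane, the kind of
    paired 2D input a multi-branch error-detection model could consume.
    """
    z, y, x = point
    return {
        "axial":    (planning_ct[z, :, :], setup_cbct[z, :, :]),
        "coronal":  (planning_ct[:, y, :], setup_cbct[:, y, :]),
        "sagittal": (planning_ct[:, :, x], setup_cbct[:, :, x]),
    }

if __name__ == "__main__":
    ct = np.random.rand(64, 128, 128)
    cbct = np.random.rand(64, 128, 128)
    pairs = extract_slice_pairs(ct, cbct, point=(32, 64, 64))
    for plane, (p, c) in pairs.items():
        print(plane, p.shape, c.shape)
```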


Subject(s)
Cone-Beam Computed Tomography , Radiotherapy, Image-Guided , Algorithms , Cone-Beam Computed Tomography/methods , Humans , Neural Networks, Computer , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods
6.
Med Phys ; 49(1): 41-51, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34783027

ABSTRACT

PURPOSE: Accurate and robust auto-segmentation of highly deformable organs (HDOs), for example the stomach or bowel, remains an outstanding problem due to these organs' frequent and large anatomical variations. Yet time-consuming manual segmentation of these organs presents a particular challenge for time-limited modern radiotherapy techniques such as on-line adaptive radiotherapy and high-dose-rate brachytherapy. We propose a machine-assisted interpolation (MAI) that uses prior information in the form of sparse manual delineations to facilitate rapid, accurate segmentation of the stomach from low-field magnetic resonance images (MRI) and the bowel from computed tomography (CT) images.

METHODS: Stomach MR images from 116 patients undergoing 0.35T MRI-guided abdominal radiotherapy and bowel CT images from 120 patients undergoing high-dose-rate pelvic brachytherapy treatment were collected. For each patient volume, the manual delineation of the HDO was extracted from every 8th slice. These manually drawn contours were first interpolated to obtain an initial estimate of the HDO contour. A two-channel 64 × 64 pixel patch-based convolutional neural network (CNN) was trained to localize the position of the organ's boundary on each slice within a five-pixel-wide road, using the image and the interpolated contour estimate. This boundary prediction was then input, in conjunction with the image, to an organ-closing CNN that output the final organ segmentation. A Dense-UNet architecture was used for both networks. The MAI algorithm was trained separately for the stomach segmentation and the bowel segmentation. Algorithm performance was compared against linear interpolation (LI) alone and against fully automated segmentation (FAS) using a Dense-UNet trained on the same datasets. The Dice similarity coefficient (DSC) and mean surface distance (MSD) metrics were used to compare the predictions from the three methods. Statistical significance was tested using Student's t test.

RESULTS: For the stomach segmentation, the mean DSC from MAI (0.91 ± 0.02) was 5.0% and 10.0% higher than that of LI and FAS, respectively. The average MSD from MAI (0.77 ± 0.25 mm) was 0.54 and 3.19 mm lower than that of the two other methods. Only 7% of MAI stomach predictions resulted in a DSC < 0.8, compared to 30% and 28% for LI and FAS, respectively. For the bowel segmentation, the mean DSC of MAI (0.90 ± 0.04) was 6% and 18% higher, and the average MSD of MAI (0.93 ± 0.48 mm) was 0.42 and 4.9 mm lower, than those of LI and FAS. Sixteen percent of the predicted contours from MAI resulted in a DSC < 0.8, compared to 46% and 60% for FAS and LI, respectively. All comparisons between MAI and the baseline methods were statistically significant (p < 0.001).

CONCLUSIONS: The proposed MAI algorithm significantly outperformed LI in terms of accuracy and robustness for both stomach segmentation from low-field MRIs and bowel segmentation from CT images. At this time, FAS methods for HDOs still require significant manual editing. We therefore believe that the MAI algorithm has the potential to expedite the process of HDO delineation within the radiation therapy workflow.
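The DSC and MSD metrics reported above can be computed from binary masks with their standard definitions; a minimal sketch using NumPy and SciPy follows, where the helper names, the toy masks, and the default isotropic spacing are assumptions for illustration.

```python
import numpy as np
from scipy import ndimage

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mean_surface_distance(a, b, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance (in mm, given voxel spacing) between masks."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a & ~ndimage.binary_erosion(a)   # boundary voxels of mask a
    surf_b = b & ~ndimage.binary_erosion(b)   # boundary voxels of mask b
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dt_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    dt_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    return 0.5 * (dt_b[surf_a].mean() + dt_a[surf_b].mean())

if __name__ == "__main__":
    pred = np.zeros((32, 32, 32), dtype=bool); pred[8:24, 8:24, 8:24] = True
    ref = np.zeros((32, 32, 32), dtype=bool);  ref[9:25, 8:24, 8:24] = True
    print(f"DSC={dice(pred, ref):.3f}  MSD={mean_surface_distance(pred, ref):.2f} mm")
```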


Subject(s)
Image Processing, Computer-Assisted , Radiotherapy, Image-Guided , Humans , Magnetic Resonance Imaging , Neural Networks, Computer , Tomography, X-Ray Computed