Results 1 - 20 of 50
1.
Biomed Res Int ; 2024: 3573796, 2024.
Article in English | MEDLINE | ID: mdl-39263420

ABSTRACT

Background: The precision of postoperative prostate cancer radiotherapy is significantly influenced by setup errors and changes in bladder morphology. Daily cone-beam computed tomography (CBCT) imaging allows setup errors to be corrected, but it raises concerns about peripheral dose and workload. A zero-dose, noninvasive technique to reproduce the bladder volume and improve patient setup accuracy was therefore needed. Purpose: To investigate whether a setup method combining the Optical Surface Management System (OSMS) with BladderScan can improve setup accuracy and accurately reproduce the bladder volume during radiotherapy of postoperative prostate cancer, and to guide clinical CTV-to-PTV margins. Method: The experimental group consisted of 15 postoperative prostate cancer patients set up with the combined OSMS and BladderScan method; 103 CBCT-verified setup errors were recorded for this group. The control group comprised 25 patients set up with the conventional skin-marker method, for whom 114 setup errors were recorded; patients in this group controlled bladder filling by spontaneous urinary suppression. Setup errors in the lateral (Lat), longitudinal (Lng), and vertical (Vrt) directions, as well as pitch, yaw, and roll, were compared between the two methods. The Dice similarity coefficient (DSC) and bladder volume differences between CBCT and planning CT were compared as bladder concordance indicators. Results: The translational errors in the experimental group were 0.17 ± 0.12 (Vrt), 0.22 ± 0.17 (Lat), and 0.18 ± 0.12 cm (Lng), versus 0.25 ± 0.15, 0.31 ± 0.21, and 0.34 ± 0.22 cm in the control group. The rotational errors in the experimental group were 0.18 ± 0.12° (pitch), 0.11 ± 0.1° (roll), and 0.18 ± 0.13° (yaw), versus 0.96 ± 0.89°, 1.01 ± 0.86°, and 1.02 ± 0.84° in the control group. The DSC and volume differences were 92.52 ± 1.65% and 39.99 ± 28.75 cm3 in the BladderScan patients, versus 62.98 ± 22.33% and 273.89 ± 190.62 cm3 in the control group. All of these differences were statistically significant (P < 0.01). Conclusion: The accuracy of the combined OSMS and BladderScan setup method was validated by CBCT in our study. Compared with the conventional setup method, it improves setup accuracy during radiotherapy of postoperative prostate cancer.
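As a rough illustration of the bladder concordance indicators described above, the sketch below computes a Dice similarity coefficient and an absolute volume difference from two binary masks; the array names and voxel spacing are hypothetical and not taken from the study.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_cm3(mask: np.ndarray, voxel_size_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary mask in cm^3 given the voxel spacing in mm (hypothetical spacing)."""
    return mask.astype(bool).sum() * float(np.prod(voxel_size_mm)) / 1000.0

# Usage with hypothetical planning-CT and CBCT bladder masks (3D boolean arrays):
# dsc = dice_coefficient(bladder_ct, bladder_cbct)
# dv  = abs(volume_cm3(bladder_ct, (2.0, 1.0, 1.0)) - volume_cm3(bladder_cbct, (2.0, 1.0, 1.0)))
```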


Subject(s)
Cone-Beam Computed Tomography , Prostatic Neoplasms , Urinary Bladder , Humans , Male , Prostatic Neoplasms/radiotherapy , Prostatic Neoplasms/surgery , Prostatic Neoplasms/diagnostic imaging , Cone-Beam Computed Tomography/methods , Urinary Bladder/diagnostic imaging , Urinary Bladder/radiation effects , Aged , Radiotherapy Planning, Computer-Assisted/methods , Postoperative Period , Middle Aged , Radiotherapy, Image-Guided/methods
2.
J Appl Clin Med Phys ; 25(8): e14442, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38922790

ABSTRACT

PURPOSE: To propose radiomics features as a superior measure for evaluating the segmentation ability of physicians and auto-segmentation tools, and to compare their performance with the most commonly used metrics: Dice similarity coefficient (DSC), surface Dice similarity coefficient (sDSC), and Hausdorff distance (HD). MATERIALS/METHODS: CT images of 10 lung cancer patients, with nine segmentations per tumor, were downloaded from the RIDER (Reference Database to Evaluate Response) database. Radiomics features of the 90 segmented tumors were extracted using the PyRadiomics program. The intraclass correlation coefficient (ICC) of the radiomics features was used to evaluate segmentation similarity and to compare its performance with DSC, sDSC, and HD. We calculated one ICC per radiomics feature per tumor over the nine segmentations, and 36 ICCs per radiomics feature for the 36 pairs of the nine segmentations. Correspondingly, 360 DSC, sDSC, and HD values were calculated for the 36 pairs across the 10 tumors. RESULTS: The ICC of radiomics features exhibited greater sensitivity to segmentation changes than DSC and sDSC. The ICCs of the wavelet-LLL first order Maximum, wavelet-LLL glcm MCC, and wavelet-LLL glcm Cluster Shade features ranged from 0.130 to 0.997, 0.033 to 0.978, and 0.160 to 0.998, respectively. In contrast, all DSC and sDSC values were larger than 0.778 and 0.700, respectively, while HD varied from 0 to 1.9 mm. These results indicate that radiomics features can capture subtle variations in tumor segmentation characteristics that are not easily detected by DSC and sDSC. CONCLUSIONS: This study demonstrates the superiority of radiomics features with ICC as a measure for evaluating a physician's tumor segmentation ability and the performance of auto-segmentation tools. Radiomics features offer a more sensitive and comprehensive evaluation, providing valuable insights into tumor characteristics. These metrics can therefore be used to evaluate new auto-segmentation methods and to enhance trainees' segmentation skills in medical training and education.
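The sketch below shows one possible way to reproduce the kind of analysis described above with PyRadiomics and pingouin: extract features for several segmentations of each tumor, then compute an ICC per feature with tumors as targets and segmentations as raters. File names are placeholders, and the ICC formulation (ICC2) is an assumption rather than the paper's exact choice.

```python
# pip install pyradiomics pingouin pandas  (library choices are assumptions, not the paper's exact stack)
import pandas as pd
import pingouin as pg
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableImageTypeByName("Wavelet")  # needed for wavelet-LLL features

# Placeholder cases: several tumors, each with an image and multiple segmentation masks.
cases = [
    ("tumor01", "tumor01_ct.nrrd", ["tumor01_seg_a.nrrd", "tumor01_seg_b.nrrd"]),
    # ... one entry per tumor; an ICC needs more than one tumor (target) to be defined
]

rows = []
for tumor_id, image_path, mask_paths in cases:
    for rater_id, mask_path in enumerate(mask_paths):
        features = extractor.execute(image_path, mask_path)
        for name, value in features.items():
            if name.startswith(("original_", "wavelet")):  # skip diagnostics_* entries
                rows.append({"tumor": tumor_id, "rater": rater_id,
                             "feature": name, "value": float(value)})

df = pd.DataFrame(rows)

# One common ICC formulation (two-way random effects, single rater, absolute agreement: ICC2).
for name, sub in df.groupby("feature"):
    icc = pg.intraclass_corr(data=sub, targets="tumor", raters="rater", ratings="value")
    print(name, icc.set_index("Type").loc["ICC2", "ICC"])
```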


Subject(s)
Image Processing, Computer-Assisted , Lung Neoplasms , Radiomics , Tomography, X-Ray Computed , Humans , Algorithms , Databases, Factual , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Lung Neoplasms/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Tomography, X-Ray Computed/methods
3.
Phys Med Biol ; 69(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38821109

ABSTRACT

Objective. The validation of deformable image registration (DIR) for contour propagation is often done using contour-based metrics. Meanwhile, dose accumulation requires evaluation of voxel mapping accuracy, which may not be accurately represented by contour-based metrics. By fabricating a deformable anthropomorphic pelvis phantom, we aimed to (1) quantify the voxel mapping accuracy for various deformation scenarios, in high- and low-contrast regions, and (2) identify any correlation between the Dice similarity coefficient (DSC), a commonly used contour-based metric, and the voxel mapping accuracy for each organ. Approach. Four organs, i.e. the pelvic bone, prostate, bladder and rectum (PBR), were 3D printed using PLA and a Polyjet digital material, and assembled. The latter three were implanted with glass beads and CT markers within or on their surfaces. Four deformation scenarios were simulated by varying the bladder and rectum volumes. For each scenario, nine DIRs with different parameters were performed on RayStation v10B. The voxel mapping accuracy was quantified as the discrepancy between the true and mapped marker positions, termed the target registration error (TRE). A Pearson correlation test was performed between the DSC and the mean TRE for each organ. Main results. For the first time, we fabricated a deformable phantom purely from 3D printing, which successfully reproduced realistic anatomical deformations. Overall, the voxel mapping accuracy dropped with increasing deformation magnitude, but improved when more organs were used to guide the DIR or to limit the registration region. DSC was found to be a good indicator of voxel mapping accuracy for the prostate and rectum, but a comparatively poorer one for the bladder. DSC > 0.85/0.90 was established as the threshold corresponding to a mean TRE ⩽ 0.3 cm for the rectum/prostate. For the bladder, extra metrics in addition to DSC should be considered. Significance. This work presented a 3D printed phantom that enabled quantification of voxel mapping accuracy and evaluation of the correlation between DSC and voxel mapping accuracy.
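For illustration, the sketch below computes a per-marker target registration error (TRE) from true and DIR-mapped marker coordinates, then a Pearson correlation between DSC and mean TRE; the numeric values are invented for the example and do not come from the phantom study.

```python
import numpy as np
from scipy.stats import pearsonr

def target_registration_error(true_pts_mm: np.ndarray, mapped_pts_mm: np.ndarray) -> np.ndarray:
    """Per-marker TRE: Euclidean distance between true and DIR-mapped marker positions (N x 3, mm)."""
    return np.linalg.norm(true_pts_mm - mapped_pts_mm, axis=1)

# Illustrative (invented) per-registration results for one organ: DSC vs. mean TRE over nine DIRs.
dsc      = np.array([0.93, 0.90, 0.84, 0.92, 0.86, 0.88, 0.95, 0.89, 0.91])
mean_tre = np.array([1.8, 2.4, 3.1, 2.0, 2.9, 2.6, 1.5, 2.7, 2.2])   # mm

r, p = pearsonr(dsc, mean_tre)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```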


Subject(s)
Pelvis , Phantoms, Imaging , Humans , Pelvis/diagnostic imaging , Radiation Dosage , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed , Male , Printing, Three-Dimensional
4.
Br J Radiol ; 97(1159): 1268-1277, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38730541

ABSTRACT

OBJECTIVES: To develop an artificial intelligence (AI) tool with automated pancreas segmentation and measurement of pancreatic morphological information on CT images to enable improved and faster diagnosis of acute pancreatitis (AP). METHODS: This retrospective study included 1124 patients with suspected AP who underwent non-contrast and enhanced abdominal CT examinations between September 2013 and September 2022. Patients were divided into training (N = 688), validation (N = 145), and testing (N = 291; N = 104 normal pancreas, N = 98 AP, N = 89 AP complicated with pancreatic ductal adenocarcinoma [AP&PDAC]) datasets. A convolutional neural network-based model (MSAnet) was developed. Pancreas segmentation and measurement were performed with eight open-source models and the MSAnet-based tool, and efficacy was evaluated using the Dice similarity coefficient (DSC) and intersection over union (IoU). DSC and IoU were also compared across patient age groups. The outlines of tumour and oedema in the AP and AP&PDAC cases were segmented by clustering. The diagnostic efficacy of radiologists with and without the assistance of the MSAnet tool in AP and AP&PDAC was evaluated using receiver operating characteristic curves and confusion matrices. RESULTS: Among all models, the MSAnet-based tool showed the best performance on the training and validation datasets and high efficacy on the testing dataset. Performance was affected by patient age. With the assistance of the AI tool, diagnosis time was shortened significantly, by 26.8% and 32.7% for junior and senior radiologists, respectively. The area under the curve (AUC) for AP diagnosis improved from 0.91 to 0.96 for junior radiologists and from 0.98 to 0.99 for senior radiologists. For AP&PDAC diagnosis, the AUC increased from 0.85 to 0.92 for junior and from 0.97 to 0.99 for senior radiologists. CONCLUSION: The MSAnet-based tool showed good pancreas segmentation and measurement performance, helping radiologists improve diagnostic efficacy and workflow in both AP and AP with PDAC. ADVANCES IN KNOWLEDGE: This study developed an AI tool with automated pancreas segmentation and measurement and provided evidence that AI assistance improves the workflow and accuracy of AP diagnosis.
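To make the reader-study comparison concrete, the following sketch computes the area under the ROC curve for hypothetical radiologist scores with and without AI assistance using scikit-learn; the arrays are illustrative only and carry no relation to the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical reader-study arrays: y_true marks AP-positive cases,
# the score arrays are radiologist confidence ratings without and with AI assistance.
y_true            = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores_without_ai = np.array([0.70, 0.40, 0.60, 0.80, 0.50, 0.30, 0.55, 0.45])
scores_with_ai    = np.array([0.85, 0.30, 0.75, 0.90, 0.40, 0.20, 0.70, 0.35])

print("AUC without AI:", roc_auc_score(y_true, scores_without_ai))
print("AUC with AI:   ", roc_auc_score(y_true, scores_with_ai))
```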


Subject(s)
Artificial Intelligence , Pancreatitis , Tomography, X-Ray Computed , Humans , Pancreatitis/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed/methods , Female , Middle Aged , Male , Adult , Aged , Acute Disease , Neural Networks, Computer , Pancreas/diagnostic imaging , Pancreatic Neoplasms/diagnostic imaging , Aged, 80 and over , Young Adult
5.
Thorac Cancer ; 15(17): 1333-1342, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38686543

ABSTRACT

BACKGROUND: The aim of this study was to establish a weighted comprehensive evaluation model (WCEM) of image registration for cone-beam computed tomography (CBCT) guided lung cancer radiotherapy that considers the geometric accuracy of the gross target volume (GTV) and organs at risk (OARs), and to assess the registration accuracy of different image registration methods to provide clinical references. METHODS: The planning CT and CBCT images of 20 lung cancer patients were registered using different algorithms (bony and grayscale) and regions of interest (target, ipsilateral, and body). We compared the coverage ratio (CR) of the planning target volume (PTVCT) to GTVCBCT, as well as the dice similarity coefficient (DSC) of the GTV and OARs, considering the treatment position across the various registration methods. Furthermore, we developed a mathematical model to assess the registration results comprehensively; this model was evaluated and validated using CRF scores across four automatic registration methods. RESULTS: The grayscale registration method, coupled with registration of the ipsilateral structure, exhibited the highest automatic registration accuracy; the DSC values were 0.87 ± 0.09 (GTV), 0.71 ± 0.09 (esophagus), 0.74 ± 0.09 (spinal cord), and 0.91 ± 0.05 (heart). Our proposed WCEM proved both practical and effective. The results clearly indicated that the grayscale registration method, when applied to the ipsilateral structure, achieved the highest CRF score. The average CRF score, excellent rate, good rate, and qualification rate were 58 ± 26, 40%, 75%, and 85%, respectively. CONCLUSIONS: This study successfully developed a clinically relevant weighted evaluation model for CBCT-guided lung cancer radiotherapy. Validation confirmed the grayscale method's optimal performance for ipsilateral structure registration.
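A minimal sketch of the coverage ratio, read here as the fraction of the CBCT-defined GTV enclosed by the planning PTV after registration (an assumption about the abstract's definition), could look like this:

```python
import numpy as np

def coverage_ratio(ptv_ct: np.ndarray, gtv_cbct: np.ndarray) -> float:
    """Fraction of the CBCT-defined GTV lying inside the planning PTV after registration.
    Treat this reading of CR as an assumption; the paper's exact definition may differ."""
    ptv, gtv = ptv_ct.astype(bool), gtv_cbct.astype(bool)
    return np.logical_and(ptv, gtv).sum() / gtv.sum()
```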


Subject(s)
Cone-Beam Computed Tomography , Lung Neoplasms , Radiotherapy Planning, Computer-Assisted , Radiotherapy, Image-Guided , Humans , Cone-Beam Computed Tomography/methods , Lung Neoplasms/radiotherapy , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Image-Guided/methods , Algorithms , Male , Female , Organs at Risk
6.
Sensors (Basel) ; 24(3)2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38339612

ABSTRACT

Addressing conventional neurosurgical navigation systems' high costs and complexity, this study explores the feasibility and accuracy of a simplified, cost-effective mixed reality navigation (MRN) system based on a laser crosshair simulator (LCS). A new automatic registration method was developed, featuring coplanar laser emitters and a recognizable target pattern. The workflow was integrated into Microsoft's HoloLens-2 for practical application. The study assessed the system's precision by utilizing life-sized 3D-printed head phantoms based on computed tomography (CT) or magnetic resonance imaging (MRI) data from 19 patients (female/male: 7/12, average age: 54.4 ± 18.5 years) with intracranial lesions. Six to seven CT/MRI-visible scalp markers were used as reference points per case. The LCS-MRN's accuracy was evaluated through landmark-based and lesion-based analyses, using metrics such as target registration error (TRE) and Dice similarity coefficient (DSC). The system demonstrated immersive capabilities for observing intracranial structures across all cases. Analysis of 124 landmarks showed a TRE of 3.0 ± 0.5 mm, consistent across various surgical positions. The DSC of 0.83 ± 0.12 correlated significantly with lesion volume (Spearman rho = 0.813, p < 0.001). Therefore, the LCS-MRN system is a viable tool for neurosurgical planning, highlighting its low user dependency, cost-efficiency, and accuracy, with prospects for future clinical application enhancements.


Subject(s)
Augmented Reality , Surgery, Computer-Assisted , Humans , Male , Female , Adult , Middle Aged , Aged , Neuronavigation/methods , Feasibility Studies , Tomography, X-Ray Computed , Lasers , Surgery, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
7.
Article in Chinese | WPRIM (Western Pacific) | ID: wpr-1026221

ABSTRACT

Objective: To assess inter-observer variation (IOV) in the delineation of target volumes and organs at risk (OAR) for intensity-modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC) among physicians from different levels of cancer centers, thereby providing a reference for quality control in multi-center clinical trials. Methods: Twelve patients with NPC of different TNM stages were randomly selected. Three physicians from the same municipal cancer center manually delineated the target volume (GTVnx) and OAR for each patient. The GTVnx and OAR structures manually modified and confirmed by radiotherapy experts from the regional cancer center were used as the standard delineation. The absolute volume difference ratio (ΔV_diff), maximum/minimum volume ratio (MMR), coefficient of variation (CV), and Dice similarity coefficient (DSC) were used to compare differences in organ delineation between physicians from different levels of cancer centers and among the three physicians from the same municipal cancer center. Furthermore, the IOV of GTVnx and OAR among physicians from different levels of cancer centers was compared across TNM stages. Results: Significant differences in the delineation of GTVnx were observed among physicians from different levels of cancer centers. Among the three physicians, the maximum values of ΔV_diff, MMR, and CV were 97.23% ± 83.45%, 2.19 ± 0.75, and 0.31 ± 0.14, respectively, with an average DSC of less than 0.7. There were also considerable differences in the delineation of small-volume OAR such as the left and right optic nerves, chiasm, and pituitary, with average MMR > 2.8, CV > 0.37, and DSC < 0.51. Relatively smaller differences were observed for large-volume OAR such as the brainstem, spinal cord, left and right eyeballs, and left and right mandible, with average ΔV_diff < 42%, MMR < 1.55, and DSC > 0.7. Compared with the differences among physicians from different levels of cancer centers, the differences among the three physicians from the municipal cancer center were slightly smaller. Differences in the delineation of NPC target volumes among physicians from different levels of cancer centers also depended on disease stage: compared with early-stage patients (stage I or II), the differences for advanced-stage patients (stage III or IV) were smaller, with average ΔV_diff and DSC of 98.31% ± 67.36% vs 69.38% ± 72.61% (P < 0.05) and 0.55 ± 0.08 vs 0.72 ± 0.12 (P < 0.05), respectively. Conclusion: There are differences in the delineation of GTVnx and OAR for NPC radiotherapy among physicians from different levels of cancer centers, especially in the delineation of the target volume (GTVnx) and small-volume OAR for early-stage patients. To ensure the accuracy of multi-center clinical trials, unified training should be provided to physicians from different levels of cancer centers and their delineation results should be reviewed to reduce the effect of such differences on treatment outcomes.

8.
Diagnostics (Basel) ; 13(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37568900

ABSTRACT

Intracranial hemorrhage (ICH) occurs when blood leaks inside the skull as a result of trauma or underlying medical conditions. ICH usually requires immediate medical and surgical attention because it has a high mortality rate, long-term disability potential, and other potentially life-threatening complications. ICHs vary widely in severity, size, and morphology, making accurate identification challenging. Small hemorrhages are more likely to be missed, particularly in healthcare systems with a high turnover of computed tomography (CT) investigations. Although many neuroimaging modalities have been developed, CT remains the standard for diagnosing trauma and hemorrhage (including non-traumatic hemorrhage). Because CT-based diagnoses can be obtained rapidly, they can enable time-critical, urgent ICH surgery that could save lives. The purpose of this study is to develop a machine-learning algorithm that can detect intracranial hemorrhage on plain CT images taken from 75 patients. CT images were preprocessed using brain windowing, skull-stripping, and image inversion techniques. Hemorrhage segmentation was performed on the preprocessed CT images using multiple pre-trained models. A U-Net model with a DenseNet201 pre-trained encoder outperformed the other U-Net, U-Net++, and FPN (Feature Pyramid Network) models, which have previously been used in many other medical applications, achieving the highest Dice similarity coefficient (DSC) and intersection over union (IoU) scores. We present a three-dimensional brain model highlighting hemorrhages from the ground truth and predicted masks. Hemorrhage volume was measured volumetrically to determine hematoma size. This study supports the diagnostic examination of ICH in clinical practice by comparing the predicted 3D model with the ground truth.
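As an illustration of the brain-windowing preprocessing step mentioned above, the sketch below clips a Hounsfield-unit CT slice to a window and rescales it; the 40/80 HU center/width is a commonly used brain window, not necessarily the exact values used in the study.

```python
import numpy as np

def apply_window(hu_image: np.ndarray, center: float = 40.0, width: float = 80.0) -> np.ndarray:
    """Clip a CT image in Hounsfield units to a display window and rescale to [0, 1].
    40/80 HU is a commonly quoted brain window; the study's exact settings may differ."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu_image, lo, hi) - lo) / (hi - lo)

# Image inversion, as listed among the preprocessing steps (illustrative):
# inverted = 1.0 - apply_window(ct_slice_hu)
```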

9.
Clin Transl Radiat Oncol ; 39: 100590, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36935854

ABSTRACT

Head and neck radiotherapy induces important toxicity, and its efficacy and tolerance vary widely across patients. Advancements in radiotherapy delivery techniques, along with the increased quality and frequency of image guidance, offer a unique opportunity to individualize radiotherapy based on imaging biomarkers, with the aim of improving radiation efficacy while reducing its toxicity. Various artificial intelligence models integrating clinical data and radiomics have shown encouraging results for toxicity and cancer control outcomes prediction in head and neck cancer radiotherapy. Clinical implementation of these models could lead to individualized risk-based therapeutic decision making, but the reliability of the current studies is limited. Understanding, validating and expanding these models to larger multi-institutional data sets and testing them in the context of clinical trials is needed to ensure safe clinical implementation. This review summarizes the current state of the art of machine learning models for prediction of head and neck cancer radiotherapy outcomes.

10.
Eur J Neurosci ; 57(1): 78-90, 2023 01.
Article in English | MEDLINE | ID: mdl-36382406

ABSTRACT

Measuring brain activity during functional MRI (fMRI) tasks is one of the main tools to identify brain biomarkers of disease or neural substrates associated with specific symptoms. However, identifying correct biomarkers relies on reliable measures. Recently, poor reliability was reported for task-based fMRI measures. The present study aimed to demonstrate the reliability of a finger-tapping fMRI task across two sessions in healthy participants. Thirty-one right-handed healthy participants aged 18-60 years took part in two MRI sessions 3 weeks apart during which we acquired finger-tapping task-fMRI. We examined the overlap of activations between sessions using Dice similarity coefficients, assessing their location and extent. Then, we compared amplitudes calculating intraclass correlation coefficients (ICCs) in three sets of regions of interest (ROIs) in the motor network: literature-based ROIs (10-mm-radius spheres centred on peaks of an activation likelihood estimation), anatomical ROIs (regions as defined in an atlas) and ROIs based on conjunction analyses (superthreshold voxels in both sessions). Finger tapping consistently activated expected regions, for example, left primary sensorimotor cortices, premotor area and right cerebellum. We found good-to-excellent overlap of activations for most contrasts (Dice coefficients: .54-.82). Across time, ICCs showed large variability in all ROI sets (.04-.91). However, ICCs in most ROIs indicated fair-to-good reliability (mean = .52). The least specific contrast consistently yielded the best reliability. Overall, the finger-tapping task showed good spatial overlap and fair reliability of amplitudes on group level. Although caution is warranted in interpreting correlations of activations with other variables, identification of activated regions in response to a task and their between-group comparisons are still valid and important modes of analysis in neuroimaging to find population tendencies and differences.


Subject(s)
Magnetic Resonance Imaging , Sensorimotor Cortex , Humans , Magnetic Resonance Imaging/methods , Reproducibility of Results , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods , Hand
11.
Diagnostics (Basel) ; 12(12)2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36553071

ABSTRACT

In biomedical image analysis, information about the location and appearance of tumors and lesions is indispensable for helping doctors treat patients and assess disease severity. It is therefore essential to segment tumors and lesions. MRI, CT, PET, ultrasound, and X-ray are the imaging systems used to obtain this information. The well-known semantic segmentation technique is used in medical image analysis to identify and label regions of images. Semantic segmentation aims to divide images into regions with comparable characteristics, including intensity, homogeneity, and texture. UNET is a deep learning network that segments the critical features. However, UNET's basic architecture cannot accurately segment complex MRI images. This review introduces modified and improved UNET models suited to increasing segmentation accuracy.

12.
Phys Imaging Radiat Oncol ; 24: 152-158, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36424980

ABSTRACT

Background and Purpose: A wide range of quantitative measures are available to facilitate clinical implementation of auto-contouring software, on-going Quality Assurance (QA), and interobserver contouring variation studies. This study aimed to assess the variation in output when different implementations of the measures are applied to the same data, in order to investigate how consistently such measures are defined and implemented in radiation oncology. Materials and Methods: A survey was conducted to assess whether there were any differences in the definitions of contouring measures or their implementations that would lead to variation in reported results between institutions. This took two forms: a set of computed tomography (CT) image data with "Test" and "Reference" contours was distributed for participants to process using their preferred tools and report results, and a questionnaire regarding the definition of measures and their implementation was completed by the participants. Results: Thirteen participants completed the survey and submitted results, with one commercial and twelve in-house solutions represented. Excluding outliers, variations of up to 50% in Dice Similarity Coefficient (DSC), 50% in 3D Hausdorff Distance (HD), and 200% in Average Distance (AD) were observed between the participant-submitted results. Collaborative investigation with participants revealed a large number of bugs in implementation, confounding the understanding of intentional implementation choices. Conclusion: Care must be taken when comparing quantitative results between different studies. There is a need for a dataset with clearly defined measures and ground truth for validation of such tools prior to their use.
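The implementation variation reported above is easier to appreciate with explicit definitions. The sketch below gives one concrete implementation of DSC, a symmetric Hausdorff distance, and one common definition of average (mean symmetric surface) distance; other equally defensible definitions exist, which is precisely the point the study makes.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC from binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_mm(surf_a: np.ndarray, surf_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two surface point clouds (N x 3, in mm)."""
    return max(directed_hausdorff(surf_a, surf_b)[0], directed_hausdorff(surf_b, surf_a)[0])

def average_distance_mm(surf_a: np.ndarray, surf_b: np.ndarray) -> float:
    """One common definition of average (mean symmetric surface) distance; variants exist,
    e.g. averaging the two directed means versus pooling all distances before averaging."""
    d_ab = cKDTree(surf_b).query(surf_a)[0]
    d_ba = cKDTree(surf_a).query(surf_b)[0]
    return 0.5 * (d_ab.mean() + d_ba.mean())
```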

13.
Phys Eng Sci Med ; 45(3): 847-858, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35737221

ABSTRACT

The fundus imaging method of eye screening detects eye diseases by segmenting the optic disc (OD) and optic cup (OC). OD and OC are still challenging to segment accurately. This work proposes three-layer graph-based deep architecture with an enhanced fusion method for OD and OC segmentation. CNN encoder-decoder architecture, extended graph network, and approximation via fusion-based rule are explored for connecting local and global information. A graph-based model is developed for combining local and overall knowledge. By extending feature masking, regularization of repetitive features with fusion for combining channels has been done. The performance of the proposed network is evaluated through the analysis of different metric parameters such as dice similarity coefficient (DSC), intersection of union (IOU), accuracy, specificity, sensitivity. Experimental verification of this methodology has been done using the four benchmarks publicly available datasets DRISHTI-GS, RIM-ONE for OD, and OC segmentation. In addition, DRIONS-DB and HRF fundus imaging datasets were analyzed for optimizing the model's performance based on OD segmentation. DSC metric of methodology achieved 0.97 and 0.96 for DRISHTI-GS and RIM-ONE, respectively. Similarly, IOU measures for DRISHTI-GS and RIM-ONE datasets were 0.96 and 0.93, respectively, for OD measurement. For OC segmentation, DSC and IOU were measured as 0.93 and 0.90 respectively for DRISHTI-GS and 0.83 and 0.82 for RIM-ONE data. The proposed technique improved value of metrics with most of the existing methods in terms of DSC and IOU of the results metric of the experiments for OD and OC segmentation.


Subject(s)
Glaucoma , Optic Disk , Diagnostic Imaging , Fundus Oculi , Glaucoma/diagnostic imaging , Humans , Optic Disk/diagnostic imaging , Retina
14.
Front Public Health ; 10: 813135, 2022.
Article in English | MEDLINE | ID: mdl-35493368

ABSTRACT

Objective: Precise segmentation of human organs and anatomic structures (especially organs at risk, OARs) is the basis and prerequisite for the treatment planning of radiation therapy. In order to ensure rapid and accurate design of radiotherapy treatment planning, an automatic organ segmentation technique was investigated based on deep learning convolutional neural network. Method: A deep learning convolutional neural network (CNN) algorithm called BCDU-Net has been modified and developed further by us. Twenty two thousand CT images and the corresponding organ contours of 17 types delineated manually by experienced physicians from 329 patients were used to train and validate the algorithm. The CT images randomly selected were employed to test the modified BCDU-Net algorithm. The weight parameters of the algorithm model were acquired from the training of the convolutional neural network. Result: The average Dice similarity coefficient (DSC) of the automatic segmentation and manual segmentation of the human organs of 17 types reached 0.8376, and the best coefficient reached up to 0.9676. It took 1.5-2 s and about 1 h to automatically segment the contours of an organ in an image of the CT dataset for a patient and the 17 organs for the CT dataset with the method developed by us, respectively. Conclusion: The modified deep neural network algorithm could be used to automatically segment human organs of 17 types quickly and accurately. The accuracy and speed of the method meet the requirements of its application in radiotherapy.


Subject(s)
Artificial Intelligence , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Organs at Risk , Tomography, X-Ray Computed/methods
15.
Phys Imaging Radiat Oncol ; 22: 77-84, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35602548

ABSTRACT

Background and purpose: Tumor delineation is required both for radiotherapy planning and for quantitative imaging biomarker purposes. It is a manual, time- and labor-intensive process prone to inter- and intraobserver variations. Semi- or fully automatic segmentation could provide better efficiency and consistency. This study aimed to investigate the influence of including and combining functional with anatomical magnetic resonance imaging (MRI) sequences on the quality of automatic segmentations. Materials and methods: T2-weighted (T2w), diffusion weighted, multi-echo T2*-weighted, and contrast enhanced dynamic multi-echo (DME) MR images of eighty-one patients with rectal cancer were used in the analysis. Four classical machine learning algorithms, namely adaptive boosting (ADA), linear and quadratic discriminant analysis, and support vector machines, were trained for automatic segmentation of tumor and normal tissue using different combinations of the MR images as input, followed by semi-automatic morphological post-processing. Manual delineations from two experts served as ground truth. The Sørensen-Dice similarity coefficient (DICE) and mean symmetric surface distance (MSD) were used as performance metrics in leave-one-out cross validation. Results: Using T2w images alone, ADA outperformed the other algorithms, yielding a median per patient DICE of 0.67 and MSD of 3.6 mm. The performance improved when functional images were added and was highest for models based on either T2w and DME images (DICE: 0.72, MSD: 2.7 mm) or all four MRI sequences (DICE: 0.72, MSD: 2.5 mm). Conclusion: Machine learning models using functional MRI, in particular DME, have the potential to improve automatic segmentation of rectal cancer relative to models using T2w MRI alone.
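A rough sketch of the classification setup described above, using scikit-learn equivalents of the four classifiers with leave-one-patient-out cross validation on a hypothetical voxel-wise feature matrix (the morphological post-processing and MSD computation are omitted):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    s = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / s if s else 1.0

# Hypothetical stand-in data: voxel-wise features (e.g. T2w, DWI, T2*, DME intensities),
# binary tumor labels, and a patient id per voxel for leave-one-patient-out cross validation.
rng = np.random.default_rng(0)
X = rng.random((3000, 4))
y = (rng.random(3000) > 0.8).astype(int)
patient = np.repeat(np.arange(30), 100)

models = {"ADA": AdaBoostClassifier(), "LDA": LinearDiscriminantAnalysis(),
          "QDA": QuadraticDiscriminantAnalysis(), "SVM": SVC()}

logo = LeaveOneGroupOut()
for name, model in models.items():
    scores = [dice(model.fit(X[tr], y[tr]).predict(X[te]), y[te])
              for tr, te in logo.split(X, y, groups=patient)]
    print(name, f"median DICE = {np.median(scores):.2f}")
```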

16.
Eur J Radiol Open ; 9: 100412, 2022.
Article in English | MEDLINE | ID: mdl-35345817

ABSTRACT

Purpose: To automatically segment and measure the levator hiatus with a deep learning approach and to evaluate performance across algorithms, sonographers, and ultrasound devices. Methods: Three deep learning models (UNet-ResNet34, HR-Net, and SegNet) were trained with 360 images and validated with 42 images. The trained models were tested on two test sets. The first set, of 138 images, was used to evaluate performance between the algorithms and sonographers. An independent dataset of 679 images was used to assess the algorithms' performance across different ultrasound devices. Four metrics were used for evaluation: DSC, HDD, the relative error of segmentation area, and the absolute error of segmentation area. Results: The UNet model outperformed HR-Net and SegNet, achieving a mean DSC of 0.964 on the first test set and 0.952 on the independent test set. In a noninferiority test, UNet was comparable to three senior sonographers on the first test set and equivalent on the two test sets collected with different devices. On average, it took two seconds to process one case with a GPU and 2.4 s with a CPU. Conclusions: The deep learning approach shows good performance for levator hiatus segmentation and good generalization on independent test sets. This automatic levator hiatus segmentation approach could help shorten clinical examination time and improve consistency.

17.
Eur Radiol ; 32(8): 5371-5381, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35201408

ABSTRACT

OBJECTIVES: To examine the role of the ADC threshold on agreement across observers and deep learning models (DLMs), and on the segmentation performance of DLMs, for acute ischemic stroke (AIS). METHODS: Twelve DLMs, trained on DWI-ADC-ADC combinations from 76 patients with AIS using six different ADC thresholds, with ground truth manually contoured by two observers, were tested on an additional 67 patients from the same hospital and 78 patients from another hospital. Agreement between observers and DLMs was evaluated by Bland-Altman plots and the intraclass correlation coefficient (ICC). Similarity between the ground truths (GT) defined by the observers and between the automatic segmentations performed by the DLMs was evaluated with the Dice similarity coefficient (DSC). Group comparisons were performed using the Mann-Whitney U test. The relationship between the DSC and the ADC threshold, as well as the AIS lesion size, was evaluated by linear regression analysis. A p < .05 was considered statistically significant. RESULTS: Excellent interobserver agreement and intraobserver repeatability were achieved in the manual segmentation (all ICC > 0.98, p < .001). The 95% limit of agreement was reduced from 11.23 cm2 for GT on DWI to 0.59 cm2 for prediction at an ADC threshold of 0.6 × 10-3 mm2/s combined with DWI. The segmentation performance of the DLMs improved from an overall DSC of 0.738 ± 0.214 on DWI alone to 0.971 ± 0.021 at an ADC threshold of 0.6 × 10-3 mm2/s combined with DWI. CONCLUSIONS: Combining an ADC threshold of 0.6 × 10-3 mm2/s with DWI reduces interobserver and inter-DLM differences and achieves the best segmentation performance of AIS lesions using DLMs. KEY POINTS: • Higher Dice similarity coefficients (DSC) in predicting acute ischemic stroke lesions were achieved by ADC thresholds combined with DWI than by DWI alone (all p < .05). • DSC had a negative association with the ADC threshold for most lesion sizes, both hospitals, and both observers (most p < .05), and a positive association with stroke size for all ADC thresholds, both hospitals, and both observers (all p < .001). • An ADC threshold of 0.6 × 10-3 mm2/s eliminated the difference in DSC at any stroke size between observers and between hospitals (p = .07 to > .99).
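For clarity, the sketch below shows what applying an ADC threshold of 0.6 × 10-3 mm2/s in combination with a DWI mask amounts to at the voxel level; the DWI cutoff is a hypothetical placeholder, since in the study the DWI contribution is learned by the DLMs rather than thresholded.

```python
import numpy as np

ADC_THRESHOLD = 0.6e-3  # mm^2/s, the threshold the study found to work best

def adc_restricted_mask(adc_map: np.ndarray, dwi: np.ndarray, dwi_cutoff: float) -> np.ndarray:
    """Voxels that are DWI-hyperintense (above a hypothetical cutoff) AND below the ADC threshold.
    In the study the DWI component is learned by the DLMs rather than thresholded like this."""
    return np.logical_and(dwi > dwi_cutoff, adc_map < ADC_THRESHOLD)
```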


Subject(s)
Deep Learning , Ischemic Stroke , Stroke , Diffusion Magnetic Resonance Imaging , Humans , Ischemic Stroke/diagnostic imaging , Observer Variation , Stroke/diagnostic imaging
18.
Med Dosim ; 47(2): 136-141, 2022.
Article in English | MEDLINE | ID: mdl-34987001

ABSTRACT

To assess the feasibility of dynamic hybrid-phase computed tomography (CTDHP) simulation for patients undergoing lung stereotactic body radiation therapy (SBRT), eighteen non-small-cell lung-cancer patients were immobilised in a stereotactic body frame with abdominal compression. All underwent dynamic hybrid-phase CT scans that were compared with cone-beam CT (CBCT). We also determined the internal target volume (ITV) and evaluated the following four metrics: the "AND" function in the Boolean module of Eclipse, volume overlap (VO), Dice similarity coefficient (DSC), and the dose-volume histogram. The average ITV values for 4DCTDHP and 3D-CBCT were 12.82±10.42 and 14.6±12.18 cm3, respectively (n=72, p<0.001), and the average ITV of the AND structure was 11.7±10.1 cm3. The average planning target volumes (PTV) for 4DCTDHP and 3D-CBCT were 25.63±18.04 and 28.00±19.82 cm3 (n=72, p<0.001). The median AND difference between ITV and PTV was significant (p<0.01) and had a significantly linear distribution (R2=0.991 for ITV, R2=0.972 for PTV). The average VO of the PTV was greater than that of the ITV (0.81±0.096 vs. 0.78±0.11), and the average DSC of the PTV (0.83±0.066) was also greater than that of the ITV (0.81±0.084). On average, 97.9%±3.44 of ITVCBCT was covered by 95% of the prescribed dose, and the average minimum, maximum, and mean percentage doses to ITVCBCT were 87.9%±9.46, 107.3%±1.57, and 101.3%±1.12, respectively. This paper demonstrates the feasibility of dynamic hybrid-phase CT simulation for patients undergoing lung SBRT and reports evaluation metrics for scientific analysis. Our approach also has the advantages of an adequate margin and fewer phases in CT simulation.
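As a small illustration of the dose-coverage figures quoted above, the sketch below computes the fraction of ITV voxels receiving at least 95% of the prescription from a dose grid and an ITV mask, plus the minimum/maximum/mean dose inside the ITV as a percentage of the prescription; the array names are hypothetical.

```python
import numpy as np

def dose_coverage(dose_gy: np.ndarray, itv_mask: np.ndarray,
                  prescription_gy: float, level: float = 0.95) -> float:
    """Fraction of ITV voxels receiving at least `level` of the prescribed dose
    (cf. the reported 97.9% of ITV_CBCT covered by 95% of the prescription)."""
    doses = dose_gy[itv_mask.astype(bool)]
    return float(np.mean(doses >= level * prescription_gy))

def dose_stats_percent(dose_gy: np.ndarray, itv_mask: np.ndarray, prescription_gy: float):
    """Minimum, maximum, and mean dose inside the ITV as a percentage of the prescription."""
    doses = 100.0 * dose_gy[itv_mask.astype(bool)] / prescription_gy
    return doses.min(), doses.max(), doses.mean()
```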


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Radiosurgery , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/radiotherapy , Cone-Beam Computed Tomography/methods , Feasibility Studies , Four-Dimensional Computed Tomography/methods , Humans , Lung , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Radiosurgery/methods , Radiotherapy Planning, Computer-Assisted/methods
19.
J Appl Clin Med Phys ; 23(3): e13540, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35084081

ABSTRACT

An in-house hybrid deformable image registration (DIR) method, which combines free-form deformation (FFD) and the viscous fluid registration method, is proposed. Its results on the planning computed tomography (CT) and the day 1 treatment cone-beam CT (CBCT) image from 68 head and neck cancer patients are compared with the results of NiftyReg, which uses B-spline FFD alone. Several similarity metrics, the target registration error (TRE) of annotated points, as well as the Dice similarity coefficient (DSC) and Hausdorff distance (HD) of the propagated organs at risk are employed to analyze their registration accuracy. According to quantitative analysis on mutual information, normalized cross-correlation, and the absolute pixel value differences, the results of the proposed DIR are more similar to the CBCT images than the NiftyReg results. Smaller TRE of the annotated points is observed in the proposed method, and the overall mean TRE for the proposed method and NiftyReg was 2.34 and 2.98 mm, respectively (p < 0.001). The mean DSC in the larynx, spinal cord, oral cavity, mandible, and parotid given by the proposed method ranged from 0.78 to 0.91, significantly higher than the NiftyReg results (ranging from 0.77 to 0.90), and the HD was significantly lower compared to NiftyReg. Furthermore, the proposed method did not suffer from unrealistic deformations as the NiftyReg did in the visual evaluation. Meanwhile, the execution time of the proposed method was much higher than NiftyReg (96.98 ± 11.88 s vs. 4.60 ± 0.49 s). In conclusion, the in-house hybrid method gave better accuracy and more stable performance than NiftyReg.


Subject(s)
Head and Neck Neoplasms , Radiotherapy, Intensity-Modulated , Spiral Cone-Beam Computed Tomography , Algorithms , Cone-Beam Computed Tomography/methods , Head and Neck Neoplasms/diagnostic imaging , Head and Neck Neoplasms/radiotherapy , Humans , Image Processing, Computer-Assisted/methods , Radiotherapy Planning, Computer-Assisted/methods , Radiotherapy, Intensity-Modulated/methods , Tomography, X-Ray Computed/methods
20.
Res Diagn Interv Imaging ; 1: 100003, 2022 Mar.
Article in English | MEDLINE | ID: mdl-37520010

ABSTRACT

Objectives: 1) To develop a deep learning (DL) pipeline allowing quantification of COVID-19 pulmonary lesions on low-dose computed tomography (LDCT). 2) To assess the prognostic value of DL-driven lesion quantification. Methods: This monocentric retrospective study included training and test datasets taken from 144 and 30 patients, respectively. The reference was the manual segmentation of 3 labels: normal lung, ground-glass opacity (GGO), and consolidation (Cons). Model performance was evaluated with technical metrics, disease volume, and extent. Intra- and interobserver agreement were recorded. The prognostic value of DL-driven disease extent was assessed in 1621 distinct patients using C-statistics. The end point was a combined outcome defined as death, hospitalization > 10 days, intensive care unit hospitalization, or oxygen therapy. Results: The Dice coefficients for lesion (GGO+Cons) segmentations were 0.75±0.08, exceeding the values for human interobserver (0.70±0.08; 0.70±0.10) and intraobserver measures (0.72±0.09). DL-driven lesion quantification had a stronger correlation with the reference than inter- or intraobserver measures. After stepwise selection and adjustment for clinical characteristics, quantification significantly increased the prognostic accuracy of the model (0.82 vs. 0.90; p<0.0001). Conclusions: A DL-driven model can provide reproducible and accurate segmentation of COVID-19 lesions on LDCT. Automatic lesion quantification has independent prognostic value for the identification of high-risk patients.
