Results 1 - 10 of 10
1.
Med Phys ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980065

ABSTRACT

BACKGROUND: Protoacoustic (PA) imaging has the potential to provide real-time 3D dose verification of proton therapy. However, PA images are susceptible to severe distortion due to limited-angle acquisition. Our previous studies showed the potential of using deep learning to enhance PA images. As the model was trained using a limited number of patients' data, its efficacy was limited when applied to individual patients. PURPOSE: In this study, we developed a patient-specific deep learning method for protoacoustic imaging to improve the reconstruction quality and the accuracy of dose verification for individual patients. METHODS: Our method consists of two stages: in the first stage, a group model is trained on a diverse training set containing all patients, where a novel deep learning network is employed to directly reconstruct the initial pressure maps from the radiofrequency (RF) signals; in the second stage, we apply transfer learning to the pre-trained group model, using a patient-specific dataset derived from a novel data augmentation method, to tune it into a patient-specific model. Raw PA signals were simulated based on computed tomography (CT) images and the pressure map derived from the planned dose. The reconstructed PA images were evaluated against the ground truth using the root mean squared error (RMSE), structural similarity index measure (SSIM), and gamma index on 10 prostate cancer patients. Significance was evaluated by t-test with a p-value threshold of 0.05, compared with the results from the group model. RESULTS: The patient-specific model achieved an average RMSE of 0.014 (p < 0.05) and an average SSIM of 0.981 (p < 0.05), outperforming the group model. Qualitative results also demonstrated that our patient-specific approach achieved better image quality, with more details reconstructed, compared with the group model.
Dose verification achieved an average RMSE of 0.011 (p < 0.05) and an average SSIM of 0.995 (p < 0.05). Gamma index evaluation demonstrated high agreement (97.4% [p < 0.05] and 97.9% [p < 0.05] for 1%/3 mm and 1%/5 mm) between the predicted and the ground truth dose maps. Our approach took approximately 6 s to reconstruct PA images for each patient, demonstrating its feasibility for online 3D dose verification for prostate proton therapy. CONCLUSIONS: Our method demonstrated the feasibility of achieving high-precision 3D PA-based dose verification using patient-specific deep learning, which can potentially be used to guide treatment, mitigate the impact of range uncertainty, and improve precision. Further studies are needed to validate the clinical impact of the technique.
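The RMSE and SSIM figures quoted above follow standard definitions; the sketch below (not the authors' code, and a single-window simplification of SSIM, which is normally averaged over local sliding windows) shows how such image-quality comparisons are computed:

```python
import numpy as np

def rmse(pred, gt):
    """Root mean squared error between predicted and ground-truth maps."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    return float(np.sqrt(np.mean((pred - gt) ** 2)))

def ssim(pred, gt, data_range=1.0, k1=0.01, k2=0.03):
    """Global (single-window) SSIM; the standard formulation averages
    this statistic over local sliding windows instead."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mu_p, mu_g = pred.mean(), gt.mean()
    var_p, var_g = pred.var(), gt.var()
    cov = ((pred - mu_p) * (gt - mu_g)).mean()
    return ((2 * mu_p * mu_g + c1) * (2 * cov + c2)) / (
        (mu_p ** 2 + mu_g ** 2 + c1) * (var_p + var_g + c2))
```

Lower RMSE and SSIM closer to 1 both indicate a reconstruction nearer the ground truth, which is why the paired drop in RMSE and rise in SSIM support the patient-specific model.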

2.
Phys Med Biol ; 69(8)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471184

ABSTRACT

Objective. Protoacoustic imaging showed great promise in providing real-time 3D dose verification of proton therapy. However, the limited acquisition angle in protoacoustic imaging induces severe artifacts, which impair its accuracy for dose verification. In this study, we developed a hybrid-supervised deep learning method for protoacoustic imaging to address the limited-view issue. Approach. We proposed a Recon-Enhance two-stage deep learning method. In the Recon-stage, a transformer-based network was developed to reconstruct initial pressure maps from raw acoustic signals. The network is trained in a hybrid-supervised approach, where it is first trained using supervision by the iteratively reconstructed pressure map and then fine-tuned using transfer learning and self-supervision based on the data fidelity constraint. In the Enhance-stage, a 3D U-net is applied to further enhance the image quality with supervision from the ground truth pressure map. The final protoacoustic images are then converted to dose for proton verification. Main results. The results evaluated on a dataset of 126 prostate cancer patients achieved an average root mean squared error (RMSE) of 0.0292 and an average structural similarity index measure (SSIM) of 0.9618, outperforming related state-of-the-art methods. Qualitative results also demonstrated that our approach addressed the limited-view issue with more details reconstructed. Dose verification achieved an average RMSE of 0.018 and an average SSIM of 0.9891. Gamma index evaluation demonstrated a high agreement (94.7% and 95.7% for 1%/3 mm and 1%/5 mm) between the predicted and the ground truth dose maps. Notably, the processing time was reduced to 6 s, demonstrating its feasibility for online 3D dose verification for prostate proton therapy. Significance.
Our study achieved state-of-the-art performance in the challenging task of direct reconstruction from radiofrequency signals, demonstrating the great promise of PA imaging as a highly efficient and accurate tool for in vivo 3D proton dose verification, minimizing the range uncertainties of proton therapy to improve its precision and outcomes.
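The 1%/3 mm and 1%/5 mm gamma criteria above combine a dose tolerance with a distance-to-agreement tolerance. A hedged 1D sketch of the gamma-index idea follows (clinical tools compute this in 3D with interpolation; the function name, defaults, and low-dose cutoff here are illustrative, not the paper's implementation):

```python
import numpy as np

def gamma_pass_rate(ref, eva, coords, dose_tol=0.01, dist_tol=3.0, cutoff=0.1):
    """Global gamma pass rate (%) for 1D dose profiles, brute force.
    dose_tol is a fraction of the reference maximum (0.01 -> 1%);
    dist_tol and coords share the same unit (e.g. mm)."""
    ref, eva = np.asarray(ref, float), np.asarray(eva, float)
    coords = np.asarray(coords, float)
    dd = dose_tol * ref.max()            # absolute dose tolerance
    gammas = []
    for x, d in zip(coords, ref):
        if d < cutoff * ref.max():       # skip the low-dose region
            continue
        g2 = ((eva - d) / dd) ** 2 + ((coords - x) / dist_tol) ** 2
        gammas.append(np.sqrt(g2.min())) # best match over all positions
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)
```

A point passes when some nearby evaluated dose lies within both tolerances (gamma <= 1), so the 94.7% and 95.7% figures are the fraction of dose points meeting the respective criteria.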


Subject(s)
Deep Learning , Proton Therapy , Male , Humans , Protons , Imaging, Three-Dimensional , Prostate , Image Processing, Computer-Assisted/methods
3.
ArXiv ; 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37608936

ABSTRACT

Protoacoustic imaging showed great promise in providing real-time 3D dose verification of proton therapy. However, the limited acquisition angle in protoacoustic imaging induces severe artifacts, which significantly impair its accuracy for dose verification. In this study, we developed a deep learning method with a Recon-Enhance two-stage strategy for protoacoustic imaging to address the limited-view issue. Specifically, in the Recon-stage, a transformer-based network was developed to reconstruct initial pressure maps from radiofrequency signals. The network is trained in a hybrid-supervised approach, where it is first trained using supervision by the iteratively reconstructed pressure map and then fine-tuned using transfer learning and self-supervision based on the data fidelity constraint. In the Enhance-stage, a 3D U-net is applied to further enhance the image quality with supervision from the ground truth pressure map. The final protoacoustic images are then converted to dose for proton verification. The results evaluated on a dataset of 126 prostate cancer patients achieved an average RMSE of 0.0292 and an average SSIM of 0.9618, significantly outperforming related state-of-the-art methods. Qualitative results also demonstrated that our approach addressed the limited-view issue with more details reconstructed. Dose verification achieved an average RMSE of 0.018 and an average SSIM of 0.9891. Gamma index evaluation demonstrated a high agreement (94.7% and 95.7% for 1%/3 mm and 1%/5 mm) between the predicted and the ground truth dose maps. Notably, the processing time was reduced to 6 seconds, demonstrating its feasibility for online 3D dose verification for prostate proton therapy.

4.
IEEE Trans Med Imaging ; 41(10): 2856-2866, 2022 10.
Article in English | MEDLINE | ID: mdl-35544487

ABSTRACT

Cephalometric analysis relies on accurate detection of craniomaxillofacial (CMF) landmarks from cone-beam computed tomography (CBCT) images. However, due to the complexity of CMF bony structures, it is difficult to localize landmarks efficiently and accurately. In this paper, we propose a deep learning framework to tackle this challenge by jointly digitizing 105 CMF landmarks on CBCT images. By explicitly learning the local geometrical relationships between the landmarks, our approach extends Mask R-CNN for end-to-end prediction of landmark locations. Specifically, we first apply a detection network on a down-sampled 3D image to leverage global contextual information to predict the approximate locations of the landmarks. We subsequently leverage local information provided by higher-resolution image patches to refine the landmark locations. On patients with varying non-syndromic jaw deformities, our method achieves an average detection accuracy of 1.38 ± 0.95 mm, outperforming a related state-of-the-art method.
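The coarse-to-fine scheme described above (coarse prediction on a down-sampled volume, refinement in a high-resolution patch) can be sketched as follows. This is illustrative only: in the paper both stages are learned networks, whereas here a plain argmax on a synthetic response volume stands in for each prediction:

```python
import numpy as np

def detect_landmark(volume, down=4, patch=9):
    """Coarse-to-fine localization sketch on a 3D response volume."""
    # Stage 1: coarse location from the down-sampled volume (global context).
    coarse = volume[::down, ::down, ::down]
    cz, cy, cx = np.unravel_index(np.argmax(coarse), coarse.shape)
    cz, cy, cx = cz * down, cy * down, cx * down
    # Stage 2: refine inside a full-resolution patch around the coarse guess.
    r = patch // 2
    z0, y0, x0 = (max(c - r, 0) for c in (cz, cy, cx))
    sub = volume[z0:z0 + patch, y0:y0 + patch, x0:x0 + patch]
    dz, dy, dx = np.unravel_index(np.argmax(sub), sub.shape)
    return (int(z0 + dz), int(y0 + dy), int(x0 + dx))
```

The design choice is the usual accuracy/memory trade-off: the down-sampled pass keeps the whole head in view cheaply, while the patch pass recovers the sub-voxel detail lost to down-sampling.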


Subject(s)
Spiral Cone-Beam Computed Tomography , Anatomic Landmarks , Cephalometry/methods , Cone-Beam Computed Tomography/methods , Humans , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Reproducibility of Results
5.
Article in English | MEDLINE | ID: mdl-34927176

ABSTRACT

Virtual orthognathic surgical planning involves simulating surgical corrections of jaw deformities on 3D facial bony shape models. Due to the lack of necessary guidance, the planning procedure is highly experience-dependent and the planning results are often suboptimal. A reference facial bony shape model representing normal anatomies can provide an objective guidance to improve planning accuracy. Therefore, we propose a self-supervised deep framework to automatically estimate reference facial bony shape models. Our framework is an end-to-end trainable network, consisting of a simulator and a corrector. In the training stage, the simulator maps jaw deformities of a patient bone to a normal bone to generate a simulated deformed bone. The corrector then restores the simulated deformed bone back to normal. In the inference stage, the trained corrector is applied to generate a patient-specific normal-looking reference bone from a real deformed bone. The proposed framework was evaluated using a clinical dataset and compared with a state-of-the-art method that is based on a supervised point-cloud network. Experimental results show that the estimated shape models given by our approach are clinically acceptable and significantly more accurate than those of the competing method.

6.
Article in English | MEDLINE | ID: mdl-34927177

ABSTRACT

Dental landmark localization is a fundamental step to analyzing dental models in the planning of orthodontic or orthognathic surgery. However, current clinical practices require clinicians to manually digitize more than 60 landmarks on 3D dental models. Automatic methods to detect landmarks can release clinicians from the tedious labor of manual annotation and improve localization accuracy. Most existing landmark detection methods fail to capture local geometric contexts, causing large errors and misdetections. We propose an end-to-end learning framework to automatically localize 68 landmarks on high-resolution dental surfaces. Our network hierarchically extracts multi-scale local contextual features along two paths: a landmark localization path and a landmark area-of-interest segmentation path. Higher-level features are learned by combining local-to-global features from the two paths by feature fusion to predict the landmark heatmap and the landmark area segmentation map. An attention mechanism is then applied to the two maps to refine the landmark position. We evaluated our framework on a real-patient dataset consisting of 77 high-resolution dental surfaces. Our approach achieves an average localization error of 0.42 mm, significantly outperforming related state-of-the-art methods.

7.
Article in English | MEDLINE | ID: mdl-34966912

ABSTRACT

Facial appearance changes with the movements of bony segments in orthognathic surgery of patients with craniomaxillofacial (CMF) deformities. Conventional biomechanical methods for simulating such changes, such as finite element modeling (FEM), are labor-intensive and computationally expensive, preventing them from being used in clinical settings. To overcome these limitations, we propose a deep learning framework to predict post-operative facial changes. Specifically, FC-Net, a facial appearance change simulation network, is developed to predict the point displacement vectors associated with a facial point cloud. FC-Net learns the point displacements of a pre-operative facial point cloud from the bony movement vectors between pre-operative and simulated post-operative bony models. FC-Net is a weakly-supervised point displacement network trained using paired data with strict point-to-point correspondence. To preserve the topology of the facial model during point transform, we employ a local-point-transform loss to constrain the local movements of points. Experimental results on real patient data reveal that the proposed framework can predict post-operative facial appearance changes remarkably faster than a state-of-the-art FEM method with comparable prediction accuracy.
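The local-point-transform loss mentioned above constrains nearby points to move coherently. The abstract does not give the formula, so the sketch below is one plausible form of such a penalty, assumed for illustration: each point's displacement is pulled toward the mean displacement of its k nearest neighbors:

```python
import numpy as np

def local_point_transform_loss(points, displacements, k=4):
    """Illustrative local-consistency penalty (assumed form, not the
    paper's published loss): deviation of each point's displacement
    from the mean displacement of its k nearest neighbors."""
    pts = np.asarray(points, float)
    disp = np.asarray(displacements, float)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)  # pairwise dist^2
    np.fill_diagonal(d2, np.inf)              # a point is not its own neighbor
    nbrs = np.argsort(d2, axis=1)[:, :k]      # k nearest neighbors per point
    nbr_mean = disp[nbrs].mean(axis=1)        # mean neighbor displacement
    return float(np.mean(((disp - nbr_mean) ** 2).sum(-1)))
```

A rigid translation of the whole cloud incurs zero penalty under this form, while displacements that tear neighboring points apart are penalized, which is the topology-preserving behavior the abstract describes.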

8.
Med Phys ; 48(12): 7735-7746, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34309844

ABSTRACT

PURPOSE: The purpose of this study was to reduce the experience dependence during orthognathic surgical planning, which involves virtually simulating the corrective procedure for jaw deformities. METHODS: We introduce a geometric deep learning framework for generating reference facial bone shape models for objective guidance in surgical planning. First, we propose a surface deformation network to warp a patient's deformed bone to a set of normal bones for generating a dictionary of patient-specific normal bony shapes. Subsequently, sparse representation learning is employed to estimate a reference shape model based on the dictionary. RESULTS: We evaluated our method on a clinical dataset containing 24 patients, and compared it with a state-of-the-art method that relies on landmark-based sparse representation. Our method yields significantly higher accuracy than the competing method for estimating normal jaws and maintains the midfaces of patients' facial bones as well as the conventional method does. CONCLUSIONS: Experimental results indicate that our method generates accurate shape models that meet clinical standards.
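The dictionary-plus-sparse-representation step can be sketched as below, using a simplified orthogonal matching pursuit to combine a few dictionary shapes. The paper's sparse-learning formulation is not published in this abstract, so the function name, greedy selection, and atom count are illustrative assumptions:

```python
import numpy as np

def reference_from_dictionary(dictionary, target, n_atoms=3):
    """Estimate a reference shape as a sparse combination of dictionary
    shapes (simplified orthogonal matching pursuit). `dictionary` is
    (n_coords, n_shapes); `target` is the flattened deformed shape."""
    D = np.asarray(dictionary, float)
    t = np.asarray(target, float)
    residual, chosen = t.copy(), []
    for _ in range(n_atoms):
        corr = np.abs(D.T @ residual)      # correlation with each atom
        corr[chosen] = -np.inf             # never reselect an atom
        chosen.append(int(np.argmax(corr)))
        sub = D[:, chosen]
        w, *_ = np.linalg.lstsq(sub, t, rcond=None)
        residual = t - sub @ w             # refit weights, update residual
    return sub @ w, chosen
```

Sparsity is the point of the design: the reference is forced to be a blend of a few plausible normal shapes rather than an unconstrained fit to the deformed input.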


Subject(s)
Jaw Abnormalities , Orthognathic Surgical Procedures , Humans , Imaging, Three-Dimensional , Jaw , Unsupervised Machine Learning
9.
IEEE J Biomed Health Inform ; 25(8): 2958-2966, 2021 08.
Article in English | MEDLINE | ID: mdl-33497345

ABSTRACT

Orthognathic surgical outcomes rely heavily on the quality of surgical planning. Automatic estimation of a reference facial bone shape significantly reduces experience-dependent variability and improves planning accuracy and efficiency. We propose an end-to-end deep learning framework to estimate patient-specific reference bony shape models for patients with orthognathic deformities. Specifically, we apply a point-cloud network to learn a vertex-wise deformation field from a patient's deformed bony shape, represented as a point cloud. The estimated deformation field is then used to correct the deformed bony shape to output a patient-specific reference bony surface model. To train our network effectively, we introduce a simulation strategy to synthesize deformed bones from any given normal bone, producing a relatively large and diverse dataset of shapes for training. Our method was evaluated using both synthetic and real patient data. Experimental results show that our framework estimates realistic reference bony shape models for patients with varying deformities. The performance of our method is consistently better than that of an existing method and several deep point-cloud networks. Our end-to-end estimation framework based on geometric deep learning shows great potential for improving clinical workflows.
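The simulation strategy described above turns one normal bone into many synthetic deformed training samples. A minimal sketch of the idea follows; the smooth sinusoidal displacement modes are an illustrative assumption, not the paper's actual deformation generator:

```python
import numpy as np

def synthesize_deformed_bone(points, n_modes=3, scale=0.05, seed=0):
    """Perturb a normal bone point cloud with a few smooth, low-frequency
    displacement modes to create a synthetic (deformed, field) training
    pair. Modes are illustrative, not the paper's generator."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    disp = np.zeros_like(pts)
    for _ in range(n_modes):
        freq = rng.uniform(0.5, 2.0, size=3)      # spatial frequency per axis
        phase = rng.uniform(0.0, 2.0 * np.pi, size=3)
        amp = rng.normal(scale=scale, size=3)     # per-axis amplitude
        disp += amp * np.sin(pts * freq + phase)  # smooth displacement field
    return pts + disp, disp                       # (deformed, ground-truth field)
```

Because the ground-truth field is known by construction, the network can be supervised on the vertex-wise deformation directly, which is what makes the synthetic dataset useful for training the correction model.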


Subject(s)
Deep Learning , Orthognathic Surgical Procedures , Bone and Bones , Humans
10.
Med Image Comput Comput Assist Interv ; 12264: 817-826, 2020 Oct.
Article in English | MEDLINE | ID: mdl-34927175

ABSTRACT

Landmark localization is an important step in quantifying craniomaxillofacial (CMF) deformities and designing treatment plans of reconstructive surgery. However, due to the severity of deformities and defects (partially missing anatomy), it is difficult to automatically and accurately localize a large set of landmarks simultaneously. In this work, we propose two cascaded networks for digitizing 60 anatomical CMF landmarks in cone-beam computed tomography (CBCT) images. The first network is a U-Net that outputs heatmaps for landmark locations and landmark features extracted with a local attention mechanism. The second network is a graph convolution network that takes the features extracted by the first network as input and determines whether each landmark exists via binary classification. We evaluated our approach on 50 sets of CBCT scans of patients with CMF deformities and compared it with state-of-the-art methods. The results indicate that our approach can achieve an average detection error of 1.47 mm with a false positive rate of 19%, outperforming related methods.
