Results 1 - 20 of 33
1.
Curr Med Imaging ; 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39257152

ABSTRACT

BACKGROUND: Accurately modeling respiratory motion in medical images is crucial for various applications, including radiation therapy planning. However, existing registration methods often struggle to extract local features effectively, limiting their performance. OBJECTIVE: In this paper, we propose a new framework, CvTMorph, which utilizes a Convolutional vision Transformer (CvT) and Convolutional Neural Networks (CNN) to improve local feature extraction. METHODS: CvTMorph integrates CvT and CNN to construct a hybrid model that combines the strengths of both approaches. Additionally, scaling and squaring layers are added to enhance the registration performance. We evaluated the performance of CvTMorph on the 4D-Lung and DIR-Lab datasets and compared it with state-of-the-art methods to demonstrate its effectiveness. RESULTS: The experimental results demonstrated that CvTMorph outperforms existing methods in accuracy and robustness for respiratory motion modeling in 4D images. The incorporation of the convolutional vision transformer significantly improved the registration performance and enhanced the representation of local structures. CONCLUSION: CvTMorph offers a promising solution for accurately modeling respiratory motion in 4D medical images. The hybrid model, leveraging a convolutional vision transformer and convolutional neural networks, has proven effective in extracting local features and improving registration performance. The results highlight the potential of CvTMorph for applications such as radiation therapy planning and provide a basis for further research in this field.
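
The warping step at the heart of learning-based registration frameworks such as the one described above can be illustrated with a generic 2D spatial-transformer layer: the network predicts a dense displacement field, and the moving image is resampled onto the fixed image grid. The sketch below is a minimal, hypothetical PyTorch illustration of that common building block, not the authors' CvTMorph implementation; the function name `warp_2d` and all tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def warp_2d(moving, disp):
    """Warp a moving image with a dense displacement field (in pixels).

    moving: (N, 1, H, W) image tensor.
    disp:   (N, 2, H, W) displacement field; channel 0 = x (columns), channel 1 = y (rows).
    """
    n, _, h, w = moving.shape
    # identity sampling grid in pixel coordinates
    xs = torch.arange(w, dtype=moving.dtype).view(1, 1, w).expand(n, h, w)
    ys = torch.arange(h, dtype=moving.dtype).view(1, h, 1).expand(n, h, w)
    x_new = xs + disp[:, 0]
    y_new = ys + disp[:, 1]
    # grid_sample expects (x, y) coordinates normalized to [-1, 1]
    grid = torch.stack((2 * x_new / (w - 1) - 1, 2 * y_new / (h - 1) - 1), dim=-1)
    return F.grid_sample(moving, grid, mode="bilinear", align_corners=True)

# toy usage: a zero displacement field returns the input (up to interpolation error)
img = torch.rand(1, 1, 64, 64)
warped = warp_2d(img, torch.zeros(1, 2, 64, 64))
```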

2.
Neural Netw ; 179: 106539, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39089149

ABSTRACT

Significant progress has been achieved in multi-object tracking (MOT) through the evolution of detection and re-identification (ReID) techniques. Despite these advancements, accurately tracking objects in scenarios with homogeneous appearance and heterogeneous motion remains a challenge. This challenge arises from two main factors: the insufficient discriminability of ReID features and the predominant utilization of linear motion models in MOT. In this context, we introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor that relies solely on object trajectory information. This predictor comprehensively integrates two levels of granularity in motion features to enhance the modeling of temporal dynamics and facilitate precise future motion prediction for individual objects. Specifically, the proposed approach adopts a self-attention mechanism to capture token-level information and a Dynamic MLP layer to model channel-level features. MotionTrack is a simple, online tracking approach. Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as Dancetrack and SportsMOT, characterized by highly complex object motion.


Subject(s)
Motion (Physics) , Humans , Neural Networks, Computer , Algorithms , Machine Learning , Motion Perception/physiology
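
To make the token-level attention described in the entry above concrete, the toy NumPy sketch below applies scaled dot-product self-attention to a short embedded trajectory and reads a next-frame motion prediction off the last token. The dimensions, random weights, and the linear prediction head are assumptions for illustration; the actual MotionTrack predictor additionally models channel-level features with a Dynamic MLP.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 8, 16                                  # trajectory length and token dimension
tokens = rng.normal(size=(T, d))              # embedded past motion/box tokens of one object
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
w_head = rng.normal(size=(d, 4))              # linear head -> (dx, dy, dw, dh)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over trajectory tokens."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

context = self_attention(tokens, wq, wk, wv)
next_motion = context[-1] @ w_head            # predicted displacement for the next frame
```
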
3.
Phys Med Biol ; 69(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38697195

ABSTRACT

Objective. Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few x-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g. breathing). Approach. We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired x-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular x-ray projections. Specifically, PMF-STINR uses spatial implicit neural representations to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion of the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results. PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (∼ 0.1 s) resolution and sub-millimeter accuracy. Significance. PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Cone-Beam Computed Tomography/methods , Humans , Image Processing, Computer-Assisted/methods , Phantoms, Imaging , Machine Learning
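
The central object in implicit neural representation (INR) approaches such as the one above is a coordinate network that maps a spatial position to an attenuation value; the volume is "stored" in the network weights and optimized so that its simulated projections match the measured x-rays. Below is a minimal PyTorch coordinate MLP with Fourier positional encoding as a rough sketch of that idea; the class name, layer sizes, and frequency count are assumptions, and the paper's projection matching, temporal INR, and B-spline motion model are not reproduced.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps (x, y, z) points in [-1, 1]^3 to a scalar attenuation value."""
    def __init__(self, n_freqs=8, hidden=128):
        super().__init__()
        self.register_buffer(
            "freqs", torch.pi * 2.0 ** torch.arange(n_freqs, dtype=torch.float32))
        self.net = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1))

    def forward(self, xyz):                       # xyz: (N, 3)
        proj = xyz.unsqueeze(-1) * self.freqs     # (N, 3, n_freqs)
        enc = torch.cat([proj.sin(), proj.cos()], dim=-1).flatten(1)
        return self.net(enc)

# toy usage: query attenuation at random points; in practice the weights are
# optimized so that simulated projections of the volume match the measured x-rays
model = CoordinateMLP()
values = model(torch.rand(1024, 3) * 2 - 1)
```
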
4.
Int J Hyperthermia ; 41(1): 2321980, 2024.
Article in English | MEDLINE | ID: mdl-38616245

ABSTRACT

BACKGROUND: A method for periprocedural contrast agent-free visualization of uterine fibroid perfusion could potentially shorten magnetic resonance-guided high intensity focused ultrasound (MR-HIFU) treatment times and improve outcomes. Our goal was to test the feasibility of perfusion fraction mapping by intravoxel incoherent motion (IVIM) modeling using diffusion-weighted MRI as a method for visual evaluation of MR-HIFU treatment progression. METHODS: Conventional and T2-corrected IVIM-derived perfusion fraction maps were retrospectively calculated by applying two fitting methods to diffusion-weighted MRI data (b = 0, 50, 100, 200, 400, 600 and 800 s/mm2 at 1.5 T) from forty-four premenopausal women who underwent MR-HIFU ablation treatment of uterine fibroids. Contrast in perfusion fraction maps between areas with low perfusion fraction and surrounding tissue in the target uterine fibroid immediately following MR-HIFU treatment was evaluated. Additionally, the Dice similarity coefficient (DSC) was calculated between delineated areas with low IVIM-derived perfusion fraction and areas of hypoperfusion based on CE-T1w. RESULTS: Average perfusion fraction ranged between 0.068 and 0.083 in areas with low perfusion fraction based on visual assessment, and between 0.256 and 0.335 in surrounding tissues (all p < 0.001). DSCs ranged from 0.714 to 0.734 between areas with low perfusion fraction and the CE-T1w-derived non-perfused areas, with excellent intraobserver reliability of the delineated areas (ICC 0.97). CONCLUSION: The MR-HIFU treatment effect in uterine fibroids can be visualized using IVIM perfusion fraction mapping, in moderate concordance with contrast-enhanced MRI. IVIM perfusion fraction mapping therefore has the potential to serve as a contrast agent-free imaging method to visualize MR-HIFU treatment progression in uterine fibroids.


Subject(s)
Leiomyoma , Magnetic Resonance Imaging , Female , Humans , Reproducibility of Results , Retrospective Studies , Perfusion , Leiomyoma/diagnostic imaging , Leiomyoma/surgery
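
The perfusion fraction referenced above comes from the standard biexponential IVIM signal model, S(b)/S0 = f·exp(-b·D*) + (1-f)·exp(-b·D). A minimal voxel-wise fit with SciPy, using the same b-values as the study, might look like the sketch below; the synthetic signal, starting values, and parameter bounds are hypothetical, and the authors' pipeline (including the T2-corrected variant) is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

b = np.array([0, 50, 100, 200, 400, 600, 800], dtype=float)   # s/mm^2, as in the study

def ivim(b, s0, f, d_star, d):
    """Biexponential IVIM model: perfusion fraction f, pseudo-diffusion D*, diffusion D."""
    return s0 * (f * np.exp(-b * d_star) + (1 - f) * np.exp(-b * d))

# hypothetical voxel signal generated with f = 0.08, D* = 0.02, D = 0.0012 mm^2/s
signal = ivim(b, 1.0, 0.08, 0.02, 0.0012)

p0 = [signal[0], 0.1, 0.01, 0.001]                    # initial guesses
bounds = ([0, 0, 0.003, 0], [np.inf, 1, 0.5, 0.003])  # keep D* and D in plausible ranges
(s0_fit, f_fit, dstar_fit, d_fit), _ = curve_fit(ivim, b, signal, p0=p0, bounds=bounds)
print(f"fitted perfusion fraction f = {f_fit:.3f}")
```
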
5.
ArXiv ; 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38013886

ABSTRACT

Objective: Dynamic cone-beam computed tomography (CBCT) can capture high-spatial-resolution, time-varying images for motion monitoring, patient setup, and adaptive planning of radiotherapy. However, dynamic CBCT reconstruction is an extremely ill-posed spatiotemporal inverse problem, as each CBCT volume in the dynamic sequence is only captured by one or a few X-ray projections, due to the slow gantry rotation speed and the fast anatomical motion (e.g., breathing). Approach: We developed a machine learning-based technique, prior-model-free spatiotemporal implicit neural representation (PMF-STINR), to reconstruct dynamic CBCTs from sequentially acquired X-ray projections. PMF-STINR employs a joint image reconstruction and registration approach to address the under-sampling challenge, enabling dynamic CBCT reconstruction from singular X-ray projections. Specifically, PMF-STINR uses spatial implicit neural representation to reconstruct a reference CBCT volume, and it applies temporal INR to represent the intra-scan dynamic motion with respect to the reference CBCT to yield dynamic CBCTs. PMF-STINR couples the temporal INR with a learning-based B-spline motion model to capture time-varying deformable motion during the reconstruction. Compared with the previous methods, the spatial INR, the temporal INR, and the B-spline model of PMF-STINR are all learned on the fly during reconstruction in a one-shot fashion, without using any patient-specific prior knowledge or motion sorting/binning. Main results: PMF-STINR was evaluated via digital phantom simulations, physical phantom measurements, and a multi-institutional patient dataset featuring various imaging protocols (half-fan/full-fan, full sampling/sparse sampling, different energy and mAs settings, etc.). The results showed that the one-shot learning-based PMF-STINR can accurately and robustly reconstruct dynamic CBCTs and capture highly irregular motion with high temporal (~0.1s) resolution and sub-millimeter accuracy. Significance: PMF-STINR can reconstruct dynamic CBCTs and solve the intra-scan motion from conventional 3D CBCT scans without using any prior anatomical/motion model or motion sorting/binning. It can be a promising tool for motion management by offering richer motion information than traditional 4D-CBCTs.

6.
J Appl Clin Med Phys ; 24(12): e14146, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37696265

ABSTRACT

OBJECTIVES: The CyberKnife system is a robotic radiosurgery platform that allows the delivery of lung SBRT treatments using fiducial-free soft-tissue tracking. However, not all lung cancer patients are eligible for lung tumor tracking. Tumor size, density, and location impact the ability to successfully detect and track a lung lesion in 2D orthogonal X-ray images. The standard workflow to identify successful candidates for lung tumor tracking is called Lung Optimized Treatment (LOT) simulation, and involves multiple steps from CT acquisition to the execution of the simulation plan on CyberKnife. The aim of the study is to develop a deep learning classification model to predict which patients can be successfully treated with lung tumor tracking, thus circumventing the LOT simulation process. METHODS: Target tracking is achieved by matching orthogonal X-ray images with a library of digitally reconstructed radiographs (DRRs) generated from the simulation CT scan. We developed a deep learning model to perform binary classification of lung lesions as trackable or untrackable based on the tumor template DRR extracted from the CyberKnife system, and tested five different network architectures. The study included a total of 271 images (230 trackable, 41 untrackable) from 129 patients with one or multiple lung lesions. Eighty percent of the images were used for training, 10% for validation, and the remaining 10% for testing. RESULTS: For all five convolutional neural networks, the binary classification accuracy reached 100% after training, both in the validation and the test set, without any false classifications. CONCLUSIONS: A deep learning model can distinguish features of trackable and untrackable lesions in DRR images, and can predict successful candidates for fiducial-free lung tumor tracking.


Subject(s)
Deep Learning , Lung Neoplasms , Robotics , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Lung Neoplasms/surgery , Lung , Computer Simulation
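
As a rough illustration of the classification setup above, the toy PyTorch model below maps a single-channel DRR tumor-template patch to a trackable/untrackable logit. The architecture, patch size, and names are assumptions made for this sketch; the study itself compared five established network architectures on 271 labeled templates.

```python
import torch
import torch.nn as nn

class DRRTrackabilityNet(nn.Module):
    """Toy CNN scoring a DRR tumor-template patch as trackable (1) or untrackable (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(32, 1)

    def forward(self, x):                          # x: (N, 1, H, W) DRR patches
        return self.classifier(self.features(x).flatten(1))   # logits for BCEWithLogitsLoss

model = DRRTrackabilityNet()
logits = model(torch.randn(4, 1, 64, 64))          # four hypothetical 64x64 patches
```
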
7.
Eur J Radiol ; 167: 111051, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37632999

ABSTRACT

PURPOSE: Magnetic resonance imaging (MRI) can reduce the need for unnecessary invasive diagnostic tests by nearly half. In this meta-analysis, we investigated the diagnostic accuracy of intravoxel incoherent motion (IVIM) modeling and dynamic contrast-enhanced (DCE) MRI in differentiating benign from malignant breast lesions. METHOD: We systematically searched PubMed, EMBASE, and Scopus. We included English articles reporting diagnostic accuracy for both sequences in differentiating benign from malignant breast lesions. Articles were assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) questionnaire. We used a bivariate effects model for standardized mean difference (SMD) analysis and diagnostic test accuracy analysis. RESULTS: Ten studies with 537 patients and 707 (435 malignant and 272 benign) lesions were included. The mean D, f, Ktrans, and Kep values differed significantly between benign and malignant lesions. The pooled sensitivity (95 % confidence interval) and specificity were 86.2 % (77.9 %-91.7 %) and 70.3 % (56.5 %-81.1 %) for IVIM, and 93.8 % (85.3 %-97.5 %) and 68.1 % (52.7 %-80.4 %) for DCE, respectively. Combined IVIM and DCE yielded the highest area under the curve of 0.94, with a sensitivity and specificity of 91.8 % (82.8 %-96.3 %) and 87.6 % (73.8 %-94.7 %), respectively. CONCLUSIONS: Combined IVIM and DCE had the highest diagnostic accuracy, and multiparametric MRI may help reduce unnecessary benign breast biopsies.


Subject(s)
Contrast Media , Diffusion Magnetic Resonance Imaging , Humans , Diffusion Magnetic Resonance Imaging/methods , Magnetic Resonance Imaging/methods , Breast/diagnostic imaging , Sensitivity and Specificity , Motion (Physics)
8.
Phys Med Biol ; 68(4)2023 02 06.
Article in English | MEDLINE | ID: mdl-36638543

ABSTRACT

Objective. Dynamic cone-beam CT (CBCT) imaging is highly desired in image-guided radiation therapy to provide volumetric images with high spatial and temporal resolutions to enable applications including tumor motion tracking/prediction and intra-delivery dose calculation/accumulation. However, dynamic CBCT reconstruction is a substantially challenging spatiotemporal inverse problem, due to the extremely limited projection sample available for each CBCT reconstruction (one projection for one CBCT volume).Approach. We developed a simultaneous spatial and temporal implicit neural representation (STINR) method for dynamic CBCT reconstruction. STINR mapped the unknown image and the evolution of its motion into spatial and temporal multi-layer perceptrons (MLPs), and iteratively optimized the neuron weightings of the MLPs via acquired projections to represent the dynamic CBCT series. In addition to the MLPs, we also introduced prior knowledge, in the form of principal component analysis (PCA)-based patient-specific motion models, to reduce the complexity of the temporal mapping to address the ill-conditioned dynamic CBCT reconstruction problem. We used the extended-cardiac-torso (XCAT) phantom and a patient 4D-CBCT dataset to simulate different lung motion scenarios to evaluate STINR. The scenarios contain motion variations including motion baseline shifts, motion amplitude/frequency variations, and motion non-periodicity. The XCAT scenarios also contain inter-scan anatomical variations including tumor shrinkage and tumor position change.Main results. STINR shows consistently higher image reconstruction and motion tracking accuracy than a traditional PCA-based method and a polynomial-fitting-based neural representation method. STINR tracks the lung target to an average center-of-mass error of 1-2 mm, with corresponding relative errors of reconstructed dynamic CBCTs around 10%.Significance. STINR offers a general framework allowing accurate dynamic CBCT reconstruction for image-guided radiotherapy. It is a one-shot learning method that does not rely on pre-training and is not susceptible to generalizability issues. It also allows natural super-resolution. It can be readily applied to other imaging modalities as well.


Subject(s)
Lung Neoplasms , Lung , Humans , Motion (Physics) , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/radiotherapy , Phantoms, Imaging , Cone-Beam Computed Tomography/methods , Algorithms , Image Processing, Computer-Assisted/methods , Four-Dimensional Computed Tomography/methods
9.
Int J Soc Robot ; 15(4): 661-678, 2023.
Article in English | MEDLINE | ID: mdl-34249182

ABSTRACT

In order to navigate safely and effectively with humans in close proximity, robots must be capable of predicting the future motions of humans. This study first consolidates human studies in motion, intention, and preference into a discretized human model that can readily be used in robotics decision making algorithms. Cooperative Markov Decision Process (Co-MDP), a novel framework that improves upon Multiagent MDPs, is then proposed for enabling socially aware robot obstacle avoidance. Utilizing the consolidated and discretized human model, Co-MDP allows the system to (1) approximate rational human behavior and intention, (2) generate socially aware robotic obstacle avoidance behavior, and (3) remain robust to the uncertainty of human intention and motion variance. Simulations of a human-robot co-populated environment verify Co-MDP as a feasible obstacle avoidance algorithm. In addition, the anthropomorphic behavior of Co-MDP was assessed and confirmed with a human-in-the-loop experiment. Results reveal that participants could not directly differentiate agents controlled by human operators from those controlled by Co-MDP, and the reported confidence of their choices indicates that participants' judgments were backed by behavioral evidence rather than random guesses. Thus, the main contributions of this paper are: the consolidation of past human studies of rational behavior and intention into a simple, discretized model; the development of Co-MDP, a robotic decision framework that can utilize this human model and maximize the joint utility between the human and robot; and an experimental design for evaluating the human acceptance of obstacle avoidance algorithms.
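
Co-MDP builds on standard Markov decision process machinery. As background only, the sketch below runs plain value iteration on a toy grid world in which one cell is penalized because a (discretized) human model predicts the person will occupy it. The grid size, rewards, and single-agent simplification are assumptions; the paper's cooperative multi-agent formulation and joint utility are not reproduced.

```python
import numpy as np

H, W = 5, 5
goal, human_cell = (4, 4), (2, 2)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
gamma, n_iters = 0.95, 200

def reward(state):
    if state == goal:
        return 10.0
    if state == human_cell:
        return -5.0        # discourage intruding on the predicted human position
    return -0.1            # small step cost encourages short paths

V = np.zeros((H, W))
for _ in range(n_iters):
    V_next = np.empty_like(V)
    for i in range(H):
        for j in range(W):
            q_values = []
            for di, dj in actions:
                ni = min(max(i + di, 0), H - 1)   # bump into walls instead of leaving the grid
                nj = min(max(j + dj, 0), W - 1)
                q_values.append(reward((ni, nj)) + gamma * V[ni, nj])
            V_next[i, j] = max(q_values)
    V = V_next

# greedy policy: at each cell, pick the action with the highest one-step lookahead value
```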

10.
Med Phys ; 50(2): 993-999, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36427355

ABSTRACT

PURPOSE: To quantitatively evaluate the achievable performance of volumetric imaging based on lung motion modeling by principal component analysis (PCA). METHODS: In volumetric imaging based on PCA, internal deformation was represented as a linear combination of the eigenvectors derived by PCA of the deformation vector fields evaluated from patient-specific four-dimensional computed tomography (4DCT) datasets. The volumetric image was synthesized by warping the reference CT image with a deformation vector field evaluated using optimal principal component coefficients (PCs). Larger PCs were hypothesized to reproduce deformations larger than those included in the original 4DCT dataset. To evaluate the reproducibility of PCA-reconstructed volumetric images synthesized to be as close to the ground truth as possible, the mean absolute error (MAE), structural similarity index measure (SSIM) and discrepancy of the diaphragm position were evaluated using 22 4DCT datasets from nine patients. RESULTS: Mean MAE and SSIM values for the PCA-reconstructed volumetric images were approximately 80 HU and 0.88, respectively, regardless of the respiratory phase. In most test cases, including those whose motion range exceeded that of the modeling data, the positional error of the diaphragm was less than 5 mm. The results suggested that large deformations not included in the modeling 4DCT dataset could be reproduced. Furthermore, since the first PC correlated with the displacement of the diaphragm position, the first eigenvector became the dominant factor representing the respiration-associated deformations. However, other PCs did not necessarily change with the same trend as the first PC, and no correlation was observed between the coefficients. Hence, randomly allocating or sampling these PCs over expanded ranges may be a reasonable way to generate an augmented dataset with various deformations. CONCLUSIONS: Image synthesis accuracy comparable to that of previous research was demonstrated using clinical data. These results indicate the potential of PCA-based volumetric imaging for clinical applications.


Subject(s)
Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Principal Component Analysis , Reproducibility of Results , Motion (Physics) , Diagnostic Imaging , Respiration , Four-Dimensional Computed Tomography/methods
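
The PCA motion model described above can be prototyped in a few lines of NumPy: stack the per-phase deformation vector fields (DVFs) as rows, take the principal components of their variation about the mean, and synthesize a new deformation by choosing principal component coefficients, here deliberately larger than any observed value. The array shapes and random placeholder data are assumptions; real DVFs would come from deformable registration of the 4D-CT phases.

```python
import numpy as np

rng = np.random.default_rng(0)
dvfs = rng.normal(size=(10, 3, 8, 8, 8))      # placeholder: (n_phases, 3, Z, Y, X) DVFs

X = dvfs.reshape(dvfs.shape[0], -1)           # one row per breathing phase
mean_dvf = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean_dvf, full_matrices=False)
eigenvectors = Vt[:3]                         # first three principal motion modes
coeffs = (X - mean_dvf) @ eigenvectors.T      # per-phase principal component coefficients

# synthesize a deformation larger than any in the modeling data: scale the first PC
new_coeffs = np.array([1.5 * coeffs[:, 0].max(), 0.0, 0.0])
new_dvf = (mean_dvf + new_coeffs @ eigenvectors).reshape(dvfs.shape[1:])
```
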
11.
J Shanghai Jiaotong Univ Sci ; : 1-10, 2022 Nov 12.
Article in English | MEDLINE | ID: mdl-36406811

ABSTRACT

Lung image registration plays an important role in lung analysis applications, such as respiratory motion modeling. Unsupervised learning-based image registration methods, which can compute the deformation without requiring supervision, attract much attention. However, they have two drawbacks: they do not handle the problem of limited data and do not guarantee diffeomorphic (topology-preserving) properties, especially when large deformations exist in lung scans. In this paper, we present an unsupervised few-shot learning-based diffeomorphic lung image registration method, namely Dlung. We employ fine-tuning techniques to solve the problem of limited data and apply the scaling and squaring method to accomplish diffeomorphic registration. Furthermore, atlas-based registration on spatio-temporal (4D) images is performed and thoroughly compared with baseline methods. Dlung achieves the highest accuracy with diffeomorphic properties. It constructs accurate and fast respiratory motion models with limited data. This research extends our knowledge of respiratory motion modeling.
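
The scaling-and-squaring step mentioned above integrates a stationary velocity field into a diffeomorphic (topology-preserving) deformation by halving the field N times and then composing the resulting small displacement with itself N times. A minimal 2D NumPy/SciPy version might look like the sketch below; the function name, step count, and random test field are assumptions, and practical implementations run this inside the network on 3D fields.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def scaling_and_squaring(velocity, n_steps=6):
    """Integrate a stationary 2D velocity field (2, H, W) into a displacement field."""
    disp = velocity / (2 ** n_steps)              # start from a very small displacement
    h, w = velocity.shape[1:]
    grid = np.mgrid[0:h, 0:w].astype(float)       # identity coordinates, shape (2, H, W)
    for _ in range(n_steps):
        coords = grid + disp                      # where each point currently maps to
        # compose the field with itself: disp(x) <- disp(x) + disp(x + disp(x))
        warped = np.stack([map_coordinates(disp[c], coords, order=1, mode="nearest")
                           for c in range(2)])
        disp = disp + warped
    return disp

# toy usage on a smooth random velocity field
velocity = np.random.default_rng(0).normal(scale=0.5, size=(2, 32, 32))
displacement = scaling_and_squaring(velocity)
```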

12.
Med Image Anal ; 74: 102250, 2021 12.
Article in English | MEDLINE | ID: mdl-34601453

ABSTRACT

Shape and location organ variability induced by respiration constitutes one of the main challenges during dose delivery in radiotherapy. Providing up-to-date volumetric information during treatment can improve tumor tracking, thereby increasing treatment efficiency and reducing damage to healthy tissue. We propose a novel probabilistic model to address the problem of volumetric estimation with a scalable predictive horizon from image-based surrogates during radiotherapy treatments, thus enabling out-of-plane tracking of targets. This problem is formulated as a conditional learning task, where the predictive variables are the 2D surrogate images and a pre-operative static 3D volume. The model learns a distribution of realistic motion fields over a population dataset. Simultaneously, a seq-2-seq inspired temporal mechanism acts over the surrogate images, yielding extrapolated-in-time representations. The phase-specific motion distributions are associated with the predicted temporal representations, allowing the recovery of dense organ deformation at multiple time points. Due to its generative nature, this model enables uncertainty estimation by sampling the latent space multiple times. Furthermore, it can be readily personalized to a new subject via fine-tuning, and does not require inter-subject correspondences. The proposed model was evaluated on free-breathing 4D MRI and ultrasound datasets from 25 healthy volunteers, as well as on 11 cancer patients. A navigator-based data augmentation strategy was used during the slice reordering process to increase model robustness against inter-cycle variability. The patient data was used as a hold-out test set. Our approach yields volumetric prediction from image surrogates with a mean error of 1.67 ± 1.68 mm and 2.17 ± 0.82 mm in unseen cases of the patient MRI and US datasets, respectively. Moreover, model personalization yields a mean landmark error of 1.4 ± 1.1 mm compared to ground truth annotations in the volunteer MRI dataset, with statistically significant improvements over the state of the art.


Subject(s)
Radiotherapy, Image-Guided , Humans , Magnetic Resonance Imaging , Models, Statistical , Respiration , Ultrasonography
13.
Phys Med Biol ; 66(8)2021 04 12.
Article in English | MEDLINE | ID: mdl-33725676

ABSTRACT

Abdominal organ motions introduce geometric uncertainties to gastrointestinal radiotherapy. This study investigated slow drifting motion induced by changes of internal anatomic organ arrangements using a 3D radial MRI sequence with a scan length of 20 min. Breathing motion and cyclic GI motion were first removed through multi-temporal resolution image reconstruction. Slow drifting motion analysis was performed using an image time series consisting of 72 image volumes with a temporal sampling interval of 17 s. B-spline deformable registration was performed to align image volumes of the time series to a reference volume. The resulting deformation fields were used for motion velocity evaluation and patient-specific motion model construction through principal component analysis (PCA). Geometric uncertainties introduced by slow drifting motion were assessed by Hausdorff distances between unions of organs at risk (OARs) at different motion states and reference OAR contours, as well as by probabilistic distributions of OARs predicted using the PCA model. Thirteen examinations from 11 patients were included in this study. The averaged motion velocities ranged from 0.8 to 1.9 mm/min, 0.7 to 1.6 mm/min, 0.6 to 2.0 mm/min and 0.7 to 1.4 mm/min for the small bowel, colon, duodenum and stomach, respectively; the averaged Hausdorff distances were 5.6 mm, 5.3 mm, 5.1 mm and 4.6 mm. On average, a margin larger than 4.5 mm was needed to cover a space with an OAR occupancy probability higher than 55%. Temporal variations of geometric uncertainties were evaluated by comparing across four 5 min sub-scans extracted from the full scan. Standard deviations of Hausdorff distances across sub-scans were less than 1 mm for most examinations, indicating stability of relative margin estimates from separate time windows. These results suggested that slow drifting motion of GI organs is significant and that geometric uncertainties introduced by such motion should be accounted for during radiotherapy planning and delivery.


Subject(s)
Magnetic Resonance Imaging , Respiration , Humans , Image Processing, Computer-Assisted , Motion (Physics) , Organs at Risk
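
The geometric-uncertainty metric quoted above, the Hausdorff distance between an organ-at-risk contour at a drifted motion state and its reference contour, can be computed directly with SciPy. The short sketch below uses hypothetical point clouds as stand-ins for delineated OAR surfaces.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# hypothetical OAR surface points (mm): a reference contour and a slowly drifted state
reference = np.random.default_rng(1).normal(size=(200, 3)) * 20.0
drifted = reference + np.array([0.0, 3.5, 1.0])   # a drift of a few millimetres

forward = directed_hausdorff(drifted, reference)[0]
backward = directed_hausdorff(reference, drifted)[0]
hausdorff_mm = max(forward, backward)             # symmetric Hausdorff distance
```
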
14.
Med Phys ; 48(4): 1823-1831, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33550622

ABSTRACT

PURPOSE: To quantify the use of anterior torso skin surface position measurement as a breathing surrogate. METHODS: Fourteen patients were scanned 25 times in alternating directions using a free-breathing low-mA fast helical CT protocol. Simultaneously, an abdominal pneumatic bellows was used as a real-time breathing surrogate. The imaged diaphragm dome position was used as a gold standard surrogate, characterized by localizing the most superior points of the diaphragm dome in each lung. These positions were correlated against the bellows signal acquired at the corresponding scan times. The bellows system has been shown to have a slow linear drift, and the bellows-to-CT synchronization process had a small uncertainty, so the drift and time offset were determined by maximizing the correlation coefficient between the craniocaudal diaphragm position and the drift-corrected bellows signal. The corresponding fit was used to model the real-time diaphragm position. To estimate the effectiveness of skin surface positions as surrogates, the anterior torso surface position was measured from the CT scans and correlated against the diaphragm position model. The residual error was defined as the root-mean-square correlation residual with the breathing amplitude normalized to the 5th to 95th breathing amplitude percentiles. The fit residual errors were analyzed over the surface for the fourteen studied patients and reported as percentages of the 5th to 95th percentile ranges. RESULTS: A strong correlation was measured between the diaphragm motion and the abdominal bellows signal with an average residual error of 9.21% and standard deviation of 3.77%. In contrast, the correlations between the diaphragm position model and patient surface positions varied throughout the torso and from patient to patient. However, a consistently high correlation was found near the abdomen for each patient, and the average minimum residual error relating the skin surface to the diaphragm was 11.8% with a standard deviation of 4.61%. CONCLUSIONS: The thoracic patient surface was found to be an accurate surrogate, but the accuracy varied across the surface sufficiently that care would need to be taken to use the surface as an accurate and reliable surrogate. Future studies will use surface imaging to determine surface patch algorithms that utilize the entire chest as well as thoracic and abdominal breathing relationships.


Subject(s)
Lung Neoplasms , Tomography, Spiral Computed , Humans , Lung/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Motion (Physics) , Movement , Respiration , Tomography, X-Ray Computed
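
The residual-error metric used above is the root-mean-square residual of a surrogate-to-diaphragm correlation model, expressed as a percentage of the 5th to 95th percentile breathing amplitude. A compact NumPy version of that calculation on synthetic traces is sketched below; the signals and the simple linear surrogate model are assumptions, not the authors' drift-corrected fitting procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical paired samples: diaphragm dome position (mm) and skin surface height (mm)
diaphragm = 10.0 * np.sin(np.linspace(0.0, 20.0, 400)) + rng.normal(0.0, 0.5, 400)
surface = 0.6 * diaphragm + 2.0 + rng.normal(0.0, 0.4, 400)

# linear surrogate model: diaphragm ~ a * surface + b
a, b = np.polyfit(surface, diaphragm, 1)
residual = diaphragm - (a * surface + b)

# RMS residual as a percentage of the 5th-95th percentile breathing amplitude
amplitude = np.percentile(diaphragm, 95) - np.percentile(diaphragm, 5)
residual_pct = 100.0 * np.sqrt(np.mean(residual ** 2)) / amplitude
```
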
15.
Med Phys ; 48(2): 597-604, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32990373

ABSTRACT

PURPOSE: To develop a method for continuous online dose accumulation during irradiation in MRI-guided radiation therapy (MRgRT) and to demonstrate its application in evaluating the impact of internal organ motion on cumulative dose. METHODS: An intensity-modulated radiation therapy (IMRT) treatment plan is partitioned into its unique apertures. Dose for each planned aperture is calculated using Monte Carlo dose simulation on each phase of a four-dimensional computed tomography (4D-CT) dataset. Deformable image registration is then performed both (a) between each frame of a cine-MRI acquisition obtained during treatment and a reference frame, and (b) between each volume of the 4D-CT phases and a reference phase. These registrations are used to associate each cine image with a 4D-CT phase. Additionally, for each 4D-CT phase, the deformation vector field (DVF) is used to warp the pre-calculated dose volumes per aperture onto the reference CT dataset. To estimate the dose volume delivered during each frame of the cine-MRI acquisition, we retrieve the pre-calculated warped dose volume for the delivered aperture on the associated 4D-CT phase and adjust it by a rigid translation to account for baseline drift and instances where motion on the cine image exceeds the amplitude observed between 4D-CT phases. RESULTS: The proposed dose accumulation method is retrospectively applied to a liver cancer case previously treated on an MRgRT platform. Cumulative dose estimated for free-breathing and respiration-gated delivery is compared against dose calculated on static anatomy. In this sample case, the target minimum dose and D98 varied by as much as 5% and 7%, respectively. CONCLUSION: We demonstrate a technique suitable for continuous online dose accumulation during MRgRT. In contrast to other approaches, dose is pre-calculated per aperture and phase and then retrieved based on a mapping scheme between cine MRI and 4D-CT datasets, aiming at reducing the computational burden for potential real-time applications.


Subject(s)
Lung Neoplasms , Radiotherapy Planning, Computer-Assisted , Four-Dimensional Computed Tomography , Humans , Magnetic Resonance Imaging , Motion (Physics) , Organ Motion , Respiration , Retrospective Studies
16.
Neural Netw ; 132: 521-531, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33039789

ABSTRACT

Recently, considerable research has focused on personal assistant robots, and robots capable of rich human-like communication are expected. Among humans, non-verbal elements contribute to effective and dynamic communication. However, people use a wide range of diverse gestures, and a robot capable of expressing various human gestures has not been realized. In this study, we address human behavior modeling during interaction using a deep generative model. In the proposed method, to consider interaction motion, three factors, i.e., interaction intensity, time evolution, and time resolution, are embedded in the network structure. Subjective evaluation results suggest that the proposed method can generate high-quality human motions.


Subject(s)
Gestures , Neural Networks, Computer , Robotics/methods , Computer Simulation , Humans
17.
Sensors (Basel) ; 20(14)2020 Jul 16.
Article in English | MEDLINE | ID: mdl-32708706

ABSTRACT

To address the poor accuracy and difficult verification of maneuvering models caused by wind, waves and sea surface currents in actual sea conditions, a novel sea-trial correction method for ship maneuvering is proposed. The wind and wave drift forces are calculated according to the measurement data. Based on the steady-turning hypothesis and a pattern search algorithm, the adjustment parameters for wind, waves and sea surface currents were solved; the drift distances and drift velocities were calculated; and the track and velocity data of the experiment were corrected. The hydrodynamic coefficients were identified from the test data and the ship maneuvering motion model was established. The results show that the corrected data were more accurate than the log data, the hydrodynamic coefficients could be fully identified, the prediction accuracies of the advance and tactical diameter were 93% and 97%, and the prediction of the maneuvering model was accurate. Numerical cases verify the correction method and the full-scale maneuvering model. The turning-circle advance and tactical diameter satisfy the ship maneuverability standards of the International Maritime Organization (IMO).

18.
Quant Imaging Med Surg ; 10(2): 432-450, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32190569

ABSTRACT

BACKGROUND: The purpose of this study is to improve on-board volumetric cine magnetic resonance imaging (VC-MRI) using multi-slice undersampled cine images reconstructed using spatio-temporal k-space data, patient prior 4D-MRI, motion modeling (MM) and free-form deformation (FD) for real-time 3D target verification of liver and lung radiotherapy. METHODS: A previous method was developed to generate on-board VC-MRI by deforming prior MRI images based on a MM and a single-slice on-board 2D-cine image. The two major improvements over the previous method are: (I) FD was introduced to estimate VC-MRI to correct for inaccuracies in the MM; (II) multi-slice undersampled 2D-cine images reconstructed by a k-t SLR reconstruction method were used for FD-based estimation to maintain the temporal resolution while improving the accuracy of VC-MRI. The method was evaluated using XCAT lung simulation and four liver patients' data. RESULTS: For XCAT, VC-MRI estimated using ten undersampled sagittal 2D-cine MRIs resulted in volume percent difference/volume dice coefficient/center-of-mass shift of 9.77%±3.71%/0.95±0.02/0.75±0.26 mm among all scenarios based on estimation with MM and FD. Adding FD optimization improved VC-MRI accuracy substantially for scenarios with anatomical changes. For patient data, the mean tumor tracking errors were 0.64±0.51, 0.62±0.47 and 0.24±0.24 mm along the superior-inferior (SI), anterior-posterior (AP) and lateral directions, respectively, across all liver patients. CONCLUSIONS: It is feasible to improve VC-MRI accuracy while maintaining high temporal resolution using FD and multi-slice undersampled 2D cine images for real-time 3D target verification.

19.
Quant Imaging Med Surg ; 9(7): 1337-1349, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31448218

ABSTRACT

BACKGROUND: Pre-treatment liver tumor localization remains a challenging task for radiation therapy, mostly due to the limited tumor contrast against normal liver tissues, and the respiration-induced liver tumor motion. Recently, we developed a biomechanical modeling-based, deformation-driven cone-beam CT estimation technique (Bio-CBCT), which achieved substantially improved accuracy on low-contrast liver tumor localization. However, the accuracy of Bio-CBCT is still affected by the limited tissue contrast around the caudal liver boundary, which reduces the accuracy of the boundary condition that is fed into the biomechanical modeling process. In this study, we developed a motion modeling and biomechanical modeling-guided CBCT estimation technique (MM-Bio-CBCT), to further improve the liver tumor localization accuracy by incorporating a motion model into the CBCT estimation process. METHODS: MM-Bio-CBCT estimates new CBCT images through deforming a prior high-quality CT or CBCT volume. The deformation vector field (DVF) is solved by iteratively matching the digitally-reconstructed-radiographs (DRRs) of the deformed prior image to the acquired 2D cone-beam projections. Using the same solved DVF, the liver tumor volume contoured on the prior image can be transferred onto the new CBCT image for automatic tumor localization. To maximize the accuracy of the solved DVF, MM-Bio-CBCT employs two strategies for additional DVF optimization: (I) prior-knowledge-guided liver boundary motion modeling with motion patterns extracted from a prior 4D imaging set like 4D-CTs/4D-CBCTs, to improve the liver boundary DVF accuracy; and (II) finite-element-analysis-based biomechanical modeling of the liver volume to improve the intra-liver DVF accuracy. We evaluated the accuracy of MM-Bio-CBCT on both the digital extended-cardiac-torso (XCAT) phantom images and real liver patient images. The liver tumor localization accuracy of MM-Bio-CBCT was evaluated and compared with that of the purely intensity-driven 2D-3D deformation technique, the 2D-3D deformation technique with motion modeling, and the Bio-CBCT technique. Metrics including the DICE coefficient and the center-of-mass error (COME) were assessed for quantitative evaluation. RESULTS: Using 20 limited-view projections for CBCT estimation, the average (± SD) DICE coefficients between the estimated and the 'gold-standard' liver tumors of the XCAT study were 0.57±0.31, 0.78±0.26, 0.83±0.21, and 0.89±0.11 for the 2D-3D deformation, 2D-3D deformation with motion modeling, Bio-CBCT and MM-Bio-CBCT techniques, respectively. Using 20 projections for estimation, the patient study yielded corresponding average DICE results of 0.63±0.21, 0.73±0.13, 0.78±0.12 and 0.83±0.09. MM-Bio-CBCT localized the liver tumor to an average COME of ~2 mm for both the XCAT and the liver patient studies. CONCLUSIONS: Compared to Bio-CBCT, MM-Bio-CBCT further improves the accuracy of liver tumor localization. MM-Bio-CBCT can potentially be used towards pre-treatment liver tumor localization and intra-treatment liver tumor location verification to achieve substantial radiotherapy margin reduction.
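
Both evaluation metrics quoted above, the DICE coefficient and the center-of-mass error (COME), are straightforward to compute from binary tumor masks. The small NumPy helper below is a generic sketch with an assumed voxel-spacing argument, not the authors' evaluation code.

```python
import numpy as np

def dice_and_come(est_mask, ref_mask, voxel_mm=(1.0, 1.0, 1.0)):
    """Return the DICE coefficient and center-of-mass error (mm) between two binary masks."""
    est, ref = est_mask.astype(bool), ref_mask.astype(bool)
    dice = 2.0 * np.logical_and(est, ref).sum() / (est.sum() + ref.sum())
    com_est = np.array(np.nonzero(est)).mean(axis=1) * np.asarray(voxel_mm)
    com_ref = np.array(np.nonzero(ref)).mean(axis=1) * np.asarray(voxel_mm)
    return dice, float(np.linalg.norm(com_est - com_ref))

# toy usage: two overlapping spheres on a 32^3 grid with 2 x 1 x 1 mm voxels
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32]
ref = (zz - 16) ** 2 + (yy - 16) ** 2 + (xx - 16) ** 2 < 36
est = (zz - 16) ** 2 + (yy - 18) ** 2 + (xx - 16) ** 2 < 36
print(dice_and_come(est, ref, voxel_mm=(2.0, 1.0, 1.0)))
```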

20.
Phys Med ; 63: 25-34, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31221405

ABSTRACT

We present a technique for continuous generation of volumetric images during SBRT using periodic kV imaging and an external respiratory surrogate signal to drive a patient-specific PCA motion model. Using the on-board imager, kV radiographs are acquired every 3 s and used to fit the parameters of a motion model so that it matches observed changes in internal patient anatomy. A multi-dimensional correlation model is established between the motion model parameters and the external surrogate position and velocity, enabling volumetric image reconstruction between kV imaging time points. Performance of the algorithm was evaluated using 10 realistic eXtended CArdiac-Torso (XCAT) digital phantoms including 3D anatomical respiratory deformation programmed with 3D tumor positions measured with orthogonal kV imaging of implanted fiducial gold markers. The clinically measured ground truth 3D tumor positions provided a dataset with realistic breathing irregularities, and the combination of periodic on-board kV imaging with recorded external respiratory surrogate signal was used for correlation modeling to account for any changes in internal-external correlation. The three-dimensional tumor positions are reconstructed with an average root mean square error (RMSE) of 1.47 mm, and an average 95th percentile 3D positional error of 2.80 mm compared with the clinically measured ground truth 3D tumor positions. This technique enables continuous 3D anatomical image generation based on periodic kV imaging of internal anatomy without the additional dose of continuous kV imaging. The 3D anatomical images produced using this method can be used for treatment verification and delivered dose computation in the presence of irregular respiratory motion.


Subject(s)
Four-Dimensional Computed Tomography/instrumentation , Phantoms, Imaging , Radiosurgery , Radiotherapy Planning, Computer-Assisted/instrumentation , Respiration