Results 1 - 12 of 12
1.
Phys Med Biol ; 69(8)2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38382107

ABSTRACT

Objective. To improve respiratory gating accuracy and radiation treatment throughput, we developed a generalized model based on a deep neural network (DNN) for predicting any given patient's respiratory motion. Approach. Our model uses long short-term memory (LSTM) based on a recurrent neural network (RNN) and improves upon common techniques. The first improvement is that the data input is not a one-dimensional sequence but two-dimensional block data. This shortens the input sequence length, reducing computation time. Second, the output is not a scalar but a sequence prediction. This increases the amount of available data, allowing improved prediction accuracy. For training and evaluation of our model, 434 sets of real-time position management data were retrospectively collected from clinical studies. The data were separated in a ratio of 4:1, with the larger set used for training models and the remaining set used for testing. We measured the accuracy of respiratory signal prediction and amplitude-based gating with prediction windows equaling 133, 333, and 533 ms. This new model was compared with the original LSTM and a non-recurrent DNN model. Main results. The mean absolute errors with the prediction window at 133, 333, and 533 ms were 0.036, 0.084, and 0.119 with our model; 0.049, 0.14, and 0.246 with the original LSTM-based model; and 0.041, 0.119, and 0.16 with the non-recurrent DNN model, respectively. The computation times were 0.66 ms with our model, 0.63 ms with the original LSTM-based model, and 1.60 ms with the non-recurrent DNN model. The accuracies of amplitude-based gating with the same prediction window settings and a duty cycle of approximately 50% were 98.3%, 95.8%, and 92.7% with our model; 97.6%, 93.9%, and 87.2% with the original LSTM-based model; and 97.9%, 94.3%, and 89.5% with the non-recurrent DNN model, respectively. Significance. Our RNN algorithm for respiratory signal prediction successfully estimated tumor positions.
We believe it will be useful in respiratory signal prediction technology.
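The block-input idea described in this abstract can be illustrated with a short sketch: reshaping a one-dimensional respiratory trace into two-dimensional blocks shortens the sequence a recurrent model must step through. This is a minimal numpy illustration of the data layout only, not the authors' implementation; all names and sizes are illustrative.

```python
import numpy as np

def to_blocks(signal, block):
    # reshape a 1-D respiratory trace into 2-D block data: each recurrent
    # timestep then sees `block` consecutive samples at once, so the
    # sequence length the LSTM steps through shrinks by that factor
    n = len(signal) // block
    return signal[:n * block].reshape(n, block)

# toy trace: 1200 samples of a sinusoidal breathing-like signal
sig = np.sin(np.linspace(0.0, 8.0 * np.pi, 1200))
blocks = to_blocks(sig, 30)  # 40 block-steps instead of 1200 scalar steps
```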


Subject(s)
Neoplasms; Neural Networks, Computer; Humans; Retrospective Studies; Algorithms; Respiratory Rate; Neoplasms/radiotherapy
2.
Phys Med Biol ; 69(2)2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38091621

ABSTRACT

Objective. The prostate moves in accordance with the movement of surrounding organs, and tumor position can change by ≥3 mm during radiotherapy. Given the difficulty of visualizing the prostate fluoroscopically, fiducial markers are generally implanted into the prostate to monitor its motion during treatment. Recently, internal motion guidance methods for the prostate using a 99.5% gold/0.5% iron flexible notched-wire fiducial marker (Gold Anchor®, Naslund Medical AB, Huddinge, Sweden), which requires a 22-gauge needle, have been used. However, because the notched wire can retain its linear shape, acquire a spiral shape, or roll into an irregular ball, detecting it on fluoroscopic images in real time incurs higher computation costs. Approach. We developed a fiducial tracking algorithm to achieve real-time computation. The marker is detected on the first image frame using a shape filter that employs inter-class variance for the marker likelihood calculated by the filter, exploiting the large difference in density between the marker and its surroundings. From the second frame onward, the marker is tracked by adding to the shape filter the similarity to a template cropped from the area around the marker position detected in the first frame. We retrospectively evaluated the algorithm's marker tracking accuracy for ten prostate cases, analyzing two fractions in each case. Main results. Tracking positional accuracy averaged over all patients was 0.13 ± 0.04 mm (mean ± standard deviation, Euclidean distance) and 0.25 ± 0.09 mm (95th percentile). Computation time was 2.82 ± 0.20 ms/frame averaged over all frames. Significance. Our algorithm successfully and stably tracked irregularly shaped markers in real time.
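The template-similarity step described in the approach is commonly implemented as zero-mean normalized cross-correlation over a small search window around the last known position. The sketch below is a generic illustration of that standard technique on toy data, not the authors' shape-filter code; all names and sizes are assumptions.

```python
import numpy as np

def ncc(patch, template):
    # zero-mean normalized cross-correlation between equal-sized patches
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def track(frame, template, prev_pos, search=5):
    # scan a small window around the previous marker position and return
    # the top-left corner with the highest similarity to the template
    h, w = template.shape
    best_score, best_pos = -2.0, prev_pos
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = prev_pos[0] + dy, prev_pos[1] + dx
            score = ncc(frame[y:y + h, x:x + w], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos

# toy frame with a small dense "marker"; template cropped around it
frame = np.zeros((64, 64))
frame[22:24, 32:34] = 1.0
template = frame[20:26, 30:36].copy()
```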


Subject(s)
Prostatic Neoplasms; Radiotherapy, Image-Guided; Male; Humans; Fiducial Markers; Prostate; Gold; Retrospective Studies; X-Rays; Radiotherapy Planning, Computer-Assisted; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/pathology; Radiotherapy, Image-Guided/methods
3.
Phys Eng Sci Med ; 46(4): 1563-1572, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37639109

ABSTRACT

We sought to accelerate 2D/3D image registration computation time by using image synthesis with a deep neural network (DNN) to generate digitally reconstructed radiographic (DRR) images from X-ray flat panel detector (FPD) images, and we explored the feasibility of using our DNN in a patient setup verification application. Images of the prostate and of the head and neck (H&N) regions were acquired by two oblique X-ray fluoroscopic units and the treatment planning CT. The DNN was designed to generate DRR images from the FPD image data. We evaluated the quality of the synthesized DRR images against the ground-truth DRR images using the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Image registration accuracy and computation time were evaluated by comparing the 2D/3D image registration algorithm using DRR and FPD image data with that using DRR and synthesized DRR images. Mean PSNR values were 23.4 ± 3.7 dB and 24.1 ± 3.9 dB for the pelvic and H&N regions, respectively. Mean SSIM values for both cases were also similar (= 0.90). Image registration accuracy was degraded by a mean of 0.43 mm and 0.30°, which was clinically acceptable. Computation time was accelerated by a factor of 0.69. Our DNN successfully generated DRR images from FPD image data and improved 2D/3D image registration computation time by up to 37% on average.
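The PSNR metric used here to score the synthesized DRRs is straightforward to compute; a minimal sketch, assuming images scaled to a known data range (toy data, not the authors' evaluation code):

```python
import numpy as np

def psnr(img, ref, data_range=1.0):
    # peak signal-to-noise ratio in dB between an image and its ground truth
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# toy pair: a uniform 0.1 error over a zero image gives MSE = 0.01
ref = np.zeros((4, 4))
img = np.full((4, 4), 0.1)
```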


Subject(s)
Algorithms; Neural Networks, Computer; Male; Humans; Neck; Imaging, Three-Dimensional/methods; Head
4.
Phys Eng Sci Med ; 46(3): 1227-1237, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37349631

ABSTRACT

We developed a deep neural network (DNN) to generate X-ray flat panel detector (FPD) images from digitally reconstructed radiographic (DRR) images. FPD and treatment planning CT images were acquired from patients with prostate and head and neck (H&N) malignancies. The DNN parameters were optimized for FPD image synthesis. The features of the synthetic FPD images were evaluated against the corresponding ground-truth FPD images using mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM). The image quality of the synthetic FPD image was also compared with that of the DRR image to understand the performance of our DNN. For the prostate cases, the MAE of the synthetic FPD image improved (= 0.12 ± 0.02) on that of the input DRR image (= 0.35 ± 0.08). The synthetic FPD image showed higher PSNRs (= 16.81 ± 1.54 dB) than the DRR image (= 8.74 ± 1.56 dB), while SSIMs for both images (= 0.69) were almost the same. All metrics for the synthetic FPD images of the H&N cases were improved (MAE 0.08 ± 0.03, PSNR 19.40 ± 2.83 dB, and SSIM 0.80 ± 0.04) compared with those for the DRR image (MAE 0.48 ± 0.11, PSNR 5.74 ± 1.63 dB, and SSIM 0.52 ± 0.09). Our DNN successfully generated FPD images from DRR images. This technique would be useful for increasing throughput when images from two different modalities are compared by visual inspection.


Subject(s)
Head and Neck Neoplasms; Tomography, X-Ray Computed; Male; Humans; Tomography, X-Ray Computed/methods; Neural Networks, Computer; Signal-To-Noise Ratio; Fluoroscopy
5.
Sci Rep ; 13(1): 7448, 2023 05 08.
Article in English | MEDLINE | ID: mdl-37156901

ABSTRACT

To perform setup procedures incorporating both positional and dosimetric information, we developed a CT-CT rigid image registration algorithm utilizing water-equivalent pathlength (WEPL)-based image registration and compared the resulting dose distribution with those of two other algorithms, intensity-based image registration and target-based image registration, in prostate cancer radiotherapy using the carbon-ion pencil beam scanning technique. We used the carbon-ion therapy planning CT and the four weekly treatment CTs of 19 prostate cancer cases. Three CT-CT registration algorithms were used to register the treatment CTs to the planning CT. Intensity-based image registration uses CT voxel intensity information. Target-based image registration uses the target position on the treatment CTs to register it to that on the planning CT. WEPL-based image registration registers the treatment CTs to the planning CT using WEPL values. Initial dose distributions were calculated using the planning CT with lateral beam angles. The treatment plan parameters were optimized to administer the prescribed dose to the PTV on the planning CT. Weekly dose distributions using the three different algorithms were calculated by applying the treatment plan parameters to the weekly CT data. Dosimetric parameters, including the dose received by 95% of the clinical target volume (CTV-D95) and rectal volumes receiving >20 Gy (RBE) (V20), >30 Gy (RBE) (V30), and >40 Gy (RBE) (V40), were calculated. Statistical significance was assessed using the Wilcoxon signed-rank test. Interfractional CTV displacement over all patients was 6.0 ± 2.7 mm (19.3 mm maximum). WEPL differences between the planning CT and the treatment CTs were 1.2 ± 0.6 mm-H2O (<3.9 mm-H2O maximum), 1.7 ± 0.9 mm-H2O (<5.7 mm-H2O maximum), and 1.5 ± 0.7 mm-H2O (<3.6 mm-H2O maximum) with the intensity-based, target-based, and WEPL-based image registration, respectively.
For CTV coverage, the D95 values on the planning CT were >95% of the prescribed dose in all cases. The mean CTV-D95 values were 95.8 ± 11.5% and 98.8 ± 1.7% with the intensity-based and target-based image registration, respectively. The WEPL-based image registration achieved a CTV-D95 of 99.0 ± 0.4% and a rectal Dmax of 51.9 ± 1.9 Gy (RBE), compared with 49.4 ± 9.1 Gy (RBE) with intensity-based image registration and 52.2 ± 1.8 Gy (RBE) with target-based image registration. The WEPL-based image registration algorithm improved target coverage over the other algorithms and reduced rectal dose compared with the target-based image registration, even though the magnitude of the interfractional variation increased.
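The WEPL quantity underlying this registration algorithm is the integral of relative stopping power over the beam path. A toy sketch of that accumulation along one ray (the voxel column, step size, and gas RSP value are illustrative assumptions, not the authors' dose engine):

```python
import numpy as np

def wepl_along_ray(rsp_column, step_mm):
    # water-equivalent path length of a ray: integrate relative stopping
    # power (RSP, water = 1.0) over the traversed voxel column
    return float(np.sum(rsp_column) * step_mm)

# 50 mm of water-equivalent tissue containing a 10 mm gas pocket (RSP ~ 0);
# the gas barely contributes, so WEPL drops well below geometric depth
col = np.ones(50)
col[10:20] = 0.0
```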


Subject(s)
Heavy Ion Radiotherapy; Prostatic Neoplasms; Radiotherapy, Intensity-Modulated; Male; Humans; Prostate; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods; Prostatic Neoplasms/diagnostic imaging; Prostatic Neoplasms/radiotherapy; Prostatic Neoplasms/drug therapy; Radiotherapy, Intensity-Modulated/methods; Carbon/therapeutic use; Organs at Risk
6.
Phys Eng Sci Med ; 46(2): 659-668, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36944832

ABSTRACT

Because particle beam dose distribution is vulnerable to changes in bowel gas owing to its low density, we developed a deep neural network (DNN) for bowel gas segmentation on X-ray images. We used 6688 image datasets from 209 cases as training data, 736 image datasets from 23 cases as validation data, and 102 image datasets from 51 cases as test data (283 cases in total). For the training data, we prepared three types of digitally reconstructed radiographic (DRR) images (all-density, bone, and gas) by projecting the treatment planning CT image data. However, the real X-ray images acquired in the treatment room showed low contrast that interfered with manual delineation of bowel gas. Therefore, we used synthetic X-ray images converted from DRR images in addition to real X-ray images. We evaluated DNN segmentation accuracy for the synthetic X-ray images using Intersection over Union (IoU), recall, precision, and the Dice coefficient, which measured 0.708 ± 0.208, 0.832 ± 0.170, 0.799 ± 0.191, and 0.807 ± 0.178, respectively. The evaluation metrics for the real X-ray images were less accurate than those for the synthetic X-ray images (0.408 ± 0.237, 0.685 ± 0.326, 0.490 ± 0.272, and 0.534 ± 0.271, respectively). Computation time was 29.7 ± 1.3 ms/image. Our DNN appears useful for increasing treatment accuracy in particle beam therapy.
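The IoU and Dice metrics reported above can be computed directly from binary masks; a minimal sketch with toy masks (generic metric definitions, not the authors' evaluation code):

```python
import numpy as np

def iou_dice(pred, gt):
    # Intersection over Union and Dice coefficient between binary masks
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = inter / union if union else 1.0
    dice = 2.0 * inter / total if total else 1.0
    return float(iou), float(dice)

# toy masks: one of two predicted pixels overlaps the ground truth
pred = np.array([[True, True], [False, False]])
gt = np.array([[True, False], [False, False]])
iou, dice = iou_dice(pred, gt)
```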


Subject(s)
Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; X-Rays; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
7.
Phys Med ; 80: 151-158, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33189045

ABSTRACT

INTRODUCTION: Our markerless tumor tracking algorithm requires 4DCT data to train models. 4DCT cannot be used for markerless tracking for respiratory-gated treatment due to inaccuracies and a high radiation dose. We developed a deep neural network (DNN) to generate 4DCT from 3DCT data. METHODS: We used 2420 thoracic 4DCT datasets from 436 patients to train a DNN, designed to export 9 deformation vector fields (each field representing one-ninth of the respiratory cycle) from each CT dataset based on a 3D convolutional autoencoder with shortcut connections using deformable image registration. The 3DCT data at exhale were then transformed using the predicted deformation vector fields to obtain simulated 4DCT data. We compared markerless tracking accuracy between original and simulated 4DCT datasets for 20 patients. Our tracking algorithm used a machine learning approach with patient-specific model parameters. For the training stage, a pair of digitally reconstructed radiography images was generated using 4DCT for each patient. For the prediction stage, the tracking algorithm calculated tumor position using incoming fluoroscopic image data. RESULTS: Diaphragmatic displacements averaged over 40 cases for the original 4DCT were slightly larger (<1.3 mm) than those for the simulated 4DCT. Tracking positional errors (95th percentile of the absolute value of displacement, "simulated 4DCT" minus "original 4DCT") averaged over the 20 cases were 0.56 mm, 0.65 mm, and 0.96 mm in the X, Y, and Z directions, respectively. CONCLUSIONS: We developed a DNN to generate simulated 4DCT data that are useful for markerless tumor tracking when original 4DCT is not available. Using this DNN would accelerate markerless tumor tracking and increase treatment accuracy in thoracoabdominal treatment.
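Applying a predicted deformation vector field to the exhale 3DCT, as described, amounts to resampling the volume at displaced coordinates. A nearest-neighbour toy sketch of that warp (real pipelines would use trilinear interpolation; all shapes and values are illustrative, not the authors' implementation):

```python
import numpy as np

def warp_volume(vol, dvf):
    # resample a volume at coordinates displaced by a deformation vector
    # field (nearest-neighbour for brevity; clinical code would interpolate)
    idx = np.indices(vol.shape).astype(float)
    src = np.rint(idx + dvf).astype(int)
    for ax, size in enumerate(vol.shape):
        np.clip(src[ax], 0, size - 1, out=src[ax])  # clamp to volume bounds
    return vol[tuple(src)]

# toy 1x1x5 "volume" and a field that samples one voxel further along z,
# shifting the bright voxel from z=2 to z=1 in the output phase
vol = np.zeros((1, 1, 5))
vol[0, 0, 2] = 1.0
dvf = np.zeros((3, 1, 1, 5))
dvf[2] = 1.0
warped = warp_volume(vol, dvf)
```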


Subject(s)
Deep Learning; Lung Neoplasms; Neoplasms; Algorithms; Four-Dimensional Computed Tomography; Humans; Lung Neoplasms/diagnostic imaging; Neoplasms/diagnostic imaging; Neural Networks, Computer
8.
Phys Med Biol ; 65(8): 085014, 2020 04 23.
Article in English | MEDLINE | ID: mdl-32097899

ABSTRACT

To improve respiratory-gated radiotherapy accuracy, we developed a machine learning approach for markerless tumor tracking and evaluated it using lung cancer patient data. Digitally reconstructed radiography (DRR) datasets were generated using planning 4DCT data. Tumor positions were selected on respective DRR images to place the GTV center of gravity in the center of each DRR. DRR subimages around the tumor regions were cropped so that the subimage size was defined by tumor size. Training data were then classified into two groups: positive (including tumor) and negative (not including tumor) samples. Machine learning parameters were optimized by the extremely randomized tree method. For the tracking stage, a machine learning algorithm was generated to provide a tumor likelihood map using fluoroscopic images. Prior probability tumor positions were also calculated using the previous two frames. Tumor position was then estimated from the maximum probability over the tumor likelihood map combined with the prior probability tumor positions. We acquired treatment planning 4DCT images in eight patients. Digital fluoroscopic imaging systems on either side of the vertical irradiation port allowed fluoroscopic image acquisition during treatment delivery. Each fluoroscopic dataset was acquired at 15 frames per second. We evaluated the tracking accuracy and computation times. Tracking positional accuracy averaged over all patients was 1.03 ± 0.34 mm (mean ± standard deviation, Euclidean distance) and 1.76 ± 0.71 mm (95th percentile). Computation time was 28.66 ± 1.89 ms/frame averaged over all frames. Our markerless algorithm successfully estimated tumor position in real time.
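Combining a likelihood map with a motion prior from the previous two frames can be sketched as weighting the map by a Gaussian centred on a linearly extrapolated position. This is a generic illustration: the Gaussian form, its width, and the constant-velocity extrapolation are assumptions, not the authors' exact formulation.

```python
import numpy as np

def estimate_position(likelihood, prior_positions, sigma=3.0):
    # weight the ML tumor-likelihood map by a Gaussian prior centred on
    # the position extrapolated from the previous two frames, then take
    # the most probable pixel
    (y1, x1), (y2, x2) = prior_positions
    yp, xp = 2 * y2 - y1, 2 * x2 - x1  # constant-velocity extrapolation
    yy, xx = np.mgrid[0:likelihood.shape[0], 0:likelihood.shape[1]]
    prior = np.exp(-((yy - yp) ** 2 + (xx - xp) ** 2) / (2.0 * sigma ** 2))
    post = likelihood * prior
    return np.unravel_index(np.argmax(post), post.shape)

# toy map with two equally strong candidates; the motion prior favours
# the one consistent with recent motion toward (30, 30)
lik = np.zeros((40, 40))
lik[10, 10] = 1.0
lik[30, 30] = 1.0
pos = estimate_position(lik, ((28, 28), (29, 29)))
```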


Subject(s)
Fluoroscopy; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Machine Learning; Radiotherapy, Image-Guided/methods; Humans; Image Processing, Computer-Assisted; Likelihood Functions; Time Factors
9.
Phys Med ; 70: 196-205, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32045869

ABSTRACT

PURPOSE: We have developed a new method to track tumor position using fluoroscopic images, and evaluated it using hepatocellular carcinoma case data. METHODS: Our method consists of a training stage and a tracking stage. In the training stage, the model data for the positional relationship between the diaphragm and the tumor are calculated using four-dimensional computed tomography (4DCT) data. The diaphragm is detected along a straight line, which was chosen to avoid 4DCT artifacts. In the tracking stage, the tumor position on the fluoroscopic images is calculated by applying the model to the diaphragm. Using data from seven liver cases, we evaluated four metrics: diaphragm edge detection error, modeling error, patient setup error, and tumor tracking error. We measured tumor tracking error for 15 fluoroscopic sequences from these cases and recorded the computation time. RESULTS: The mean positional error in diaphragm tracking was 0.57 ± 0.62 mm. The mean positional error in tumor tracking in three-dimensional (3D) space was 0.63 ± 0.30 mm by modeling error, and 0.81-2.37 mm with 1-2 mm setup error. The mean positional error in tumor tracking in the fluoroscopy sequences was 1.30 ± 0.54 mm, and the mean computation time was 69.0 ± 4.6 ms and 23.2 ± 1.3 ms per frame for the training and tracking stages, respectively. CONCLUSIONS: Our markerless tracking method successfully estimated tumor positions. We believe our results will be useful in increasing treatment accuracy for liver cases.
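The diaphragm-tumor positional model described in the training stage can be illustrated as a least-squares fit over paired 4DCT measurements. This is a one-dimensional toy sketch under assumed linearity; the paper's model and 3D geometry are more involved, and all numbers are illustrative.

```python
import numpy as np

def fit_model(diaphragm, tumor):
    # least-squares linear model: tumor = a * diaphragm + b, trained
    # from paired positions measured across 4DCT respiratory phases
    A = np.vstack([diaphragm, np.ones_like(diaphragm)]).T
    (a, b), *_ = np.linalg.lstsq(A, tumor, rcond=None)
    return a, b

# toy data: tumor moves at 0.8x the diaphragm excursion plus an offset
d = np.array([0.0, 2.0, 4.0, 6.0])
t = 0.8 * d + 1.0
a, b = fit_model(d, t)
```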


Subject(s)
Carcinoma, Hepatocellular/diagnostic imaging; Fluoroscopy/methods; Four-Dimensional Computed Tomography/methods; Liver Neoplasms/diagnostic imaging; Radiotherapy, Image-Guided/methods; Algorithms; Artifacts; Biomarkers, Tumor/metabolism; Carcinoma, Hepatocellular/radiotherapy; Diaphragm/metabolism; Humans; Kinetics; Liver Neoplasms/radiotherapy; Models, Theoretical; Regression Analysis
10.
Phys Med ; 65: 67-75, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31430590

ABSTRACT

INTRODUCTION: Breathing artifacts may affect the quality of four-dimensional computed tomography (4DCT) images. We developed a deep neural network (DNN)-based artifact reduction method. METHODS: We used 857 thoracoabdominal 4DCT data sets scanned with 320-section CT with no 4DCT artifact within any volume (ground-truth images). The limitations of graphics processing unit (GPU) memory prevent importing whole CT volume data into the DNN. To simulate 4DCT artifacts, we interposed 4DCT images from other breathing phases at selected couch positions. Two DNNs, DNN1 and DNN2, were trained to match the output image quality to that of the ground truth by importing a single CT image and 10 CT images, respectively. A third DNN (DNN3), consisting of an artifact classifier network and an image generator network, was added. The classifier network was based on residual networks and trained to detect CT section interposition-caused artifacts (artifact map). The generator network reduced artifacts by importing the coronal image data and the artifact map. RESULTS: By repeating the 4DCT artifact reduction with coronal images, the geometrical accuracy in the sagittal sections could be improved, especially with DNN3. Diaphragm position was most accurate when DNN3 was applied. DNN2 corrected artifacts by using CT images from other phases, but it also modified artifact-free regions. CONCLUSIONS: Additional information related to the 4DCT artifact, including information from other respiratory phases (DNN2) and/or artifact regions (DNN3), provided substantial improvement over DNN1. Interposition-related artifacts were reduced by use of an artifact positional map (DNN3).


Subject(s)
Artifacts; Deep Learning; Four-Dimensional Computed Tomography/methods; Radiotherapy, Image-Guided; Humans; Image Processing, Computer-Assisted; Neoplasms/diagnostic imaging; Neoplasms/radiotherapy; Quality Control
11.
Phys Med ; 59: 22-29, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30928062

ABSTRACT

PURPOSE: To improve respiratory gating accuracy and treatment throughput, we developed a fluoroscopic markerless tumor tracking algorithm based on a deep neural network (DNN). METHODS: In the learning stage, target positions were projected onto digitally reconstructed radiography (DRR) images from four-dimensional computed tomography (4DCT). DRR images were cropped into subimages of the target or surrounding regions to build a network that takes as input the image pattern of subimages and produces a target probability map (TPM) for estimating the target position. Using multiple subimages, a DNN was trained to generate a TPM based on the target position projected onto the DRRs. In the tracking stage, the network takes in subimages cropped from fluoroscopic images at the same positions as the subimages on the DRRs and produces TPMs, which are used to estimate target positions. We integrated a lateral correction to modify an estimated target position by using a linear regression model. We tracked five lung and five liver cases, and calculated tracking accuracy (Euclidean distance in 3D space) by subtracting the estimated position from the reference. RESULTS: Tracking accuracy averaged over all patients was 1.64 ± 0.73 mm. Accuracy for liver cases (1.37 ± 0.81 mm) was better than that for lung cases (1.90 ± 0.65 mm). Computation time was <40 ms for a pair of fluoroscopic images. CONCLUSIONS: Our markerless tracking algorithm successfully estimated tumor positions. We believe our results will provide useful information to advance tumor tracking technology.


Subject(s)
Fluoroscopy; Liver Neoplasms/diagnostic imaging; Lung Neoplasms/diagnostic imaging; Neural Networks, Computer; Four-Dimensional Computed Tomography; Humans; Image Processing, Computer-Assisted; Retrospective Studies; Time Factors
12.
Med Phys ; 46(4): 1561-1574, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30689205

ABSTRACT

PURPOSE: To perform the final quality assurance of our fluoroscopic-based markerless tumor tracking for gated carbon-ion pencil beam scanning (C-PBS) radiotherapy using a rotating gantry system, we evaluated the geometrical accuracy and tumor tracking accuracy using a moving chest phantom with simulated respiration. METHODS: The positions of the dynamic flat panel detector (DFPD) and x-ray tube are subject to changes due to gantry sag. To compensate for this, we generated a geometrical calibration table (gantry flex map) in 15° gantry angle steps by the bundle adjustment method. We evaluated five metrics: (a) Geometrical calibration was evaluated by calculating chest phantom positional error using 2D/3D registration software for each 5° step of the gantry angle. (b) Moving phantom displacement accuracy was measured (±10 mm in 1-mm steps) with a laser sensor. (c) Tracking accuracy was evaluated with machine learning (ML) and multi-template matching (MTM) algorithms, which used fluoroscopic images and digitally reconstructed radiographic (DRR) images as training data. The chest phantom was continuously moved ±10 mm in a sinusoidal path with a moving cycle of 4 s and respiration was simulated with ±5 mm expansion/contraction with a cycle of 2 s. This was performed with the gantry angle set at 0°, 45°, 120°, and 240°. (d) Four types of interlock function were evaluated: tumor velocity, DFPD image brightness variation, tracking anomaly detection, and tracking positional inconsistency in between the two corresponding rays. (e) Gate on/off latency, gating control system latency, and beam irradiation latency were measured using a laser sensor and an oscilloscope. RESULTS: By applying the gantry flex map, phantom positional accuracy was improved from 1.03 mm/0.33° to <0.45 mm/0.27° for all gantry angles. The moving phantom displacement error was 0.1 mm. 
The tracking accuracy achieved with ML was <0.49 mm (95% confidence interval [CI]) for imaging rates of 15 and 7.5 fps; owing to the long computation time, accuracy at 30 fps degraded to 1.84 mm (95% CI: 1.79-1.92 mm). The tracking positional accuracy with MTM was <0.52 mm (95% CI) for all gantry angles and imaging frame rates. The tumor velocity interlock signal delay time was 44.7 ms (=1.3 frames). The DFPD image brightness interlock latency was 34 ms (=1.0 frame). The tracking positional error was improved from 2.27 ± 2.67 mm to 0.25 ± 0.24 mm by the tracking anomaly detection interlock function. The tracking positional inconsistency interlock signal was output within 5.0 ms. The gate on/off latency was <82.7 ± 7.6 ms. The gating control system latency was <3.1 ± 1.0 ms. The beam irradiation latency was <8.7 ± 1.2 ms. CONCLUSIONS: Our markerless tracking system is now ready for clinical use. We hope to shorten the computation time needed by the ML algorithm at 30 fps in the future.
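A gantry flex map stored in 15° steps implies interpolating calibration offsets to arbitrary gantry angles. The one-dimensional toy sketch below illustrates such a lookup; the sinusoidal sag model and the single offset component are illustrative assumptions, not the paper's bundle-adjustment calibration.

```python
import numpy as np

def flex_correction(flex_map, angle_deg):
    # linearly interpolate calibration offsets stored at 15-degree
    # gantry steps (0..360) to an arbitrary gantry angle
    angles = np.arange(0.0, 361.0, 15.0)
    return float(np.interp(angle_deg % 360.0, angles, flex_map))

# toy flex map: gravity-like sinusoidal sag sampled every 15 degrees
grid = np.arange(0.0, 361.0, 15.0)
flex = 0.5 * np.sin(np.radians(grid))
```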


Subject(s)
Algorithms; Fluoroscopy/methods; Heavy Ion Radiotherapy; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/radiotherapy; Phantoms, Imaging; Radiotherapy Setup Errors/prevention & control; Computer Systems; Humans; Radiotherapy Planning, Computer-Assisted/methods