Results 1 - 20 of 44
1.
Biomaterials; 311: 122691, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38996673

ABSTRACT

Acoustic holography (AH), a promising approach for cell patterning, has emerged as a powerful tool for constructing novel in vitro 3D models that mimic the features of organs and cancers. However, understanding of changes in cell function post-AH remains limited. Furthermore, replicating complex physiological and pathological processes solely with cell lines proves challenging. Here, we employed an acoustic holographic lattice to assemble primary hepatocytes, directly isolated from mice, into a cell cluster matrix to construct a liver-shaped tissue sample. For the first time, we evaluated the liver functions of AH-patterned primary hepatocytes. The patterned model exhibited large numbers of self-assembled spheroids and superior performance across multiple core hepatocyte functions compared to cells in 2D and traditional 3D culture models. AH offers a robust protocol for long-term in vitro culture of primary cells, underscoring its potential for future applications in disease pathogenesis research, drug testing, and organ replacement therapy.

2.
Med Phys; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38980065

ABSTRACT

BACKGROUND: Protoacoustic (PA) imaging has the potential to provide real-time 3D dose verification of proton therapy. However, PA images are susceptible to severe distortion due to limited-angle acquisition. Our previous studies showed the potential of using deep learning to enhance PA images. As the model was trained using a limited number of patients' data, its efficacy was limited when applied to individual patients. PURPOSE: In this study, we developed a patient-specific deep learning method for protoacoustic imaging to improve the reconstruction quality and the accuracy of dose verification for individual patients. METHODS: Our method consists of two stages: in the first stage, a group model is trained on a diverse training set containing all patients, where a novel deep learning network is employed to directly reconstruct the initial pressure maps from the radiofrequency (RF) signals; in the second stage, we apply transfer learning to the pre-trained group model using a patient-specific dataset derived from a novel data augmentation method to tune it into a patient-specific model. Raw PA signals were simulated based on computed tomography (CT) images and the pressure map derived from the planned dose. The reconstructed PA images were evaluated against the ground truth using the root mean squared error (RMSE), structural similarity index measure (SSIM), and gamma index on 10 prostate cancer patients. Significance was evaluated by t-test with a p-value threshold of 0.05 relative to the results from the group model. RESULTS: The patient-specific model achieved an average RMSE of 0.014 (p < 0.05) and an average SSIM of 0.981 (p < 0.05), outperforming the group model. Qualitative results also demonstrated that our patient-specific approach achieved better imaging quality with more details reconstructed compared with the group model. Dose verification achieved an average RMSE of 0.011 (p < 0.05) and an average SSIM of 0.995 (p < 0.05). Gamma index evaluation demonstrated a high agreement (97.4% [p < 0.05] and 97.9% [p < 0.05] for 1%/3 mm and 1%/5 mm) between the predicted and the ground truth dose maps. Our approach took approximately 6 s to reconstruct PA images for each patient, demonstrating its feasibility for online 3D dose verification for prostate proton therapy. CONCLUSIONS: Our method demonstrated the feasibility of achieving high-precision 3D PA-based dose verification using patient-specific deep learning approaches, which can potentially be used to guide treatment to mitigate the impact of range uncertainty and improve precision. Further studies are needed to validate the clinical impact of the technique.
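To illustrate the kind of image-quality evaluation reported in this abstract (RMSE and SSIM against a ground-truth pressure map), the following sketch uses NumPy and scikit-image; it is not the authors' code, and the array names, shapes, and normalization are assumptions.

```python
# Illustrative sketch only: RMSE/SSIM evaluation of a reconstructed pressure map
# against its ground truth, as described in the abstract. Array names, shapes,
# and the [0, 1] normalization are assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate_reconstruction(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Return RMSE and SSIM between two image volumes on the same scale."""
    rmse = float(np.sqrt(np.mean((pred - truth) ** 2)))
    # data_range must be given explicitly for floating-point images
    s = float(ssim(truth, pred, data_range=float(truth.max() - truth.min())))
    return {"rmse": rmse, "ssim": s}

if __name__ == "__main__":
    truth = np.random.rand(64, 64, 64)                    # placeholder pressure map
    pred = truth + 0.01 * np.random.randn(64, 64, 64)     # placeholder reconstruction
    print(evaluate_reconstruction(pred, truth))
```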

3.
Biomaterials; 311: 122681, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38944968

ABSTRACT

Cell-laden bioprinting is a promising biofabrication strategy for regenerating bioactive transplants to address organ donor shortages. However, there has been little success in reproducing transplantable artificial organs with multiple distinctive cell types and physiologically relevant architecture. In this study, an omnidirectional printing embedded network (OPEN) is presented as a support medium for embedded 3D printing. The medium is state-of-the-art due to its one-step preparation, fast removal, and versatile ink compatibility. To test the feasibility of OPEN, primary mouse hepatocytes (PMHs) and the endothelial cell line C166 were used to print hepatospheroid-encapsulated artificial livers (HEALs) with vein structures, following predesigned anatomy-based printing paths in OPEN. PMHs self-organized into hepatocyte spheroids within the ink matrix, while the entire cross-linked structure remained intact for a minimum of ten days of cultivation. Cultivated HEALs maintained mature hepatic functions and marker gene expression at higher levels than conventional 2D and 3D culture conditions in vitro. HEALs with C166-laden vein structures promoted endogenous neovascularization in vivo compared with hepatospheroid-only liver prints within two weeks of transplantation. Collectively, the proposed platform enables the manufacture of bioactive tissues or organs resembling anatomical architecture and has broad implications for liver function replacement in clinical applications.

4.
Med Phys; 2024 Jun 23.
Article in English | MEDLINE | ID: mdl-38922912

ABSTRACT

Cone-beam CT (CBCT) is the most commonly used onboard imaging technique for target localization in radiation therapy. Conventional 3D CBCT acquires x-ray cone-beam projections at multiple angles around the patient to reconstruct 3D images of the patient in the treatment room. However, despite its wide usage, 3D CBCT is limited in imaging disease sites affected by respiratory motion or other dynamic changes within the body, as it lacks time-resolved information. To overcome this limitation, 4D-CBCT was developed to incorporate a time dimension into the imaging to account for the patient's motion during acquisition. For example, respiration-correlated 4D-CBCT divides the breathing cycle into different phase bins and reconstructs 3D images for each phase bin, ultimately generating a complete set of 4D images. 4D-CBCT is valuable for localizing tumors in the thoracic and abdominal regions, where localization accuracy is affected by respiratory motion. This is especially important for hypofractionated stereotactic body radiation therapy (SBRT), which delivers much higher fractional doses in fewer fractions than conventional fractionated treatments. Nonetheless, 4D-CBCT faces certain limitations, including long scanning times, high imaging doses, and compromised image quality due to the necessity of acquiring sufficient x-ray projections for each respiratory phase. To address these challenges, numerous methods have been developed to achieve fast, low-dose, and high-quality 4D-CBCT. This paper aims to comprehensively review the technical developments surrounding 4D-CBCT. It explores conventional algorithms and recent deep learning-based approaches, delving into their capabilities and limitations. Additionally, the paper discusses the potential clinical applications of 4D-CBCT and outlines a future roadmap, highlighting areas for further research and development. Through this exploration, readers will gain a better understanding of 4D-CBCT's capabilities and its potential to enhance radiation therapy.
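The respiration-correlated phase binning described above can be illustrated with a simple sketch: projections are assigned to phase bins of the breathing cycle, with cycles delimited by end-inhale peaks, before per-phase reconstruction. This is a generic illustration rather than any specific implementation from the reviewed literature; the signal names and the peak-based phase definition are assumptions.

```python
# Generic sketch of respiration-correlated phase binning: each projection is
# assigned a phase bin within the breathing cycle that contains it, with cycles
# delimited by end-inhale peaks. Names and conventions are assumptions.
import numpy as np
from scipy.signal import find_peaks

def assign_phase_bins(breathing_signal, sample_times, projection_times, n_bins=10):
    """Return a phase-bin index in [0, n_bins) for every projection time."""
    peaks, _ = find_peaks(breathing_signal)          # end-inhale peaks
    peak_times = sample_times[peaks]
    bins = np.empty(len(projection_times), dtype=int)
    for i, t in enumerate(projection_times):
        j = int(np.clip(np.searchsorted(peak_times, t) - 1, 0, len(peak_times) - 2))
        phase = (t - peak_times[j]) / (peak_times[j + 1] - peak_times[j])
        bins[i] = int(np.clip(phase, 0.0, 0.999) * n_bins)
    return bins

if __name__ == "__main__":
    t = np.arange(0.0, 20.0, 0.04)                   # 20 s trace sampled at 25 Hz
    signal = np.sin(2 * np.pi * t / 4.0)             # 4 s breathing period
    proj_t = np.arange(0.5, 19.5, 0.5)               # projection acquisition times
    print(assign_phase_bins(signal, t, proj_t))
```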

5.
Phys Med Biol; 69(8), 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38471184

ABSTRACT

Objective. Protoacoustic imaging has shown great promise in providing real-time 3D dose verification of proton therapy. However, the limited acquisition angle in protoacoustic imaging induces severe artifacts, which impair its accuracy for dose verification. In this study, we developed a hybrid-supervised deep learning method for protoacoustic imaging to address the limited-view issue. Approach. We proposed a Recon-Enhance two-stage deep learning method. In the Recon stage, a transformer-based network was developed to reconstruct initial pressure maps from raw acoustic signals. The network is trained in a hybrid-supervised approach, where it is first trained using supervision by the iteratively reconstructed pressure map and then fine-tuned using transfer learning and self-supervision based on a data fidelity constraint. In the Enhance stage, a 3D U-net is applied to further enhance the image quality with supervision from the ground truth pressure map. The final protoacoustic images are then converted to dose for proton verification. Main results. The results, evaluated on a dataset of 126 prostate cancer patients, achieved an average root mean squared error (RMSE) of 0.0292 and an average structural similarity index measure (SSIM) of 0.9618, outperforming related state-of-the-art methods. Qualitative results also demonstrated that our approach addressed the limited-view issue with more details reconstructed. Dose verification achieved an average RMSE of 0.018 and an average SSIM of 0.9891. Gamma index evaluation demonstrated a high agreement (94.7% and 95.7% for 1%/3 mm and 1%/5 mm) between the predicted and the ground truth dose maps. Notably, the processing time was reduced to 6 s, demonstrating its feasibility for online 3D dose verification for prostate proton therapy. Significance. Our study achieved state-of-the-art performance in the challenging task of direct reconstruction from radiofrequency signals, demonstrating the great promise of PA imaging as a highly efficient and accurate tool for in vivo 3D proton dose verification to minimize the range uncertainties of proton therapy and improve its precision and outcomes.
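A minimal sketch of the hybrid-supervised training idea summarized above (supervised pre-training against an iterative reconstruction, followed by self-supervised fine-tuning under a data-fidelity constraint) is given below. It is not the published implementation; the network, the acoustic forward operator `A`, and all tensors are placeholders.

```python
# Minimal sketch of the hybrid-supervised idea: supervised pre-training against
# an iterative reconstruction, then self-supervised fine-tuning with a data
# fidelity term. Not the published implementation; everything here is a placeholder.
import torch
import torch.nn.functional as F

def pretrain_step(model, rf_signals, iter_recon_pressure, optimizer):
    """Stage 1: supervise the prediction with an iteratively reconstructed map."""
    optimizer.zero_grad()
    loss = F.mse_loss(model(rf_signals), iter_recon_pressure)
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(model, rf_signals, A, optimizer):
    """Stage 2: data-fidelity self-supervision -- the forward-projected prediction
    (via a differentiable acoustic operator `A`, assumed) should match the RF data."""
    optimizer.zero_grad()
    loss = F.mse_loss(A(model(rf_signals)), rf_signals)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(256, 256),
                                torch.nn.Unflatten(1, (16, 16)))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    rf = torch.randn(4, 16, 16)            # placeholder RF data
    recon = torch.randn(4, 16, 16)         # placeholder iterative reconstructions
    print(pretrain_step(model, rf, recon, opt))
    print(finetune_step(model, rf, lambda x: x, opt))   # identity forward operator
```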


Subjects
Deep Learning, Proton Therapy, Male, Humans, Protons, Imaging, Three-Dimensional, Prostate, Image Processing, Computer-Assisted/methods
6.
Adv Sci (Weinh); 11(2): e2304460, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37973557

ABSTRACT

Methods for accurately predicting the responses of colorectal cancer (CRC) and colorectal cancer liver metastasis (CRLM) to personalized chemotherapy remain limited due to tumor heterogeneity. This study introduces an innovative patient-derived CRC and CRLM tumor model for preclinical investigation, utilizing 3D bioprinting (3DP) technology. Efficient construction of homogeneous in vitro 3D models of CRC/CRLM is achieved through the application of patient-derived primary tumor cells and 3D bioprinting with bioink. Genomic and histological analyses affirm that the CRC/CRLM 3DP tumor models effectively retain parental tumor biomarkers and mutation profiles. In vitro tests evaluating chemotherapeutic drug sensitivities reveal substantial tumor heterogeneity in chemotherapy responses within the 3DP CRC/CRLM models. Furthermore, a robust correlation is evident between the drug response in the CRLM 3DP model and the clinical outcomes of neoadjuvant chemotherapy. These findings imply a significant potential for the application of patient-derived 3DP cancer models in precision chemotherapy prediction and preclinical research for CRC/CRLM.


Subjects
Bioprinting, Colorectal Neoplasms, Liver Neoplasms, Humans, Colorectal Neoplasms/pathology, Prognosis, Liver Neoplasms/genetics
7.
Phys Med Biol; 68(23), 2023 Nov 29.
Article in English | MEDLINE | ID: mdl-37820684

ABSTRACT

Radiation-induced acoustic (RA) imaging is a promising technique for visualizing the otherwise invisible radiation energy deposition in tissues, enabling new imaging modalities and real-time therapy monitoring. However, RA imaging signals often suffer from poor signal-to-noise ratios (SNRs), requiring hundreds or even thousands of frames to be measured and averaged to achieve satisfactory quality. This repetitive measurement increases the ionizing radiation dose and degrades the temporal resolution of RA imaging, limiting its clinical utility. In this study, we developed a general deep inception convolutional neural network (GDI-CNN) to denoise RA signals and substantially reduce the number of frames needed for averaging. The network employs convolutions with multiple dilations in each inception block, allowing it to encode and decode signal features with varying temporal characteristics. This design generalizes GDI-CNN to denoising acoustic signals resulting from different radiation sources. The performance of the proposed method was evaluated qualitatively and quantitatively using experimental data of x-ray-induced acoustic, protoacoustic, and electroacoustic signals. Results demonstrated the effectiveness of GDI-CNN: it achieved x-ray-induced acoustic image quality comparable to 750-frame-averaged results using only 10-frame-averaged measurements, reducing the imaging dose of x-ray-acoustic computed tomography (XACT) by 98.7%; it realized proton range accuracy comparable to 1500-frame-averaged results using only 20-frame-averaged measurements, improving the range verification frequency in proton therapy from 0.5 to 37.5 Hz; and it reached electroacoustic image quality comparable to 750-frame-averaged results using only a single-frame signal, increasing the electric field monitoring frequency from 1 to 1000 frames per second. Compared to lowpass filter-based denoising, the proposed method demonstrated considerably lower mean squared errors, higher peak SNR, and higher structural similarity with respect to the corresponding high-frame-averaged measurements. The proposed deep learning-based denoising framework is a generalized method for few-frame-averaged acoustic signal denoising, which significantly improves RA imaging's clinical utility for low-dose imaging and real-time therapy monitoring.
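The multi-dilation inception design described above can be sketched as a block of parallel dilated 1D convolutions whose outputs are concatenated and fused; the channel counts, kernel size, and dilation rates below are assumptions rather than the published GDI-CNN configuration.

```python
# Sketch of an inception-style block with parallel dilated 1D convolutions, the
# general idea behind the multi-dilation design; channel counts, kernel size,
# and dilation rates are assumptions, not the published GDI-CNN.
import torch
import torch.nn as nn

class DilatedInceptionBlock1D(nn.Module):
    def __init__(self, in_ch=16, branch_ch=16, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            )
            for d in dilations                      # padding=d keeps the length for k=3
        ])
        # 1x1 convolution fuses the concatenated multi-scale features
        self.fuse = nn.Conv1d(branch_ch * len(dilations), in_ch, kernel_size=1)

    def forward(self, x):                           # x: (batch, channels, time)
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

if __name__ == "__main__":
    block = DilatedInceptionBlock1D()
    print(block(torch.randn(2, 16, 1024)).shape)    # -> torch.Size([2, 16, 1024])
```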


Subjects
Deep Learning, Neural Networks, Computer, Tomography, X-Ray Computed/methods, Signal-To-Noise Ratio, Acoustics, Image Processing, Computer-Assisted/methods
8.
ArXiv; 2023 Aug 11.
Article in English | MEDLINE | ID: mdl-37608936

ABSTRACT

Protoacoustic imaging has shown great promise in providing real-time 3D dose verification of proton therapy. However, the limited acquisition angle in protoacoustic imaging induces severe artifacts, which significantly impair its accuracy for dose verification. In this study, we developed a deep learning method with a Recon-Enhance two-stage strategy for protoacoustic imaging to address the limited-view issue. Specifically, in the Recon stage, a transformer-based network was developed to reconstruct initial pressure maps from radiofrequency signals. The network is trained in a hybrid-supervised approach, where it is first trained using supervision by the iteratively reconstructed pressure map and then fine-tuned using transfer learning and self-supervision based on a data fidelity constraint. In the Enhance stage, a 3D U-net is applied to further enhance the image quality with supervision from the ground truth pressure map. The final protoacoustic images are then converted to dose for proton verification. The results, evaluated on a dataset of 126 prostate cancer patients, achieved an average RMSE of 0.0292 and an average SSIM of 0.9618, significantly outperforming related state-of-the-art methods. Qualitative results also demonstrated that our approach addressed the limited-view issue with more details reconstructed. Dose verification achieved an average RMSE of 0.018 and an average SSIM of 0.9891. Gamma index evaluation demonstrated a high agreement (94.7% and 95.7% for 1%/3 mm and 1%/5 mm) between the predicted and the ground truth dose maps. Notably, the processing time was reduced to 6 seconds, demonstrating its feasibility for online 3D dose verification for prostate proton therapy.

9.
ArXiv; 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37163138

ABSTRACT

Radiation-induced acoustic (RA) imaging is a promising technique for visualizing radiation energy deposition in tissues, enabling new imaging modalities and real-time therapy monitoring. However, it requires measuring hundreds or even thousands of averages to achieve satisfactory signal-to-noise ratios (SNRs). This repetitive measurement increases ionizing radiation dose and degrades the temporal resolution of RA imaging, limiting its clinical utility. In this study, we developed a general deep inception convolutional neural network (GDI-CNN) to denoise RA signals to substantially reduce the number of averages. The multi-dilation convolutions in the network allow for encoding and decoding signal features with varying temporal characteristics, making the network generalizable to signals from different radiation sources. The proposed method was evaluated using experimental data of X-ray-induced acoustic, protoacoustic, and electroacoustic signals, qualitatively and quantitatively. Results demonstrated the effectiveness and generalizability of GDI-CNN: for all the enrolled RA modalities, GDI-CNN achieved comparable SNRs to the fully-averaged signals using less than 2% of the averages, significantly reducing imaging dose and improving temporal resolution. The proposed deep learning framework is a general method for few-frame-averaged acoustic signal denoising, which significantly improves RA imaging's clinical utilities for low-dose imaging and real-time therapy monitoring.

10.
Sci Total Environ; 882: 163326, 2023 Jul 15.
Article in English | MEDLINE | ID: mdl-37030361

ABSTRACT

Sewage sludge (SS) contains a certain amount of nitrogen (N), resulting in varying N content in the pyrolysis products. Investigating how to control the generation of NH3 and HCN (deleterious gas-N species) or convert them to N2, and how to maximize the transformation of N in sewage sludge (SS-N) into potentially valuable N-containing products (such as char-N and/or liquid-N), is of great significance for SS management. Understanding the nitrogen migration and transformation (NMT) mechanisms in SS during the pyrolysis process is essential for investigating these issues. Therefore, in this review, the N content and species in SS are summarized, and the factors during SS pyrolysis (such as temperature, minerals, atmosphere, and heating rate) that affect NMT in char, gas, and liquid products are analyzed. Furthermore, N control strategies for SS pyrolysis products are proposed toward environmental and economic sustainability. Finally, the state of current research and future prospects are summarized, with a focus on the generation of value-added liquid-N and char-N products while concurrently reducing NOx emissions.

11.
Phys Med Biol; 68(7), 2023 Mar 20.
Article in English | MEDLINE | ID: mdl-36848674

ABSTRACT

Background and objective. Range uncertainty is a major concern affecting the delivery precision in proton therapy. Compton camera (CC)-based prompt-gamma (PG) imaging is a promising technique to provide 3D in vivo range verification. However, the conventional back-projected PG images suffer from severe distortions due to the limited view of the CC, significantly limiting its clinical utility. Deep learning has demonstrated effectiveness in enhancing medical images from limited-view measurements. However, unlike other medical images with abundant anatomical structures, the PGs emitted along the path of a proton pencil beam take up an extremely small portion of the 3D image space, presenting both attention and imbalance challenges for deep learning. To address these issues, we proposed a two-tier deep learning-based method with a novel weighted axis-projection loss to generate precise 3D PG images for accurate proton range verification. Materials and methods. The proposed method consists of two models: first, a localization model is trained to define a region of interest (ROI) in the distorted back-projected PG image that contains the proton pencil beam; second, an enhancement model is trained to restore the true PG emissions with additional attention on the ROI. In this study, we simulated 54 proton pencil beams (energy range: 75-125 MeV, dose levels: 1 × 10⁹ protons/beam and 3 × 10⁸ protons/beam) delivered at clinical dose rates (20 kMU/min and 180 kMU/min) in a tissue-equivalent phantom using Monte Carlo (MC) simulation. PG detection with a CC was simulated using the MC-Plus-Detector-Effects model. Images were reconstructed using the kernel-weighted back-projection algorithm and were then enhanced by the proposed method. Results. The method effectively restored the 3D shape of the PG images, with the proton pencil beam range clearly visible in all testing cases. Range errors were within 2 pixels (4 mm) in all directions in most cases at the higher dose level. The proposed method is fully automatic, and the enhancement takes only ∼0.26 s. Significance. Overall, this preliminary study demonstrated the feasibility of the proposed method to generate accurate 3D PG images using a deep learning framework, providing a powerful tool for high-precision in vivo range verification of proton therapy.
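To illustrate how extra attention can be placed on the ROI found by the localization model, the sketch below uses a simple ROI-weighted L1 loss. This is not the paper's weighted axis-projection loss; the weighting scheme is an assumption chosen only to show how the voxel imbalance might be counteracted.

```python
# Illustrative ROI-weighted L1 loss addressing the voxel imbalance (the PG signal
# occupies a tiny fraction of the volume). NOT the paper's weighted
# axis-projection loss; the weighting scheme is an assumption.
import torch

def roi_weighted_l1(pred, target, roi_mask, roi_weight=10.0):
    """L1 loss with extra weight inside the ROI from the localization model.

    pred, target, roi_mask: (batch, 1, D, H, W) tensors, roi_mask binary.
    """
    weights = 1.0 + (roi_weight - 1.0) * roi_mask
    return (weights * (pred - target).abs()).mean()

if __name__ == "__main__":
    pred = torch.rand(1, 1, 16, 16, 16)
    target = torch.rand(1, 1, 16, 16, 16)
    roi = torch.zeros_like(target)
    roi[..., 6:10, 6:10, 6:10] = 1.0            # small beam-like region of interest
    print(roi_weighted_l1(pred, target, roi).item())
```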


Subjects
Deep Learning, Proton Therapy, Proton Therapy/methods, Protons, Feasibility Studies, Image Processing, Computer-Assisted/methods, Gamma Rays, Imaging, Three-Dimensional, Phantoms, Imaging, Monte Carlo Method
12.
13.
Phys Med Biol; 67(21), 2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36206745

ABSTRACT

Dose delivery uncertainty is a major concern in proton therapy, adversely affecting treatment precision and outcome. Recently, a promising technique, proton-acoustic (PA) imaging, has been developed to provide real-time in vivo 3D dose verification. However, its dosimetry accuracy is limited due to the limited-angle view of the ultrasound transducer. In this study, we developed a deep learning-based method to address the limited-view issue in PA reconstruction. A deep cascaded convolutional neural network (DC-CNN) was proposed to reconstruct 3D high-quality radiation-induced pressures using PA signals detected by a matrix array, and then derive precise 3D dosimetry from the pressures for dose verification in proton therapy. To validate its performance, we collected 81 prostate cancer patients' proton therapy treatment plans. Dose was calculated using the commercial software RayStation and was normalized to the maximum dose. The PA simulation was performed using the open-source k-Wave package. A matrix ultrasound array with 64 × 64 sensors and 500 kHz central frequency was simulated near the perineum to acquire radiofrequency (RF) signals during dose delivery. For realistic acoustic simulations, tissue heterogeneity and attenuation were considered, and Gaussian white noise was added to the acquired RF signals. The proposed DC-CNN was trained on 204 samples from 69 patients and tested on 26 samples from 12 other patients. Predicted 3D pressures and dose maps were compared against the ground truth qualitatively and quantitatively using the root mean squared error (RMSE), gamma index (GI), and Dice coefficient of isodose lines. Results demonstrated that the proposed method considerably improved the limited-view PA image quality, reconstructing pressures with clear and accurate structures and deriving doses in high agreement with the ground truth. Quantitatively, the pressure accuracy achieved an RMSE of 0.061, and the dose accuracy achieved an RMSE of 0.044, a GI (3%/3 mm) of 93.71%, and a 90% isodose line Dice of 0.922. The proposed method demonstrates the feasibility of achieving high-quality quantitative 3D dosimetry in PA imaging using a matrix array, which potentially enables online 3D dose verification for prostate proton therapy.
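The isodose-line Dice evaluation mentioned above can be sketched as follows: voxels at or above a given fraction of the maximum dose form an isodose region, and the Dice coefficient is computed between the predicted and reference regions. Variable names and the threshold convention are assumptions.

```python
# Sketch of the isodose-line Dice evaluation: voxels at or above `level` of the
# reference maximum form the isodose region. Names and threshold are assumptions.
import numpy as np

def isodose_dice(pred_dose, ref_dose, level=0.9):
    """Dice coefficient of the level*max isodose regions of two dose maps."""
    threshold = level * ref_dose.max()
    pred_region = pred_dose >= threshold
    ref_region = ref_dose >= threshold
    intersection = np.logical_and(pred_region, ref_region).sum()
    return 2.0 * intersection / (pred_region.sum() + ref_region.sum())

if __name__ == "__main__":
    ref = np.random.rand(32, 32, 32)                     # placeholder dose maps
    pred = np.clip(ref + 0.02 * np.random.randn(32, 32, 32), 0, None)
    print(f"90% isodose Dice: {isodose_dice(pred, ref):.3f}")
```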


Subjects
Deep Learning, Proton Therapy, Male, Humans, Proton Therapy/methods, Protons, Prostate, Acoustics, Phantoms, Imaging
14.
Med Phys; 49(10): 6461-6476, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35713411

ABSTRACT

BACKGROUND: Although four-dimensional cone-beam computed tomography (4D-CBCT) is valuable for providing onboard image guidance for radiotherapy of moving targets, it requires a long acquisition time to achieve sufficient image quality for target localization. To improve its utility, it is highly desirable to reduce the 4D-CBCT scanning time while maintaining high-quality images. Current motion-compensated methods are limited by slow speed and compensation errors due to severe intraphase undersampling. PURPOSE: In this work, we propose an alternative feature-compensated method to realize fast 4D-CBCT with high-quality images. METHODS: We proposed a feature-compensated deformable convolutional network (FeaCo-DCN) to perform interphase compensation in the latent feature space, which has not been explored by previous studies. In FeaCo-DCN, encoding networks extract features from each phase, and then features of other phases are deformed to those of the target phase via deformable convolutional networks. Finally, a decoding network combines and decodes features from all phases to yield high-quality images of the target phase. The proposed FeaCo-DCN was evaluated using lung cancer patient data. RESULTS: (1) FeaCo-DCN generated high-quality images with accurate and clear structures for a fast 4D-CBCT scan; (2) 4D-CBCT images reconstructed by FeaCo-DCN achieved 3D tumor localization accuracy within 2.5 mm; (3) image reconstruction is nearly real time; and (4) FeaCo-DCN achieved superior performance by all metrics compared to the top-ranked techniques in the AAPM SPARE Challenge. CONCLUSION: The proposed FeaCo-DCN is effective and efficient in reconstructing 4D-CBCT while reducing the scanning time by about 90%, which can be highly valuable for moving target localization in image-guided radiotherapy.
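As a rough illustration of deforming one phase's features toward the target phase with a deformable convolution (the general mechanism named above), the sketch below uses torchvision's DeformConv2d with offsets predicted from the concatenated feature maps. Channel sizes and the offset predictor are assumptions, not the published FeaCo-DCN.

```python
# Rough sketch of warping one phase's feature map toward the target phase with a
# deformable convolution; offsets are predicted from the concatenated features.
# Channel sizes and the offset predictor are assumptions, not the published FeaCo-DCN.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class PhaseFeatureAligner(nn.Module):
    def __init__(self, channels=64, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # 2 offset values (dx, dy) per kernel sample location
        self.offset_conv = nn.Conv2d(2 * channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, source_feat, target_feat):
        offsets = self.offset_conv(torch.cat([source_feat, target_feat], dim=1))
        return self.deform(source_feat, offsets)

if __name__ == "__main__":
    aligner = PhaseFeatureAligner()
    src, tgt = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
    print(aligner(src, tgt).shape)                 # -> torch.Size([1, 64, 32, 32])
```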


Subjects
Cone-Beam Computed Tomography, Lung Neoplasms, Algorithms, Cone-Beam Computed Tomography/methods, Four-Dimensional Computed Tomography/methods, Humans, Image Processing, Computer-Assisted/methods, Lung Neoplasms/diagnostic imaging, Lung Neoplasms/radiotherapy, Phantoms, Imaging
15.
Biofabrication; 14(3), 2022 Jun 30.
Article in English | MEDLINE | ID: mdl-35705061

ABSTRACT

Embedded freeform writing addresses the contradiction between material printability and biocompatibility in conventional extrusion-based bioprinting. However, existing embedding mediums have limitations, including a restricted printing temperature window, limited compatibility with bioinks or crosslinkers, and difficult medium removal. This work demonstrates a new embedding medium that meets these demands, composed of hydrophobically modified hydroxypropyl methyl cellulose and Pluronic F-127. The adjustable hydrophobic and hydrophilic associations between the components permit tunable thermoresponsive rheological properties, providing a programmable printing window. These associations are hardly compromised by additives without strong hydrophilic groups, which makes the medium compatible with the majority of bioinks. We use polyethylene glycol 400, a strongly hydrophilic polymer, to facilitate easy medium removal. The proposed medium enables freeform writing of millimetric complex tubular structures with great shape fidelity and cell viability. Moreover, five bioinks with up to five different crosslinking methods are patterned into arbitrary geometries in a single medium, demonstrating its potential in heterogeneous tissue regeneration. Utilizing the rheological properties of the medium, an enhanced adhesion writing method is developed to optimize the structure's strand-to-strand adhesion. In summary, this versatile embedding medium provides excellent compatibility with multiple crosslinking methods and a tunable printing window, opening new opportunities for heterogeneous tissue regeneration.


Subjects
Bioprinting, Cell Survival, Printing, Three-Dimensional, Rheology, Tissue Engineering, Tissue Scaffolds/chemistry
16.
IEEE Trans Radiat Plasma Med Sci; 6(2): 189-199, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35386934

ABSTRACT

Purpose: To investigate the feasibility of tracking targets in 2D fluoroscopic images using a novel deep learning network. Methods: Our model is designed to capture the consistent motion of tumors in fluoroscopic images with a neural network. Specifically, the model is trained using generative adversarial methods. The network has a coarse-to-fine architecture, and convolutional LSTM (long short-term memory) modules are introduced to account for the temporal correlation between different frames of the fluoroscopic images. The model was trained and tested on a digital X-CAT phantom in two studies. Series of coherent 2D fluoroscopic images representing the full respiration cycle were fed into the model to predict the localized tumor regions. In the first study, testing a wide range of scenarios, phantoms of different scales, tumor positions, sizes, and respiration amplitudes were generated to comprehensively evaluate the accuracy of the model. In the second study, testing a specific sample, phantoms were generated with fixed body and tumor sizes but different respiration amplitudes to investigate the effect of motion amplitude on tracking accuracy. The tracking accuracy was quantitatively evaluated using intersection over union (IOU), tumor area difference, and center-of-mass difference (COMD). Results: In the first, comprehensive study, the mean IOU and Dice coefficient reached 0.93 ± 0.04 and 0.96 ± 0.02, the mean tumor area difference was 4.34% ± 4.04%, and the COMD averaged 0.16 cm and 0.07 cm in the superior-inferior (SI) and left-right (LR) directions, respectively. In the second, amplitude study, the mean IOU and Dice coefficient reached 0.98 and 0.99, the mean tumor area difference was 0.17%, and the COMD averaged 0.03 cm and 0.01 cm in the SI and LR directions, respectively. These results demonstrate the robustness of our model against breathing variations. Conclusion: Our study showed the feasibility of using deep learning to track targets in x-ray fluoroscopic projection images without the aid of markers. The technique can be valuable for both pre- and during-treatment real-time target verification using fluoroscopic imaging in lung SBRT treatments.
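The tracking metrics reported above (IOU, Dice, tumor area difference, and center-of-mass difference) can be computed for binary tumor masks as sketched below; the pixel spacing and mask layout are assumptions.

```python
# Sketch of the reported tracking metrics (IOU, Dice, relative tumor area
# difference, and per-axis center-of-mass difference) for binary masks;
# the pixel spacing is an assumption.
import numpy as np
from scipy.ndimage import center_of_mass

def tracking_metrics(pred_mask, ref_mask, pixel_spacing_cm=0.1):
    pred, ref = pred_mask.astype(bool), ref_mask.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    comd = np.abs(np.array(center_of_mass(pred)) - np.array(center_of_mass(ref)))
    return {
        "iou": inter / union,
        "dice": 2.0 * inter / (pred.sum() + ref.sum()),
        "area_diff": abs(int(pred.sum()) - int(ref.sum())) / ref.sum(),
        "comd_cm": comd * pixel_spacing_cm,        # per image axis, in cm
    }

if __name__ == "__main__":
    ref = np.zeros((128, 128), dtype=bool); ref[40:60, 50:70] = True
    pred = np.zeros((128, 128), dtype=bool); pred[42:62, 51:71] = True
    print(tracking_metrics(pred, ref))
```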

17.
IEEE Trans Radiat Plasma Med Sci; 6(2): 222-230, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35386935

ABSTRACT

4D-CBCT is a powerful tool for providing respiration-resolved images for moving target localization. However, projections in each respiratory phase are intrinsically under-sampled under clinical scanning time and imaging dose constraints. Images reconstructed by compressed sensing (CS)-based methods suffer from blurred edges. Introducing an average-4D-image constraint into the CS-based reconstruction, as in prior-image-constrained CS (PICCS), can improve the edge sharpness of stable structures; however, PICCS can lead to motion artifacts in the moving regions. In this study, we proposed a dual-encoder convolutional neural network (DeCNN) to realize average-image-constrained 4D-CBCT reconstruction. The proposed DeCNN has two parallel encoders to extract features from both the under-sampled target-phase images and the average images. The features are then concatenated and fed into the decoder for high-quality target-phase image reconstruction. The 4D-CBCT reconstructed by the proposed DeCNN from real lung cancer patient data showed (1) qualitatively, clear and accurate edges for both stable and moving structures; (2) quantitatively, low intensity errors, high peak signal-to-noise ratio, and high structural similarity compared to the ground truth images; and (3) superior quality to images reconstructed by several other state-of-the-art methods, including back-projection, CS total-variation, PICCS, and a single-encoder CNN. Overall, the proposed DeCNN is effective in exploiting the average-image constraint to improve 4D-CBCT image quality.
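A minimal sketch of the dual-encoder idea described above, with separate encoders for the under-sampled target-phase image and the average image whose features are concatenated before a shared decoder, is shown below. Layer widths and depths are assumptions, not the published DeCNN.

```python
# Minimal sketch of a dual-encoder, single-decoder network: one encoder for the
# under-sampled phase image, one for the average image, concatenated features.
# Layer widths/depths are assumptions, not the published DeCNN.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class DualEncoderNet(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.enc_phase = conv_block(1, feat)       # encoder for the target-phase image
        self.enc_avg = conv_block(1, feat)         # encoder for the average image
        self.decoder = nn.Sequential(conv_block(2 * feat, feat), nn.Conv2d(feat, 1, 1))

    def forward(self, phase_img, avg_img):
        feats = torch.cat([self.enc_phase(phase_img), self.enc_avg(avg_img)], dim=1)
        return self.decoder(feats)

if __name__ == "__main__":
    net = DualEncoderNet()
    out = net(torch.randn(1, 1, 128, 128), torch.randn(1, 1, 128, 128))
    print(out.shape)                               # -> torch.Size([1, 1, 128, 128])
```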

18.
Phys Med Biol; 67(8), 2022 Apr 01.
Article in English | MEDLINE | ID: mdl-35313293

ABSTRACT

Objective. 4D-CBCT provides phase-resolved images valuable for radiomics analysis for outcome prediction throughout treatment courses. However, 4D-CBCT suffers from streak artifacts caused by under-sampling, which severely degrade the accuracy of radiomic features. We previously developed group-patient-trained deep learning methods to enhance 4D-CBCT quality for radiomics analysis, which were not optimized for individual patients. In this study, a patient-specific model was developed to further improve the accuracy of 4D-CBCT-based radiomics analysis for individual patients. Approach. This patient-specific model was trained with intra-patient data. Specifically, the patient planning 4D-CT was augmented through image translation, rotation, and deformation to generate 305 CT volumes from 10 volumes, simulating possible patient positions during onboard image acquisition. For each phase, 72 projections were simulated from the 4D-CT and used to reconstruct 4D-CBCT with the FDK back-projection algorithm. The patient-specific model was trained using these 305 paired sets of patient-specific 4D-CT and 4D-CBCT data to enhance the 4D-CBCT images to match the 4D-CT images as ground truth. For model testing, 4D-CBCT was simulated from a separate set of 4D-CT images acquired from the same patient and was then enhanced by the patient-specific model. Radiomic features were then extracted from the testing 4D-CT, 4D-CBCT, and enhanced 4D-CBCT image sets for comparison. The patient-specific model was tested using four lung-SBRT patients' data and compared with the group-based model. The impact of model dimensionality, region-of-interest (ROI) selection, and loss function on model accuracy was also investigated. Main results. Compared with the group-based model, the patient-specific model further improved the accuracy of radiomic features, especially for features with large errors in the group-based model. For example, the 3D whole-body and ROI loss-based patient-specific model reduced the errors of the first-order median feature by 83.67%, the wavelet LLL feature maximum by 91.98%, and the wavelet HLL skewness feature by 15.0% on average for the four patients tested. In addition, patient-specific models with different dimensionality (2D versus 3D) or loss functions (L1 versus L1 + VGG + GAN) achieved comparable results for improving radiomics accuracy. Using whole-body or whole-body + ROI L1 loss achieved better results than using the ROI L1 loss alone. Significance. This study demonstrated that the patient-specific model is more effective than the group-based model at improving the accuracy of 4D-CBCT radiomic feature analysis, which could potentially improve the precision of outcome prediction in radiotherapy.
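The rigid part of the data augmentation described above (random translation and rotation of CT volumes) can be sketched with SciPy as follows; deformable augmentation is omitted, and the ranges and axes are assumptions rather than the study's exact parameters.

```python
# Sketch of the rigid part of the described augmentation (random translation and
# rotation of CT volumes); deformable augmentation is omitted, and the ranges,
# axes, and interpolation order are assumptions.
import numpy as np
from scipy.ndimage import rotate, shift

def augment_volume(ct_volume, rng=None):
    """Return a randomly rotated and translated copy of a (z, y, x) CT volume."""
    if rng is None:
        rng = np.random.default_rng()
    angle = rng.uniform(-5.0, 5.0)                 # degrees, in the axial (y, x) plane
    offsets = rng.uniform(-5.0, 5.0, size=3)       # voxels along each axis
    rotated = rotate(ct_volume, angle, axes=(1, 2), reshape=False, order=1)
    return shift(rotated, offsets, order=1)

if __name__ == "__main__":
    volume = np.random.rand(40, 64, 64)            # placeholder planning-CT volume
    print(augment_volume(volume).shape)            # -> (40, 64, 64)
```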


Subjects
Deep Learning, Lung Neoplasms, Spiral Cone-Beam Computed Tomography, Cone-Beam Computed Tomography/methods, Four-Dimensional Computed Tomography/methods, Humans, Image Processing, Computer-Assisted/methods, Lung Neoplasms/radiotherapy, Phantoms, Imaging
19.
ACS Nano; 16(2): 3300-3310, 2022 Feb 22.
Article in English | MEDLINE | ID: mdl-35099174

ABSTRACT

Pathogenic biofilms are up to 1000-fold more drug-resistant than planktonic pathogens and cause about 80% of all chronic infections worldwide. The lack of prompt and reliable biofilm identification methods severely hinders the diagnosis and treatment of biofilm infections. Here, we developed a machine-learning-aided cocktail assay for prompt and reliable biofilm detection. Lanthanide nanoparticles with different emissions, surface charges, and hydrophilicity are formulated into cocktail kits. The lanthanide nanoparticles in the cocktail kits offer competitive interactions with the biofilm and further maximize the charge and hydrophilicity differences between biofilms. The physicochemical heterogeneities of biofilms were transformed by the cocktail kits into luminescence intensities at different wavelengths. The luminescence signals were used as training data for a random forest algorithm, which could identify unknown biofilms within minutes after training. Electrostatic attractions and hydrophobic interactions were shown to dominate the binding of the cocktail kits to the biofilms. By rationally designing the charge and hydrophilicity of the cocktail kit, unknown biofilms of pathogenic clinical isolates were identified with an overall accuracy of over 80% based on the random forest algorithm. Moreover, the antibiotic-loaded cocktail nanoprobes efficiently eradicated biofilms since the nanoprobes could penetrate deep into the biofilms. This work can serve as a reliable technique for the diagnosis of biofilm infections and can also provide guidance for the design of multiplex assays for detecting biochemical compounds beyond biofilms.
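The random-forest step described above can be sketched with scikit-learn, treating luminescence intensities at several wavelengths as the feature vector for each biofilm sample. The data here are synthetic placeholders, and the feature layout and classifier settings are assumptions.

```python
# Sketch of the random-forest classification step: luminescence intensities at
# several wavelengths form the feature vector for each biofilm sample. The data
# are synthetic placeholders; feature layout and settings are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 6))                  # 6 cocktail-probe intensity channels per sample
y = rng.integers(0, 4, size=200)          # 4 hypothetical biofilm classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy on placeholder data: {clf.score(X_test, y_test):.2f}")
```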


Subjects
Bacterial Infections, Lanthanoid Series Elements, Metal Nanoparticles, Anti-Bacterial Agents/chemistry, Biofilms, Humans, Machine Learning, Microbial Sensitivity Tests
20.
Precis Radiat Oncol; 6(2): 110-118, 2022 Jun.
Article in English | MEDLINE | ID: mdl-37064765

ABSTRACT

Objective: Despite its prevalence, cone beam computed tomography (CBCT) has poor soft-tissue contrast, making it challenging to localize liver tumors. We propose a patient-specific deep learning model to generate synthetic magnetic resonance imaging (MRI) from CBCT to improve tumor localization. Methods: A key innovation is the use of patient-specific CBCT-MRI image pairs to train a deep learning model to generate synthetic MRI from CBCT. Specifically, the patient planning CT was deformably registered to a prior MRI and then used to simulate CBCT with simulated projections and Feldkamp-Davis-Kress reconstruction. These CBCT-MRI image pairs were augmented using translations and rotations to generate sufficient patient-specific training data. A U-Net-based deep learning model was developed and trained to generate synthetic MRI from CBCT in the liver and then tested on a different CBCT dataset. Synthetic MRIs were quantitatively evaluated against ground-truth MRI. Results: The synthetic MRI demonstrated superb soft-tissue contrast with clear tumor visualization. On average, the synthetic MRI achieved 28.01, 0.025, and 0.929 for peak signal-to-noise ratio, mean square error, and structural similarity index, respectively, outperforming CBCT images. The model performance was consistent across all three patients tested. Conclusion: Our study demonstrated the feasibility of a patient-specific model to generate synthetic MRI from CBCT for liver tumor localization, opening up the potential to democratize MRI guidance in clinics with conventional LINACs.
