Results 1 - 20 of 1,916
1.
J Biomed Opt ; 29(7): 070901, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39006312

ABSTRACT

Significance: Photoacoustic computed tomography (PACT), a hybrid imaging modality combining optical excitation with acoustic detection, has rapidly emerged as a prominent biomedical imaging technique. Aim: We review the challenges and advances of PACT, including (1) limited view, (2) anisotropic resolution, (3) spatial aliasing, (4) acoustic heterogeneity (speed-of-sound mismatch), and (5) fluence correction for spectral unmixing. Approach: We performed a comprehensive literature review to summarize the key challenges in PACT toward practical applications and discuss various solutions. Results: There is a wide range of contributions from both industry and academia. Various approaches, including emerging deep learning methods, have been proposed to further improve the performance of PACT. Conclusions: We outline contemporary technologies aimed at tackling the challenges in PACT applications.


Subject(s)
Photoacoustic Techniques , Tomography, X-Ray Computed , Photoacoustic Techniques/methods , Humans , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Anisotropy , Deep Learning
2.
Insights Imaging ; 15(1): 167, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971933

ABSTRACT

OBJECTIVES: Detection of liver metastases is crucial for guiding oncological management. Computed tomography with iterative reconstruction is widely used for this indication but has certain limitations. Deep learning image reconstructions (DLIR) use deep neural networks to achieve a significant noise reduction compared to iterative reconstructions. While reports have demonstrated improvements in image quality, their impact on liver metastases detection remains unclear. Our main objective was to determine whether DLIR affects the number of detected liver metastases. Our secondary objective was to compare metastases conspicuity between the two reconstruction methods. METHODS: CT images of 121 patients with liver metastases were reconstructed using 50% adaptive statistical iterative reconstruction (50%-ASiR-V) and three levels of DLIR (DLIR-low, DLIR-medium, and DLIR-high). For each reconstruction, two double-blinded radiologists counted up to a maximum of ten metastases. Visibility and contour definition were also assessed. Comparisons between methods for continuous parameters were performed using mixed models. RESULTS: One reader detected a higher number of metastases with DLIR-high: 7 (2-10) (median (Q1-Q3); total 733) versus 5 (2-10) for DLIR-medium, DLIR-low, and ASiR-V, respectively (p < 0.001). In ten patients, both readers simultaneously detected more metastases with DLIR-high, which a third reader confirmed. Metastases visibility and contour definition were better with DLIR than with ASiR-V. CONCLUSION: DLIR-high enhanced the detection and visibility of liver metastases compared to ASiR-V, increasing the number of liver metastases detected. CRITICAL RELEVANCE STATEMENT: Deep learning-based reconstruction at high strength increased liver metastases detection compared to hybrid iterative reconstruction and can be used in clinical oncology imaging to help overcome the limitations of CT. KEY POINTS: Detection of liver metastases is crucial but limited with standard CT reconstructions. More liver metastases were detected with deep learning CT reconstruction than with iterative reconstruction. Deep learning reconstructions are suitable for hepatic metastases staging and follow-up.

3.
Photoacoustics ; 38: 100618, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38957484

ABSTRACT

Photoacoustic tomography (PAT), as a novel medical imaging technology, provides structural, functional, and metabolic information of biological tissue in vivo. Sparse-sampling PAT, or SS-PAT, generates images with a smaller number of detectors, yet its image reconstruction is inherently ill-posed. Model-based methods are the state of the art for SS-PAT image reconstruction, but they require the design of complex handcrafted priors. Owing to their ability to derive robust priors from labeled datasets, deep-learning-based methods have achieved great success in solving inverse problems, yet their interpretability is poor. Herein, we propose a novel SS-PAT image reconstruction method based on deep algorithm unrolling (DAU), which integrates the advantages of model-based and deep-learning-based methods. We first provide a thorough analysis of DAU for PAT reconstruction. Then, to incorporate the structural prior constraint, we propose a nested DAU framework based on plug-and-play alternating direction method of multipliers (PnP-ADMM) to deal with the sparse-sampling problem. Experimental results on numerical simulations, in vivo animal imaging, and multispectral unmixing demonstrate that the proposed DAU image reconstruction framework outperforms state-of-the-art model-based and deep-learning-based methods.
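
As a rough illustration of the plug-and-play ADMM scheme the unrolled framework builds on, the sketch below alternates a conjugate-gradient data-fidelity step, a denoiser step standing in for the proximal operator of the prior, and a dual update. The operators `A`/`At`, the measurements `y`, and the `denoise` callable are placeholder assumptions, not the authors' implementation.

```python
import numpy as np

def conjugate_gradient(op, b, x, n_iters=10):
    # Solve op(x) = b for a symmetric positive-definite linear operator `op`.
    r = b - op(x)
    p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(n_iters):
        Ap = op(p)
        alpha = rs / np.vdot(p, Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def pnp_admm(A, At, y, denoise, x0, rho=1.0, n_iters=20):
    """Plug-and-play ADMM sketch for min_x 0.5||Ax - y||^2 + R(x), where the
    proximal step of R(x) is replaced by a (possibly learned) denoiser."""
    x, z, u = x0.copy(), x0.copy(), np.zeros_like(x0)
    for _ in range(n_iters):
        # x-update: (A^T A + rho I) x = A^T y + rho (z - u), solved by CG
        x = conjugate_gradient(lambda v: At(A(v)) + rho * v, At(y) + rho * (z - u), x)
        # z-update: denoiser in place of the prior's proximal operator
        z = denoise(x + u)
        # dual update
        u = u + x - z
    return x
```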

4.
F1000Res ; 13: 691, 2024.
Article in English | MEDLINE | ID: mdl-38962692

ABSTRACT

Background: Non-contrast computed tomography (NCCT) plays a pivotal role in assessing central nervous system disorders and is a crucial diagnostic method. Iterative reconstruction (IR) methods have enhanced image quality (IQ) but may result in a blotchy appearance and decreased resolution for subtle contrasts. The deep-learning image reconstruction (DLIR) algorithm, which integrates a convolutional neural network (CNN) into the reconstruction process, generates high-quality images with minimal noise. Hence, the objective of this study was to assess the IQ of the Precise Image (DLIR) and the IR technique (iDose 4) for NCCT of the brain. Methods: This is a prospective study. Thirty patients who underwent NCCT of the brain were included. The images were reconstructed using DLIR-standard and iDose 4. Qualitative IQ analysis parameters, such as overall image quality (OQ), subjective image noise (SIN), and artifacts, were measured. Quantitative IQ analysis parameters, such as computed tomography (CT) attenuation (HU), image noise (IN), posterior fossa index (PFI), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in the basal ganglia (BG) and centrum semiovale (CSO), were measured. Paired t-tests were performed for qualitative and quantitative IQ analyses between iDose 4 and DLIR-standard. Kappa statistics were used to assess inter-observer agreement for qualitative analysis. Results: Quantitative IQ analysis showed significant differences (p<0.05) in IN, SNR, and CNR between iDose 4 and DLIR-standard at the BG and CSO levels. With DLIR-standard, IN was reduced (by 41.8-47.6%), while SNR (by 65-82%) and CNR (by 68-78.8%) were increased. PFI was reduced (by 27.08%) with DLIR-standard. Qualitative IQ analysis showed significant differences (p<0.05) in OQ, SIN, and artifacts between DLIR-standard and iDose 4. DLIR-standard showed higher qualitative IQ scores than iDose 4. Conclusion: DLIR-standard yielded superior quantitative and qualitative IQ compared to the IR technique (iDose 4). DLIR-standard significantly reduced IN and artifacts compared to iDose 4 in NCCT of the brain.


Subject(s)
Brain , Deep Learning , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Pilot Projects , Female , Tomography, X-Ray Computed/methods , Male , Prospective Studies , Middle Aged , Brain/diagnostic imaging , Adult , Image Processing, Computer-Assisted/methods , Aged , Signal-To-Noise Ratio , Algorithms
5.
Comput Biol Med ; 178: 108701, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38901186

ABSTRACT

Decoding visual representations from human brain activity has emerged as a thriving research domain, particularly in the context of brain-computer interfaces. Our study presents an innovative method that employs knowledge distillation to train an EEG classifier and reconstruct images from the ImageNet and THINGS-EEG 2 datasets using only electroencephalography (EEG) data from participants who have viewed the images themselves (i.e. "brain decoding"). We analyzed EEG recordings from 6 participants for the ImageNet dataset and 10 for the THINGS-EEG 2 dataset, exposed to images spanning unique semantic categories. These EEG readings were converted into spectrograms, which were then used to train a convolutional neural network (CNN), integrated with a knowledge distillation procedure based on a pre-trained Contrastive Language-Image Pre-Training (CLIP)-based image classification teacher network. This strategy allowed our model to attain a top-5 accuracy of 87%, significantly outperforming a standard CNN and various RNN-based benchmarks. Additionally, we incorporated an image reconstruction mechanism based on pre-trained latent diffusion models, which allowed us to generate an estimate of the images that had elicited EEG activity. Therefore, our architecture not only decodes images from neural activity but also offers a credible image reconstruction from EEG only, paving the way for, e.g., swift, individualized feedback experiments.
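
The knowledge-distillation step described above can be sketched as a soft-target loss between the EEG student network and the frozen CLIP-based teacher. The temperature, weighting, and tensor shapes below are illustrative assumptions, not the authors' exact training recipe.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Soft-target distillation: KL divergence between temperature-softened
    teacher and student class distributions, plus a hard-label cross-entropy term."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

# Example: a batch of 8 spectrogram-derived student predictions over 40 classes.
student = torch.randn(8, 40)
teacher = torch.randn(8, 40)
labels = torch.randint(0, 40, (8,))
loss = distillation_loss(student, teacher, labels)
```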

6.
Biomed J Sci Tech Res ; 55(2): 46779-46884, 2024.
Article in English | MEDLINE | ID: mdl-38883320

ABSTRACT

There are fewer than 10 projection views in extreme few-view tomography. The state-of-the-art methods for reconstructing images from few-view data are based on compressed sensing. Compressed sensing relies on a sparsification transformation and total variation (TV) norm minimization. However, for extreme few-view tomography, compressed sensing methods are not powerful enough. This paper seeks additional information to serve as extra constraints so that extreme few-view tomography becomes possible. In transmission tomography, we roughly know the linear attenuation coefficients of the objects to be imaged, and we can use these values as extra constraints. Computer simulations show that these extra constraints are helpful and improve reconstruction quality.
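
One simple way to impose the "known attenuation coefficients" idea described above is to alternate a data-consistency update with a step that pulls each pixel toward the nearest allowed attenuation value. The projector pair `A`/`At`, the step size, and the relaxed snapping step are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def constrained_fewview_recon(A, At, y, known_values, x0, n_iters=200, step=1e-3, relax=0.5):
    """Sketch of extreme few-view reconstruction with an extra constraint:
    gradient descent on 0.5*||Ax - y||^2, followed by a relaxed projection of
    each pixel toward the nearest known linear attenuation coefficient
    (e.g., air, soft tissue, bone)."""
    x = x0.copy()
    vals = np.asarray(known_values, dtype=float)
    for _ in range(n_iters):
        # data-consistency step
        x = x - step * At(A(x) - y)
        # constraint step: move partway toward the closest allowed value
        nearest = vals[np.argmin(np.abs(x[..., None] - vals), axis=-1)]
        x = (1.0 - relax) * x + relax * nearest
    return x
```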

7.
BMC Med Imaging ; 24(1): 159, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38926711

ABSTRACT

BACKGROUND: To assess the improvement in image quality and diagnostic acceptance of thinner-slice iodine maps enabled by deep learning image reconstruction (DLIR) in abdominal dual-energy CT (DECT). METHODS: This study prospectively included 104 participants with 136 lesions. Four series of iodine maps were generated based on portal-venous scans of contrast-enhanced abdominal DECT: 5-mm and 1.25-mm using adaptive statistical iterative reconstruction-V (ASiR-V) with 50% blending (AV-50), and 1.25-mm using DLIR with medium (DLIR-M) and high strength (DLIR-H). The iodine concentrations (IC) and their standard deviations at nine anatomical sites were measured, and the corresponding coefficients of variation (CV) were calculated. Noise power spectrum (NPS) and edge-rise slope (ERS) were measured. Five radiologists rated image quality in terms of image noise, contrast, sharpness, texture, and small-structure visibility, and evaluated the overall diagnostic acceptability of the images and lesion conspicuity. RESULTS: The four reconstructions maintained unchanged IC values at the nine anatomical sites (all p > 0.999). Compared to 1.25-mm AV-50, 1.25-mm DLIR-M and DLIR-H significantly reduced CV values (all p < 0.001) and presented lower noise and noise peak (both p < 0.001). Compared to 5-mm AV-50, 1.25-mm images had higher ERS (all p < 0.001). The differences in peak and average spatial frequency among the four reconstructions were relatively small but statistically significant (both p < 0.001). The 1.25-mm DLIR-M images were rated higher than the 5-mm and 1.25-mm AV-50 images for diagnostic acceptability and lesion conspicuity (all p < 0.001). CONCLUSIONS: DLIR may facilitate the use of thinner-slice iodine maps in abdominal DECT, improving image quality, diagnostic acceptability, and lesion conspicuity.


Subject(s)
Contrast Media , Deep Learning , Radiographic Image Interpretation, Computer-Assisted , Radiography, Abdominal , Radiography, Dual-Energy Scanned Projection , Tomography, X-Ray Computed , Humans , Prospective Studies , Female , Male , Middle Aged , Aged , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Radiography, Abdominal/methods , Radiography, Dual-Energy Scanned Projection/methods , Adult , Iodine , Aged, 80 and over
8.
Phys Eng Sci Med ; 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38884668

ABSTRACT

This study aimed to evaluate the impact of radiation dose and focal spot size on the image quality of super-resolution deep-learning reconstruction (SR-DLR) in comparison with iterative reconstruction (IR) and normal-resolution DLR (NR-DLR) algorithms for cardiac CT. A Catphan-700 phantom was scanned on a 320-row scanner at six radiation doses (small and large focal spots at 1.4-4.3 and 5.8-8.8 mGy, respectively). Images were reconstructed using hybrid-IR, model-based-IR, NR-DLR, and SR-DLR algorithms. Noise properties were evaluated by plotting the noise power spectrum (NPS). Spatial resolution was quantified with the task-based transfer function (TTF); Polystyrene, Delrin, and Bone-50% inserts were used for low-, intermediate-, and high-contrast spatial resolution. The detectability index (d') was calculated. Image noise, noise texture, edge sharpness of low- and intermediate-contrast objects, delineation of fine high-contrast objects, and overall quality of the four reconstructions were visually ranked. Results indicated that among the four reconstructions, SR-DLR yielded the lowest noise magnitude and NPS peak, as well as the highest average NPS frequency, TTF50%, d' values, and visual rank at each radiation dose. For all reconstructions, the intermediate- to high-contrast spatial resolution was maximized at 4.3 mGy, while the lowest noise magnitude and highest d' were attained at 8.8 mGy. SR-DLR at 4.3 mGy exhibited superior noise performance, intermediate- to high-contrast spatial resolution, d' values, and visual rank compared to the other reconstructions at 8.8 mGy. Therefore, SR-DLR may yield superior diagnostic image quality and facilitate radiation dose reduction compared to the other reconstructions, particularly when combined with small focal spot scanning.
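
For readers unfamiliar with how the reported metrics fit together, the sketch below estimates a 2D noise power spectrum from uniform-phantom ROIs and combines it with a task function and TTF into a detectability index, assuming the standard non-prewhitening model observer. The ROI handling and frequency grid are illustrative assumptions rather than this study's exact analysis.

```python
import numpy as np

def nps_2d(rois, pixel_size):
    """Estimate a 2D noise power spectrum from a stack of uniform-region ROIs
    (shape: n_rois x N x N), after subtracting each ROI's mean."""
    n, ny, nx = rois.shape
    nps = np.zeros((ny, nx))
    for roi in rois:
        detrended = roi - roi.mean()
        nps += np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
    return nps * pixel_size ** 2 / (n * ny * nx)

def detectability_index(task_fn, ttf, nps, freq_bin_area):
    """Non-prewhitening model-observer detectability on a 2D frequency grid:
    d'^2 = (sum |W|^2 TTF^2 dA)^2 / (sum |W|^2 TTF^2 NPS dA)."""
    w2t2 = np.abs(task_fn) ** 2 * ttf ** 2
    numerator = (np.sum(w2t2) * freq_bin_area) ** 2
    denominator = np.sum(w2t2 * nps) * freq_bin_area
    return np.sqrt(numerator / denominator)
```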

9.
Sci Rep ; 14(1): 13850, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38879679

ABSTRACT

Recently, ConvNeXt and blueprint separable convolution (BSConv), constructed from standard ConvNet modules, have demonstrated competitive performance in advanced computer vision tasks. This paper proposes an efficient model (BCRN) based on BSConv and the ConvNeXt residual structure for single-image super-resolution, which achieves superior performance with a very small number of parameters. Specifically, the residual block (BCB) of the BCRN utilizes the ConvNeXt residual structure and BSConv to significantly reduce the number of parameters. Within the residual block, enhanced spatial attention and contrast-aware channel attention modules are simultaneously introduced to prioritize valuable features within the network. Multiple residual blocks are then stacked to form the backbone network, with dense connections between them to enhance feature utilization. Our model has far fewer parameters than other state-of-the-art lightweight models, while experimental results on benchmark datasets demonstrate its excellent performance. The code will be available at https://github.com/kptx666/BCRN.
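
As a rough sketch of the building blocks named above, the PyTorch snippet below pairs a blueprint separable convolution (a 1x1 pointwise convolution followed by a depthwise convolution) with a ConvNeXt-style residual wrapper. The layer sizes are assumptions, and the BCRN's attention modules and dense connections are omitted, so this is not the authors' BCB implementation.

```python
import torch
import torch.nn as nn

class BSConvU(nn.Module):
    """Blueprint separable convolution (unconstrained variant): a 1x1 pointwise
    convolution followed by a k x k depthwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.depthwise = nn.Conv2d(out_ch, out_ch, kernel_size,
                                   padding=kernel_size // 2, groups=out_ch, bias=False)

    def forward(self, x):
        return self.depthwise(self.pointwise(x))

class ResidualBSBlock(nn.Module):
    """ConvNeXt-style residual block built on BSConv; the attention modules of
    the actual BCRN residual block are omitted in this sketch."""
    def __init__(self, channels, expansion=2):
        super().__init__()
        self.body = nn.Sequential(
            BSConvU(channels, channels * expansion),
            nn.GELU(),
            BSConvU(channels * expansion, channels),
        )

    def forward(self, x):
        return x + self.body(x)

# The block is shape-preserving, so it can be stacked freely:
x = torch.randn(1, 64, 48, 48)
assert ResidualBSBlock(64)(x).shape == x.shape
```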

10.
Phys Med Biol ; 69(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38870947

ABSTRACT

Objective. Cone-beam computed tomography (CBCT) is widely used in image-guided radiotherapy. Reconstructing CBCTs from limited-angle acquisitions (LA-CBCT) is highly desired for improved imaging efficiency, dose reduction, and better mechanical clearance. LA-CBCT reconstruction, however, suffers from severe under-sampling artifacts, making it a highly ill-posed inverse problem. Diffusion models can generate data/images by reversing a data-noising process through learned data distributions, and can be incorporated as a denoiser/regularizer in LA-CBCT reconstruction. In this study, we developed a diffusion model-based framework, the prior frequency-guided diffusion model (PFGDM), for robust and structure-preserving LA-CBCT reconstruction. Approach. PFGDM uses a conditioned diffusion model as a regularizer for LA-CBCT reconstruction, and the condition is based on high-frequency information extracted from patient-specific prior CT scans, which provides a strong anatomical prior for LA-CBCT reconstruction. Specifically, we developed two variants of PFGDM (PFGDM-A and PFGDM-B) with different conditioning schemes. PFGDM-A applies the high-frequency CT information condition until a pre-optimized iteration step and drops it afterwards to enable both similar and differing CT/CBCT anatomies to be reconstructed. PFGDM-B, on the other hand, continuously applies the prior CT information condition in every reconstruction step, but with a decaying mechanism, to gradually phase out the reconstruction guidance from the prior CT scans. The two variants of PFGDM were tested and compared with currently available LA-CBCT reconstruction solutions via metrics including peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). Main results. PFGDM outperformed all traditional and diffusion model-based methods. The mean (s.d.) PSNR/SSIM were 27.97 (3.10)/0.949 (0.027), 26.63 (2.79)/0.937 (0.029), and 23.81 (2.25)/0.896 (0.036) for PFGDM-A, and 28.20 (1.28)/0.954 (0.011), 26.68 (1.04)/0.941 (0.014), and 23.72 (1.19)/0.894 (0.034) for PFGDM-B, based on 120°, 90°, and 30° orthogonal-view scan angles, respectively. In contrast, the PSNR/SSIM at 30° was 19.61 (2.47)/0.807 (0.048) for DiffusionMBIR, a diffusion-based method without prior CT conditioning. Significance. PFGDM reconstructs high-quality LA-CBCTs under very limited gantry angles, allowing faster and more flexible CBCT scans with dose reduction.
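
A minimal sketch of how the PSNR/SSIM comparison reported above is commonly computed, using scikit-image; the array handling and data-range choice are assumptions, and this is not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_reconstruction(reference, reconstruction):
    """Compute PSNR and SSIM of a reconstructed slice against its reference,
    taking the data range from the reference image."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=data_range)
    ssim = structural_similarity(reference, reconstruction, data_range=data_range)
    return psnr, ssim

# Example with synthetic data standing in for CT/CBCT slices:
ref = np.random.rand(256, 256)
rec = ref + 0.05 * np.random.randn(256, 256)
print(evaluate_reconstruction(ref, rec))
```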


Subject(s)
Cone-Beam Computed Tomography , Image Processing, Computer-Assisted , Cone-Beam Computed Tomography/methods , Humans , Diffusion , Image Processing, Computer-Assisted/methods , Phantoms, Imaging
11.
Vis Comput Ind Biomed Art ; 7(1): 13, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38861067

ABSTRACT

Early diagnosis and accurate prognosis of colorectal cancer are critical for determining optimal treatment plans and maximizing patient outcomes, especially as the disease progresses into liver metastases. Computed tomography (CT) is a frontline tool for this task; however, the preservation of predictive radiomic features is highly dependent on the scanning protocol and reconstruction algorithm. We hypothesized that image reconstruction with a high-frequency kernel could result in a better characterization of liver metastasis features via deep neural networks. This kernel produces images that appear noisier but preserve more sinogram information. A simulation pipeline was developed to study the effects of imaging parameters on the ability to characterize the features of liver metastases. This pipeline utilizes a fractal approach to generate a diverse population of shapes representing virtual metastases and then superimposes them on a realistic CT liver region to perform a virtual CT scan using CatSim. Datasets of 10,000 liver metastases were generated, scanned, and reconstructed using either standard or high-frequency kernels. These data were used to train and validate deep neural networks to recover crafted metastasis characteristics, such as internal heterogeneity, edge sharpness, and edge fractal dimension. In the absence of noise, models scored, on average, 12.2% (α = 0.012) and 7.5% (α = 0.049) lower squared error for characterizing edge sharpness and fractal dimension, respectively, when using high-frequency reconstructions compared to standard ones. However, the differences in performance were not statistically significant when a typical level of CT noise was simulated in the clinical scan. Our results suggest that high-frequency reconstruction kernels can better preserve information for downstream artificial intelligence-based radiomic characterization, provided that noise is limited. Future work should investigate the information-preserving kernels in datasets with clinical labels.

12.
Radiologie (Heidelb) ; 2024 Jun 12.
Article in German | MEDLINE | ID: mdl-38864874

ABSTRACT

CLINICAL/METHODICAL ISSUE: Magnetic resonance imaging (MRI) is a central component of musculoskeletal imaging. However, long image acquisition times can pose practical barriers in clinical practice. STANDARD RADIOLOGICAL METHODS: MRI is the established modality of choice in the diagnostic workup of injuries and diseases of the musculoskeletal system due to its high spatial resolution, excellent signal-to-noise ratio (SNR), and unparalleled soft tissue contrast. METHODOLOGICAL INNOVATIONS: Continuous advances in hardware and software technology over the last few decades have enabled fourfold acceleration of 2D turbo spin echo (TSE) imaging without compromising image quality or diagnostic performance. The recent clinical introduction of deep learning (DL)-based image reconstruction algorithms further minimizes the interdependency between SNR, spatial resolution, and image acquisition time and allows the use of higher acceleration factors. PERFORMANCE: The combined use of advanced acceleration techniques and DL-based image reconstruction holds enormous potential to maximize efficiency, patient comfort, access, and value of musculoskeletal MRI while maintaining excellent diagnostic accuracy. ACHIEVEMENTS: Accelerated MRI with DL-based image reconstruction has rapidly found its way into clinical practice and proven to be of added value. Furthermore, recent investigations suggest that the potential of this technology has not yet been fully exploited. PRACTICAL RECOMMENDATIONS: Deep learning-reconstructed fast musculoskeletal MRI examinations can be reliably used for diagnostic workup and follow-up of musculoskeletal pathologies in clinical practice.

13.
Magn Reson Med ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38860514

ABSTRACT

PURPOSE: Hyperpolarized 129Xe MRI benefits from non-Cartesian acquisitions that sample k-space efficiently and rapidly. However, their reconstructions are complex and burdened by decay processes unique to hyperpolarized gas. Currently used gridded reconstructions are prone to artifacts caused by magnetization decay and are ill-suited for undersampling. We present a compressed sensing (CS) reconstruction approach that incorporates magnetization decay in the forward model, thereby producing images with increased sharpness and contrast, even in undersampled data. METHODS: Radio-frequency, T1, and T2* decay processes were incorporated into the forward model and solved using iterative methods including CS. The decay-modeled reconstruction was validated in simulations and then tested in 2D/3D-spiral ventilation and 3D-radial gas-exchange MRI. Quantitative metrics including apparent SNR and sharpness were compared between gridded, CS, and twofold-undersampled CS reconstructions. Observations were validated in gas-exchange data collected from 15 healthy and 25 post-hematopoietic-stem-cell-transplant participants. RESULTS: CS reconstructions in simulations yielded images with threefold increases in accuracy. CS increased sharpness and contrast for in vivo ventilation imaging and showed greater accuracy for undersampled acquisitions. CS improved gas-exchange imaging, particularly in the dissolved phase, where apparent SNR improved and structure became discernible. Finally, CS showed repeatability in important global gas-exchange metrics, including the median dissolved-gas signal ratio and the median angle between real/imaginary components. CONCLUSION: A non-Cartesian CS reconstruction approach that incorporates hyperpolarized 129Xe decay processes is presented. This approach enables improved image sharpness, contrast, and overall image quality, in addition to up to threefold undersampling. This contribution benefits all hyperpolarized gas MRI through improved accuracy and decreased scan durations.
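
As an illustration of what "incorporating decay in the forward model" can look like, the sketch below computes per-sample magnetization weights from RF depletion, T1 relaxation of the hyperpolarized signal, and T2* decay along each readout; such weights would then scale the sampling operator inside the CS solver. The specific decay model, parameter names, and values are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def decay_weights(n_excitations, samples_per_readout, flip_angle_deg,
                  t1_s, t2_star_s, dwell_time_s, tr_s):
    """Per-k-space-sample magnetization weights for a hyperpolarized acquisition:
    each RF pulse consumes magnetization (factor cos(alpha)), the remaining
    polarization relaxes with T1 between excitations, and the signal decays
    with T2* along each readout."""
    alpha = np.deg2rad(flip_angle_deg)
    readout_t = np.arange(samples_per_readout) * dwell_time_s
    weights = np.zeros((n_excitations, samples_per_readout))
    for k in range(n_excitations):
        # magnetization available at the k-th excitation
        m_k = (np.cos(alpha) ** k) * np.exp(-k * tr_s / t1_s)
        weights[k] = m_k * np.sin(alpha) * np.exp(-readout_t / t2_star_s)
    return weights

# Illustrative values (not from the study): 600 radial views, 64 samples each.
w = decay_weights(600, 64, flip_angle_deg=1.0, t1_s=20.0,
                  t2_star_s=0.002, dwell_time_s=1e-5, tr_s=0.005)
# In a decay-modeled forward operator, y = w * (F x) replaces y = F x.
```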

14.
Med Phys ; 2024 Jun 23.
Article in English | MEDLINE | ID: mdl-38922912

ABSTRACT

Cone-beam CT (CBCT) is the most commonly used onboard imaging technique for target localization in radiation therapy. Conventional 3D CBCT acquires x-ray cone-beam projections at multiple angles around the patient to reconstruct 3D images of the patient in the treatment room. However, despite its wide usage, 3D CBCT is limited in imaging disease sites affected by respiratory motions or other dynamic changes within the body, as it lacks time-resolved information. To overcome this limitation, 4D-CBCT was developed to incorporate a time dimension in the imaging to account for the patient's motion during the acquisitions. For example, respiration-correlated 4D-CBCT divides the breathing cycles into different phase bins and reconstructs 3D images for each phase bin, ultimately generating a complete set of 4D images. 4D-CBCT is valuable for localizing tumors in the thoracic and abdominal regions where the localization accuracy is affected by respiratory motions. This is especially important for hypofractionated stereotactic body radiation therapy (SBRT), which delivers much higher fractional doses in fewer fractions than conventional fractionated treatments. Nonetheless, 4D-CBCT does face certain limitations, including long scanning times, high imaging doses, and compromised image quality due to the necessity of acquiring sufficient x-ray projections for each respiratory phase. In order to address these challenges, numerous methods have been developed to achieve fast, low-dose, and high-quality 4D-CBCT. This paper aims to review the technical developments surrounding 4D-CBCT comprehensively. It will explore conventional algorithms and recent deep learning-based approaches, delving into their capabilities and limitations. Additionally, the paper will discuss the potential clinical applications of 4D-CBCT and outline a future roadmap, highlighting areas for further research and development. Through this exploration, the readers will better understand 4D-CBCT's capabilities and potential to enhance radiation therapy.

15.
Phys Med Biol ; 69(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38843809

ABSTRACT

Objective. Image reconstruction is a fundamental step in magnetic particle imaging (MPI). One of the main challenges is that the reconstructions are computationally intensive and time-consuming, so choosing an algorithm presents a compromise between accuracy and execution time, which depends on the application. This work proposes a method that provides both fast and accurate image reconstructions. Approach. Image reconstruction algorithms were implemented to be executed in parallel on graphics processing units (GPUs) using the CUDA framework. The calculation of the model-based MPI calibration matrix was also implemented on the GPU to allow both fast and flexible reconstructions. Main results. The parallel algorithms were able to accelerate the reconstructions by up to about 6,100 times in comparison to the serial Kaczmarz algorithm executed on the CPU, allowing for real-time applications. Reconstructions using the OpenMPIData dataset validated the proposed algorithms and demonstrated that they are able to provide both fast and accurate reconstructions. The calculation of the calibration matrix was accelerated by up to about 37 times. Significance. The parallel algorithms proposed in this work can provide single-frame MPI reconstructions in real time, with frame rates greater than 100 frames per second. The parallel calculation of the calibration matrix can be combined with the parallel reconstruction to deliver images in less time than the serial Kaczmarz reconstruction, potentially eliminating the need to store the calibration matrix in main memory and providing the flexibility to redefine scanning and reconstruction parameters during execution.
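
For reference, the serial (row-action) Kaczmarz baseline that the GPU implementations are benchmarked against can be sketched as below; the matrix shapes, the regularization term, and the non-negativity step are assumptions commonly seen in MPI system-matrix reconstruction, not this paper's exact code.

```python
import numpy as np

def kaczmarz(S, u, n_sweeps=10, lam=1e-3):
    """Regularized Kaczmarz solver for S c = u.
    S: system/calibration matrix (n_measurements x n_voxels), possibly complex.
    u: measured signal vector. Returns the voxel concentration estimate c."""
    n_rows, n_cols = S.shape
    c = np.zeros(n_cols, dtype=S.dtype)
    row_energy = np.sum(np.abs(S) ** 2, axis=1) + lam
    for _ in range(n_sweeps):
        for i in range(n_rows):           # one row update at a time (serial)
            residual = u[i] - S[i] @ c
            c = c + (residual / row_energy[i]) * np.conj(S[i])
        c = np.maximum(c.real, 0.0)       # enforce non-negative concentrations
    return c
```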


Subject(s)
Image Processing, Computer-Assisted , Image Processing, Computer-Assisted/methods , Algorithms , Computer Graphics , Time Factors , Molecular Imaging/methods , Calibration
16.
J Struct Biol ; 216(3): 108107, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38906499

ABSTRACT

Atomic force microscopy enables ultra-precise imaging of living cells. However, atomic force microscope imaging is a complex and time-consuming process. The obtained images of living cells usually have low resolution and are easily affected by noise, leading to unsatisfactory imaging quality and obstructing research and analysis based on cell images. Herein, an adaptive attention image reconstruction network based on a residual encoder-decoder was proposed; the combination of deep learning technology and atomic force microscope imaging supports high-quality cell image acquisition. Compared with other learning-based methods, the proposed network showed a higher peak signal-to-noise ratio, higher structural similarity, and better image reconstruction performance. In addition, the cell images reconstructed by each method were used for cell recognition, and the cell images reconstructed by the proposed network had the highest cell recognition rate. The proposed network provides insights into atomic force microscope-based imaging of living cells and cell image reconstruction, which is of great significance in biological and medical research.

17.
Abdom Radiol (NY) ; 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940910

ABSTRACT

PURPOSE: To evaluate the image quality of ultra-high-resolution CT (U-HRCT) images reconstructed using an improved deep-learning-reconstruction (DLR) method. Additionally, we assessed the utility of U-HRCT in visualizing gastric wall structure, detecting gastric cancer, and determining the depth of invasion. METHODS: Forty-six patients with resected gastric cancer who underwent preoperative contrast-enhanced U-HRCT were included. The image quality of U-HRCT reconstructed using three different methods (standard DLR [AiCE], improved DLR-AiCE-Body Sharp [improved AiCE-BS], and hybrid-IR [AIDR3D]) was compared. Visualization of the gastric wall's three-layered structure in four regions and the visibility of gastric cancers were compared between U-HRCT and conventional HRCT (C-HRCT). The diagnostic ability of U-HRCT with the improved AiCE-BS for determining the depth of invasion of gastric cancers was assessed using postoperative pathology specimens. RESULTS: The mean noise level of U-HRCT with the improved AiCE-BS was significantly lower than that of the other two methods (p < 0.001). The overall image quality scores of the improved AiCE-BS images were significantly higher (p < 0.001). U-HRCT demonstrated significantly better conspicuity scores for the three-layered structure of the gastric wall than C-HRCT in all regions (p < 0.001). In addition, U-HRCT was found to have superior visibility of gastric cancer in comparison to C-HRCT (p < 0.001). The correct diagnostic rates for determining the depth of invasion of gastric cancer using C-HRCT and U-HRCT were 80%. CONCLUSIONS: U-HRCT reconstructed with the improved AiCE-BS provides clearer visualization of the three-layered gastric wall structure than other reconstruction methods. It is also valuable for detecting gastric cancer and assessing the depth of invasion.

18.
Br J Radiol ; 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38917414

ABSTRACT

OBJECTIVES: To investigate the usefulness of super-resolution deep learning reconstruction (SR-DLR) with cardiac option for assessing image quality in patients with stent-assisted coil embolization, coil embolization, and flow-diverting stent placement, compared with other image reconstructions. METHODS: This single-center retrospective study included fifty patients (mean age, 59 years; range, 44-81 years; 13 men) who were treated with stent-assisted coil embolization, coil embolization, or flow-diverting stent placement between January and July 2023. The images were reconstructed using filtered back projection (FBP), hybrid iterative reconstruction (IR), and SR-DLR. The objective image analysis included image noise in Hounsfield units (HU), signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and full width at half maximum (FWHM). Subjectively, two radiologists evaluated the overall image quality for visualization of the flow-diverting stent, coil, and stent. RESULTS: The image noise in HU with SR-DLR was 6.99 ± 1.49, significantly lower than with FBP (12.32 ± 3.01) and hybrid IR (8.63 ± 2.12) (p < 0.001). Both the mean SNR and CNR were significantly higher with SR-DLR than with FBP and hybrid IR (p < 0.001 and p < 0.001). The FWHMs for the stent (p < 0.004), flow-diverting stent (p < 0.001), and coil (p < 0.001) were significantly lower with SR-DLR than with FBP and hybrid IR. The subjective visual scores were significantly higher for SR-DLR than for the other image reconstructions (p < 0.001). CONCLUSIONS: SR-DLR with cardiac option is useful for follow-up imaging of stent-assisted coil embolization and flow-diverting stent placement, offering lower image noise, higher SNR and CNR, superior subjective image quality, and less blooming artifact than other image reconstructions. ADVANCES IN KNOWLEDGE: SR-DLR with cardiac option allows better visualization of the peripheral and smaller cerebral arteries. SR-DLR with cardiac option can be beneficial for CT imaging of stent-assisted coil embolization and flow-diverting stent placement.

19.
J Imaging ; 10(6)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38921614

ABSTRACT

Recent advancements in computer vision, especially deep learning models, have shown considerable promise in tasks related to plant image object detection. However, the efficiency of these deep learning models heavily relies on input image quality, with low-resolution images significantly hindering model performance. Therefore, reconstructing high-quality images through specific techniques helps extract features from plant images, thus improving model performance. In this study, we explored the value of super-resolution technology for improving object detection model performance on plant images. First, we built a comprehensive dataset comprising 1030 high-resolution plant images, named the PlantSR dataset. Subsequently, we developed a super-resolution model using the PlantSR dataset and benchmarked it against several state-of-the-art models designed for general image super-resolution tasks. Our proposed model demonstrated superior performance on the PlantSR dataset, indicating its efficacy in enhancing the super-resolution of plant images. Furthermore, we explored the effect of super-resolution on two specific object detection tasks: apple counting and soybean seed counting. By incorporating super-resolution as a pre-processing step, we observed a significant reduction in mean absolute error. Specifically, with the YOLOv7 model employed for apple counting, the mean absolute error decreased from 13.085 to 5.71. Similarly, with the P2PNet-Soy model utilized for soybean seed counting, the mean absolute error decreased from 19.159 to 15.085. These findings underscore the substantial potential of super-resolution technology in improving the performance of object detection models for accurately detecting and counting specific plants from images. The source code and associated datasets related to this study are available on GitHub.

20.
Inverse Probl ; 40(8): 085002, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38933410

ABSTRACT

Supervised deep learning-based methods have inspired a new wave of image reconstruction methods that implicitly learn effective regularization strategies from a set of training data. While they hold potential for improving image quality, they have also raised concerns regarding their robustness. Instabilities can manifest when learned methods are applied to find approximate solutions to ill-posed image reconstruction problems for which a unique and stable inverse mapping does not exist, which is a typical use case. In this study, we investigate the performance of supervised deep learning-based image reconstruction in an alternate use case in which a stable inverse mapping is known to exist but is not yet analytically available in closed form. For such problems, a deep learning-based method can learn a stable approximation of the unknown inverse mapping that generalizes well to data that differ significantly from the training set. The learned approximation of the inverse mapping eliminates the need to employ an implicit (optimization-based) reconstruction method and can potentially yield insights into the unknown analytic inverse formula. The specific problem addressed is image reconstruction from a particular case of radially truncated circular Radon transform (CRT) data, referred to as 'half-time' measurement data. For the half-time image reconstruction problem, we develop and investigate a learned filtered backprojection method that employs a convolutional neural network to approximate the unknown filtering operation. We demonstrate that this method behaves stably and readily generalizes to data that differ significantly from training data. The developed method may find application to wave-based imaging modalities that include photoacoustic computed tomography.
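
A rough sketch of the learned filtered backprojection idea described above: a small 1D CNN acts along the detector axis of each projection in place of the analytic filter, and a fixed backprojection operator maps the filtered data to an image. The network depth, channel counts, and the assumed differentiable `backproject` callable are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class LearnedFilterFBP(nn.Module):
    """Learned filtered backprojection sketch: a 1D CNN replaces the analytic
    filtering of each projection, followed by a fixed backprojection operator.
    `backproject` is assumed to be a differentiable callable supplied by the user."""
    def __init__(self, backproject, channels=32, kernel_size=9):
        super().__init__()
        self.backproject = backproject
        pad = kernel_size // 2
        self.filter_cnn = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size, padding=pad),
        )

    def forward(self, sinogram):
        # sinogram: (batch, n_angles, n_detectors); filter each view independently
        b, n_angles, n_det = sinogram.shape
        views = sinogram.reshape(b * n_angles, 1, n_det)
        filtered = self.filter_cnn(views).reshape(b, n_angles, n_det)
        return self.backproject(filtered)
```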
