Results 1 - 4 of 4
1.
bioRxiv; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39229026

ABSTRACT

Chromatin-sensitive Partial Wave Spectroscopic (csPWS) microscopy offers non-invasive, spectroscopy-based access to the nanoscale mass-density distribution of cellular structures. This capability allows analysis of chromatin structure and organization and of the global transcriptional state of the cell nucleus, and thereby of their role in carcinogenesis. Accurate segmentation of the nuclei in csPWS microscopy images is an essential step in isolating them for further analysis. However, manual segmentation is error-prone, biased, time-consuming, and laborious, and often yields disrupted nuclear boundaries with partial or over-segmentation. Here, we present a deep-learning-driven approach, csPWS-seg, that automates accurate nuclei segmentation of label-free live-cell csPWS microscopy imaging data using a convolutional U-Net model with an attention mechanism. We leveraged the structural, physical, and biological differences between the cytoplasm, nucleus, and nuclear periphery to construct three distinct csPWS feature images for nucleus segmentation. Using these images of HCT116 cells, csPWS-seg achieved superior performance, with a median Intersection over Union (IoU) of 0.80 and a Dice Similarity Coefficient (DSC) of 0.88. csPWS-seg outperformed both the baseline U-Net model and another attention-based model, SE-U-Net, marking a significant improvement in segmentation accuracy. We further analyzed the performance of the proposed model with four loss functions: binary cross-entropy loss, focal loss, dice loss, and Jaccard loss; focal loss gave the best results. The automatic, accurate nuclei segmentation offered by csPWS-seg accelerates and streamlines csPWS data analysis and enhances the reliability of subsequent chromatin analysis, paving the way for more accurate diagnostics, treatment, and understanding of the cellular mechanisms of carcinogenesis.
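
A minimal Python/NumPy sketch of the evaluation metrics (IoU, DSC) and the binary focal loss named in this abstract. It is illustrative only, not the authors' csPWS-seg code; the gamma and alpha defaults are common choices rather than values reported in the paper.

import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over Union for binary masks (1 = nucleus, 0 = background)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return float(inter / (union + eps))

def dice(pred_mask: np.ndarray, true_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    return float(2.0 * inter / (pred.sum() + true.sum() + eps))

def binary_focal_loss(p: np.ndarray, y: np.ndarray,
                      gamma: float = 2.0, alpha: float = 0.25,
                      eps: float = 1e-7) -> float:
    """Mean binary focal loss; p = predicted foreground probability, y = {0,1} label.
    gamma/alpha defaults are assumed, not taken from the paper."""
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)            # probability assigned to the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)))

For a predicted and a ground-truth binary nucleus mask, iou(pred, gt) and dice(pred, gt) return the two scores reported above (median 0.80 and 0.88 for csPWS-seg).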

2.
J Biomed Opt; 29(6): 066501, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38799979

ABSTRACT

Significance: Spectroscopic single-molecule localization microscopy (sSMLM) combines nanoscopy and spectroscopy, enabling sub-10 nm resolution as well as simultaneous multicolor imaging of multi-labeled samples. Reconstruction of raw sSMLM data using deep learning is a promising approach for visualizing subcellular structures at the nanoscale. Aim: Develop a computational approach leveraging deep learning to reconstruct both label-free and fluorescence-labeled sSMLM imaging data. Approach: We developed a deep learning algorithm based on a two-network model, termed DsSMLM, to reconstruct sSMLM data. The effectiveness of DsSMLM was assessed on diverse samples, including label-free single-stranded DNA (ssDNA) fibers, fluorescence-labeled histone markers on COS-7 and U2OS cells, and a synthetic DNA origami nanoruler for simultaneous multicolor imaging. Results: For label-free imaging, a spatial resolution of 6.22 nm was achieved on ssDNA fiber; for fluorescence-labeled imaging, DsSMLM revealed the distribution of chromatin-rich and chromatin-poor regions defined by histone markers on the cell nucleus, and it also enabled simultaneous multicolor imaging of nanoruler samples, distinguishing two dyes labeling three emitting points separated by 40 nm. With DsSMLM, we observed enhanced spectral profiles, with 8.8% more localizations detected for single-color imaging and up to 5.05% more for simultaneous two-color imaging. Conclusions: We demonstrate the feasibility of deep learning-based reconstruction for sSMLM imaging, applicable to both label-free and fluorescence-labeled data. We anticipate our technique will be a valuable tool for high-quality super-resolution imaging, a deeper understanding of the photophysics of DNA molecules, and the investigation of multiple nanoscopic cellular structures and their interactions.
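
A hypothetical PyTorch sketch of what a "two-network" sSMLM reconstruction could look like: one small CNN for the spatial channel and one for the spectral channel of a raw frame, fused into a single output map. The layer counts, channel widths, and fusion step are assumptions for illustration and do not reproduce the published DsSMLM architecture.

import torch
import torch.nn as nn

def small_cnn(in_ch: int = 1, out_ch: int = 1, width: int = 32) -> nn.Sequential:
    """A tiny CNN used as a stand-in for each sub-network (architecture assumed)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, out_ch, 3, padding=1),
    )

class TwoNetworkSSMLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.spatial_net = small_cnn()    # processes the spatial (0th-order) image
        self.spectral_net = small_cnn()   # processes the spectral (dispersed) image
        self.fuse = nn.Conv2d(2, 1, 1)    # fuses both predictions into one reconstruction

    def forward(self, spatial_frame: torch.Tensor, spectral_frame: torch.Tensor) -> torch.Tensor:
        s = self.spatial_net(spatial_frame)
        w = self.spectral_net(spectral_frame)
        return self.fuse(torch.cat([s, w], dim=1))

# Example forward pass on random 64x64 frames (batch of 1, single channel).
model = TwoNetworkSSMLM()
out = model(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64))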


Subject(s)
Deep Learning; Single Molecule Imaging; Animals; Single Molecule Imaging/methods; Humans; Chlorocebus aethiops; COS Cells; Microscopy, Fluorescence/methods; Image Processing, Computer-Assisted/methods; DNA, Single-Stranded/chemistry; DNA, Single-Stranded/analysis; Algorithms; Histones/chemistry; Histones/analysis
3.
J Biomed Opt; 26(2), 2021 Feb.
Article in English | MEDLINE | ID: mdl-33641269

ABSTRACT

SIGNIFICANCE: Single-molecule localization-based super-resolution microscopy has enabled imaging of microscopic objects beyond the diffraction limit. However, the technique requires imaging an extremely large number of frames of a biological sample to generate a single super-resolution image, which lengthens acquisition time. Processing such a large image sequence also lengthens data processing time. Accelerating image acquisition and processing in single-molecule localization microscopy (SMLM) has therefore been of perennial interest. AIM: To accelerate three-dimensional (3D) SMLM imaging with a computational approach, without compromising resolution. APPROACH: We used blind sparse inpainting to reconstruct high-density 3D images from low-density ones. The low-density images are generated using far fewer frames than usually needed, shortening both acquisition and processing time. Our technique therefore accelerates 3D SMLM without changes to the standard SMLM hardware or labeling protocol. RESULTS: The performance of blind sparse inpainting was evaluated on both simulated and experimental datasets. It achieved superior reconstructions of 3D SMLM images using up to 10-fold fewer frames in simulation and up to 50-fold fewer frames in experimental data. CONCLUSIONS: We demonstrate the feasibility of fast 3D SMLM imaging using a computational approach to reduce the number of acquired frames. We anticipate our technique will enable future real-time live-cell 3D imaging to investigate complex nanoscopic biological structures and their functions.
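
An illustrative Python sketch of sparsity-based inpainting in the spirit described here: iterative soft-thresholding of DCT coefficients with a data-consistency step on the observed (low-density) voxels. The abstract does not specify the transform, threshold schedule, or iteration count of the authors' blind sparse inpainting algorithm, so everything below is an assumption.

import numpy as np
from scipy.fft import dctn, idctn

def sparse_inpaint(low_density: np.ndarray, mask: np.ndarray,
                   n_iter: int = 100, lam: float = 0.05) -> np.ndarray:
    """Fill unobserved voxels (mask == 0), assuming the volume is sparse in the DCT domain."""
    x = low_density.copy()
    for _ in range(n_iter):
        coeffs = dctn(x, norm="ortho")
        coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)  # soft-threshold
        x = idctn(coeffs, norm="ortho")
        x[mask > 0] = low_density[mask > 0]  # keep observed localization densities fixed
    return x

# Toy usage: a 3D localization-density volume observed on ~10% of voxels.
rng = np.random.default_rng(0)
volume = rng.random((32, 32, 8))
mask = (rng.random(volume.shape) < 0.1).astype(float)
recon = sparse_inpaint(volume * mask, mask)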


Subject(s)
Microscopy; Single Molecule Imaging; Computer Simulation; Imaging, Three-Dimensional
4.
Quant Imaging Med Surg; 10(9): 1748-1762, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32879854

ABSTRACT

BACKGROUND: MRI acceleration using deep learning (DL) convolutional neural networks (CNNs) is a novel technique with great promise. Increasing the number of convolutional layers may allow for more accurate image reconstruction. Studies evaluating the diagnostic interchangeability of DL-reconstructed knee magnetic resonance (MR) images are scarce. The purpose of this study was to develop a deep CNN (DCNN) with an optimal number of layers for accelerating knee magnetic resonance imaging (MRI) acquisition 6-fold, and to test the diagnostic interchangeability and image quality of nonaccelerated images versus images reconstructed with a 15-layer DCNN or a 3-layer CNN. METHODS: For the feasibility portion of this study, 10 patients were randomly selected from the Osteoarthritis Initiative (OAI) cohort; for the interchangeability portion, 40 patients were randomly selected from the same cohort. Three readers assessed meniscal and anterior cruciate ligament (ACL) tears and cartilage defects using DCNN, CNN, and nonaccelerated images. Image quality was subjectively graded as nondiagnostic, poor, acceptable, or excellent. Interchangeability was tested by comparing the frequency of agreement when readers used both accelerated and nonaccelerated images with the frequency of agreement when readers used only nonaccelerated images. A noninferiority margin of 0.10 was used to ensure a type I error ≤5% and power ≥80%. A logistic regression model using generalized estimating equations was used to compare proportions, and 95% confidence intervals (CIs) were constructed. RESULTS: DCNN and CNN images were interchangeable with nonaccelerated images for all structures, with excess disagreement values ranging from -2.5% [95% CI: (-6.1, 1.1)] to 3.0% [95% CI: (-0.1, 6.1)]. The quality of DCNN images was graded higher than that of CNN images but lower than that of nonaccelerated images [excellent/acceptable quality: DCNN, 95% of cases (114/120); CNN, 60% (72/120); nonaccelerated, 97.5% (117/120)]. CONCLUSIONS: Six-fold accelerated knee images reconstructed with a DL technique are diagnostically interchangeable with nonaccelerated images and have acceptable image quality when a 15-layer CNN is used.
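
A simplified worked example of the interchangeability criterion described above: excess disagreement (disagreement rate when one reading uses an accelerated image minus the disagreement rate between two nonaccelerated readings) with a plain Wald 95% confidence interval checked against the 0.10 noninferiority margin. The study used a GEE-based logistic regression model; the independence assumption and the counts below are illustrative only.

import math

def excess_disagreement_ci(dis_mixed: int, n_mixed: int,
                           dis_nonacc: int, n_nonacc: int,
                           z: float = 1.96):
    """Excess disagreement and Wald 95% CI (simplified; ignores within-reader correlation)."""
    p1 = dis_mixed / n_mixed      # disagreement rate, accelerated vs nonaccelerated readings
    p0 = dis_nonacc / n_nonacc    # disagreement rate, nonaccelerated vs nonaccelerated readings
    diff = p1 - p0
    se = math.sqrt(p1 * (1 - p1) / n_mixed + p0 * (1 - p0) / n_nonacc)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical counts (not from the paper): 9/120 vs 6/120 disagreements.
diff, (lo, hi) = excess_disagreement_ci(dis_mixed=9, n_mixed=120, dis_nonacc=6, n_nonacc=120)
interchangeable = hi < 0.10   # noninferiority margin of 0.10 on excess disagreement
print(f"excess disagreement = {diff:+.1%}, 95% CI ({lo:+.1%}, {hi:+.1%}), pass: {interchangeable}")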
