Results 1 - 20 of 215
1.
Nat Commun ; 15(1): 4180, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38755148

ABSTRACT

Computational super-resolution methods, including conventional analytical algorithms and deep learning models, have substantially improved optical microscopy. Among them, supervised deep neural networks have demonstrated outstanding performance; however, they demand abundant high-quality training data, which are laborious and even impractical to acquire owing to the high dynamics of living cells. Here, we develop zero-shot deconvolution networks (ZS-DeconvNet) that instantly enhance the resolution of microscope images by more than 1.5-fold over the diffraction limit with 10-fold lower fluorescence than ordinary super-resolution imaging conditions, in an unsupervised manner and without the need for either ground truths or additional data acquisition. We demonstrate the versatile applicability of ZS-DeconvNet on multiple imaging modalities, including total internal reflection fluorescence microscopy, three-dimensional wide-field microscopy, confocal microscopy, two-photon microscopy, lattice light-sheet microscopy, and multimodal structured illumination microscopy, which enables multi-color, long-term, super-resolution 2D/3D imaging of subcellular bioprocesses from mitotic single cells to multicellular embryos of mouse and C. elegans.
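No code accompanies these abstracts. As a minimal point of reference for the deconvolution problem that ZS-DeconvNet addresses, the classical Richardson-Lucy iteration can be sketched in a few lines; this is not the paper's network, and the 1D signal, PSF, and iteration count below are illustrative assumptions.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=100, eps=1e-12):
    """Classical Richardson-Lucy deconvolution (1D, for illustration only)."""
    estimate = np.full_like(observed, observed.mean())  # flat initialization
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)              # data-consistency ratio
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a sparse "bead" signal with a Gaussian-like PSF, then restore it.
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(32)
truth[10] = 1.0
truth[20] = 0.5
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(observed, psf)
```

With a known PSF and noiseless data, the iteration sharpens the blurred peaks back toward the original spikes; the appeal of learned approaches such as ZS-DeconvNet is doing this robustly on noisy, low-light data.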


Subject(s)
Caenorhabditis elegans; Microscopy, Fluorescence; Animals; Caenorhabditis elegans/embryology; Microscopy, Fluorescence/methods; Mice; Imaging, Three-Dimensional/methods; Algorithms; Image Processing, Computer-Assisted/methods; Deep Learning
2.
Nat Biotechnol ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802562

ABSTRACT

Long-term observation of subcellular dynamics in living organisms is limited by background fluorescence originating from tissue scattering or dense labeling. Existing confocal approaches face an inevitable tradeoff among parallelization, resolution, and phototoxicity. Here we present confocal scanning light-field microscopy (csLFM), which integrates axially elongated line-confocal illumination with the rolling shutter in scanning light-field microscopy (sLFM). csLFM enables high-fidelity, high-speed, three-dimensional (3D) imaging at near-diffraction-limit resolution with both optical sectioning and low phototoxicity. By simultaneous 3D excitation and detection, the excitation intensity can be reduced below 1 mW mm⁻², with a 15-fold higher signal-to-background ratio than sLFM. We imaged subcellular dynamics over 25,000 timeframes in optically challenging environments in different species, such as migrasome delivery in mouse spleen, retractosome generation in mouse liver, and 3D voltage imaging in Drosophila. Moreover, csLFM facilitates high-fidelity, large-scale neural recording with reduced crosstalk, leading to high orientation selectivity to visual stimuli, similar to two-photon microscopy, which aids understanding of neural coding mechanisms.

3.
Science ; 384(6692): 202-209, 2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38603505

ABSTRACT

The pursuit of artificial general intelligence (AGI) continuously demands higher computing performance. Despite the superior processing speed and efficiency of integrated photonic circuits, their capacity and scalability are restricted by unavoidable errors, such that only simple tasks and shallow models have been realized. To support modern AGIs, we designed Taichi, large-scale photonic chiplets based on an integrated diffractive-interference hybrid design and a general distributed computing architecture, offering millions-of-neurons capability with 160 tera-operations per second per watt (TOPS/W) energy efficiency. Taichi experimentally achieved on-chip 1000-category-level classification (testing at 91.89% accuracy on the 1,623-category Omniglot dataset) and high-fidelity artificial-intelligence-generated content with up to two orders of magnitude improvement in efficiency. Taichi paves the way for large-scale photonic computing and advanced tasks, further exploiting the flexibility and potential of photonics for modern AGI.

4.
Sci Rep ; 14(1): 6802, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514718

ABSTRACT

Event cameras or dynamic vision sensors (DVS) record asynchronous response to brightness changes instead of conventional intensity frames, and feature ultra-high sensitivity at low bandwidth. The new mechanism demonstrates great advantages in challenging scenarios with fast motion and large dynamic range. However, the recorded events might be highly sparse due to either limited hardware bandwidth or extreme photon starvation in harsh environments. To unlock the full potential of event cameras, we propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form. Specifically, we treat event streams as 3D event clouds in the spatiotemporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to maintain the temporal resolution of raw data successfully. To validate the effectiveness of our method comprehensively, we perform extensive experiments on three widely used public datasets with different spatial resolutions, and additionally collect a novel event dataset covering diverse scenarios with highly dynamic motions and under harsh illumination. Besides generating high-quality dense events, our method can benefit downstream applications such as object classification and intensity frame reconstruction.
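The "events as 3D point clouds" framing above can be made concrete with a small sketch: each asynchronous event (x, y, t, polarity) becomes a point in a normalized spatiotemporal volume. The function name, sensor size, and normalization choices are assumptions for illustration, not the authors' code.

```python
import numpy as np

def events_to_cloud(events, width, height, t_window):
    """Map asynchronous events (x, y, t, polarity) to a normalized 3D
    spatiotemporal point cloud, in the spirit of point-cloud-style
    event processing (polarity is dropped here for simplicity)."""
    ev = np.asarray(events, dtype=float)
    cloud = np.empty((len(ev), 3))
    cloud[:, 0] = ev[:, 0] / (width - 1)                  # x -> [0, 1]
    cloud[:, 1] = ev[:, 1] / (height - 1)                 # y -> [0, 1]
    cloud[:, 2] = (ev[:, 2] - ev[:, 2].min()) / t_window  # relative time
    return cloud

# Three events from a hypothetical 120x60 sensor over a 1000-us window.
events = [(0, 0, 1000, 1), (119, 59, 1500, -1), (60, 30, 2000, 1)]
cloud = events_to_cloud(events, width=120, height=60, t_window=1000)
```

Once events live in this continuous 3D space, generic point-cloud machinery (here, a diffusion-based generative model) can densify them, and the third coordinate can be mapped back to exact timestamps.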

5.
Nat Commun ; 15(1): 1498, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38374085

ABSTRACT

Multimode fiber (MMF), which supports parallel transmission of spatially distributed information, is a promising platform for remote imaging and capacity-enhanced optical communication. However, the variability of the scattering MMF channel poses a challenge for long-term accurate transmission over long distances, under which both static optical propagation models with a calibrated transmission matrix and data-driven learning inevitably degenerate. In this paper, we present a self-supervised dynamic learning approach that achieves long-term, high-fidelity transmission of arbitrary optical fields through unstabilized MMFs. Multiple networks carrying both long- and short-term memory of the propagation model variations are adaptively updated and ensembled to achieve robust image recovery. We demonstrate >99.9% accuracy in the transmission of 1024 spatial degrees of freedom over 1-km-long MMFs lasting over 1000 seconds. The long-term high-fidelity capability enables compressive encoded transfer of high-resolution video with orders-of-magnitude throughput enhancement, offering insights for artificial-intelligence-promoted diffusive spatial transmission in practical applications.
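As a loose illustration of maintaining both long- and short-term memory of a drifting channel, here is a toy scalar sketch using fast and slow exponential moving averages that are then ensembled. The paper uses ensembles of neural networks over a full transmission model, so everything below (function name, rates, averaging rule, scalar gain) is a simplifying assumption.

```python
import numpy as np

def track_channel(observations, fast=0.5, slow=0.05):
    """Track a drifting scalar channel gain with two models: a fast EMA
    (short-term memory, reacts quickly) and a slow EMA (long-term memory,
    smooths transients), ensembled by simple averaging."""
    f = s = observations[0]
    fused = []
    for obs in observations[1:]:
        f = (1 - fast) * f + fast * obs   # short-term model update
        s = (1 - slow) * s + slow * obs   # long-term model update
        fused.append(0.5 * (f + s))       # ensemble of the two models
    return np.array(fused)

# A channel that is stable, then drifts abruptly mid-stream.
obs = np.concatenate([np.full(50, 1.0), np.full(50, 2.0)])
est = track_channel(obs)
```

The fast model adapts within a few steps of the drift while the slow model resists noise; combining them is the simplest version of the robustness-versus-adaptivity tradeoff the abstract describes.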

6.
IEEE Trans Pattern Anal Mach Intell ; 46(4): 2206-2223, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37966934

ABSTRACT

The traditional 3D object retrieval (3DOR) task is under the close-set setting, which assumes the categories of objects in the retrieval stage are all seen in the training stage. Existing methods under this setting may tend to only lazily discriminate their categories, while not learning a generalized 3D object embedding. Under such circumstances, it is still a challenging and open problem in real-world applications due to the existence of various unseen categories. In this paper, we first introduce the open-set 3DOR task to expand the applications of the traditional 3DOR task. Then, we propose the Hypergraph-Based Multi-Modal Representation (HGM²R) framework to learn 3D object embeddings from multi-modal representations under the open-set setting. The proposed framework is composed of two modules, i.e., the Multi-Modal 3D Object Embedding (MM3DOE) module and the Structure-Aware and Invariant Knowledge Learning (SAIKL) module. By utilizing the collaborative information of modalities derived from the same 3D object, the MM3DOE module is able to overcome the distinction across different modality representations and generate unified 3D object embeddings. Then, the SAIKL module utilizes the constructed hypergraph structure to model the high-order correlation among 3D objects from both seen and unseen categories. The SAIKL module also includes a memory bank that stores typical representations of 3D objects. By aligning with those memory anchors in the memory bank, the aligned embeddings can integrate the invariant knowledge to exhibit a powerful generalized capacity toward unseen categories. We formally prove that hypergraph modeling has better representative capability on data correlation than graph modeling. We generate four multi-modal datasets for the open-set 3DOR task, i.e., OS-ESB-core, OS-NTU-core, OS-MN40-core, and OS-ABO-core, in which each 3D object contains three modality representations: multi-view images, point clouds, and voxels.
Experiments on these four datasets show that the proposed method can significantly outperform existing methods. In particular, the proposed method outperforms the state-of-the-art by 12.12%/12.88% in terms of mAP on the OS-MN40-core/OS-ABO-core dataset, respectively. Results and visualizations demonstrate that the proposed method can effectively extract the generalized 3D object embeddings on the open-set 3DOR task and achieve satisfactory performance.
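The claim that hypergraphs capture higher-order correlation can be illustrated with the standard incidence-matrix construction from hypergraph learning (in the style of Zhou et al.'s formulation). This sketch is not the HGM²R code; the toy hypergraph and identity edge weights below are assumptions.

```python
import numpy as np

# Toy hypergraph: 4 objects, 2 hyperedges.
# e0 links objects {0, 1, 2}: a genuine 3-way correlation that a plain
# graph could only approximate with pairwise edges. e1 links {2, 3}.
H = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)  # incidence matrix: vertices x hyperedges

Dv = H.sum(axis=1)  # vertex degrees (how many hyperedges touch each object)
De = H.sum(axis=0)  # hyperedge sizes (how many objects each edge links)

# Normalized hypergraph adjacency Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
# with identity edge weights W; used to propagate embeddings over the
# high-order structure.
A = H / np.sqrt(Dv)[:, None]
theta = A @ np.diag(1.0 / De) @ A.T
```

A message-passing step `theta @ X` mixes each object's embedding with all co-members of its hyperedges at once, which is what lets the SAIKL-style module relate seen and unseen categories through shared structure.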

7.
Neural Netw ; 170: 227-241, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37992510

ABSTRACT

Fluorescence microscopes are indispensable tools for the life science research community. Nevertheless, the presence of optical component limitations, coupled with the maximum photon budget that the specimen can tolerate, inevitably leads to a decline in imaging quality and a lack of useful signals. Therefore, image restoration becomes essential for ensuring high-quality and accurate analyses. This paper presents the Wavelet-Enhanced Convolutional-Transformer (WECT), a novel deep learning technique developed specifically for the purpose of reducing noise in microscopy images and attaining super-resolution. Unlike traditional approaches, WECT integrates wavelet transform and inverse-transform for multi-resolution image decomposition and reconstruction, resulting in an expanded receptive field for the network without compromising information integrity. Subsequently, multiple consecutive parallel CNN-Transformer modules are utilized to collaboratively model local and global dependencies, thus facilitating the extraction of more comprehensive and diversified deep features. In addition, the incorporation of generative adversarial networks (GANs) into WECT enhances its capacity to generate high perceptual quality microscopic images. Extensive experiments have demonstrated that the WECT framework outperforms current state-of-the-art restoration methods on real fluorescence microscopy data under various imaging modalities and conditions, in terms of quantitative and qualitative analysis.
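The statement that wavelet decomposition enlarges the receptive field "without compromising information integrity" rests on the transform being exactly invertible. A minimal single-level Haar transform (not necessarily WECT's actual wavelet choice; the test signal is an assumption) demonstrates the lossless round trip.

```python
import numpy as np

def haar_decompose(x):
    """One level of the Haar wavelet transform: low-pass (approximation)
    and high-pass (detail) coefficients, each at half resolution."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

def haar_reconstruct(approx, detail):
    """Exactly invert haar_decompose: no information is lost."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

signal = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
approx, detail = haar_decompose(signal)   # half-resolution sub-bands
restored = haar_reconstruct(approx, detail)
```

A network operating on the half-resolution sub-bands sees twice the spatial extent per convolution, yet the inverse transform guarantees the original image is fully recoverable, which is the design point the abstract makes.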


Subject(s)
Photons; Wavelet Analysis; Microscopy, Fluorescence; Image Processing, Computer-Assisted
8.
Nat Biomed Eng ; 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38057428

ABSTRACT

Fluorescence microscopy allows for the high-throughput imaging of cellular activity across brain areas in mammals. However, capturing rapid cellular dynamics across the curved cortical surface is challenging, owing to trade-offs in image resolution, speed, field of view and depth of field. Here we report a technique for wide-field fluorescence imaging that leverages selective illumination and the integration of focal areas at different depths via a spinning disc with varying thickness to enable video-rate imaging of previously reconstructed centimetre-scale arbitrarily shaped surfaces at micrometre-scale resolution and at a depth of field of millimetres. By implementing the technique in a microscope capable of acquiring images at 1.68 billion pixels per second and resolving 16.8 billion voxels per second, we recorded neural activities and the trajectories of neutrophils in real time on curved cortical surfaces in live mice. The technique can be integrated into many microscopes and macroscopes, in both reflective and fluorescence modes, for the study of multiscale cellular interactions on arbitrarily shaped surfaces.

9.
NPJ Digit Med ; 6(1): 204, 2023 Nov 04.
Article in English | MEDLINE | ID: mdl-37925578

ABSTRACT

Big data serves as the cornerstone for constructing real-world deep learning systems across various domains. In medicine and healthcare, a single clinical site lacks sufficient data, thus necessitating the involvement of multiple sites. Unfortunately, concerns regarding data security and privacy hinder the sharing and reuse of data across sites. Existing approaches to multi-site clinical learning heavily depend on the security of the network firewall and system implementation. To address this issue, we propose Relay Learning, a secure deep-learning framework that physically isolates clinical data from external intruders while still leveraging the benefits of multi-site big data. We demonstrate the efficacy of Relay Learning in three medical tasks involving different diseases and anatomical structures: structural segmentation of the retinal fundus, mediastinal tumor diagnosis, and brain midline localization. We evaluate Relay Learning by comparing its performance to alternative solutions through multi-site validation and external validation. Incorporating a total of 41,038 medical images from 21 medical hosts, including 7 external hosts, with non-uniform distributions, we observe significant performance improvements with Relay Learning across all three tasks. Specifically, it achieves an average performance increase of 44.4%, 24.2%, and 36.7% for retinal fundus segmentation, mediastinal tumor diagnosis, and brain midline localization, respectively. Remarkably, Relay Learning even outperforms central learning on external test sets. Meanwhile, Relay Learning keeps data sovereignty locally without cross-site network connections. We anticipate that Relay Learning will revolutionize clinical multi-site collaboration and reshape the landscape of healthcare in the future.

10.
Nat Methods ; 20(12): 1957-1970, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37957429

ABSTRACT

Fluorescence microscopy has become an indispensable tool for revealing the dynamic regulation of cells and organelles. However, stochastic noise inherently restricts optical interrogation quality and exacerbates observation fidelity when balancing the joint demands of high frame rate, long-term recording and low phototoxicity. Here we propose DeepSeMi, a self-supervised-learning-based denoising framework capable of increasing signal-to-noise ratio by over 12 dB across various conditions. With the introduction of newly designed eccentric blind-spot convolution filters, DeepSeMi effectively denoises images with no loss of spatiotemporal resolution. In combination with confocal microscopy, DeepSeMi allows for recording organelle interactions in four colors at high frame rates across tens of thousands of frames, monitoring migrasomes and retractosomes over a half day, and imaging ultra-phototoxicity-sensitive Dictyostelium cells over thousands of frames. Through comprehensive validations across various samples and instruments, we prove DeepSeMi to be a versatile and biocompatible tool for breaking the shot-noise limit.
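The blind-spot convolution idea behind self-supervised denoising can be sketched directly: the kernel's receptive field excludes the pixel being predicted, so the noisy value at that pixel can never be copied into its own estimate. This is a schematic of the principle only; DeepSeMi's eccentric filters and full network are more elaborate, and the kernel size and "hot pixel" test below are assumptions.

```python
import numpy as np

def blind_spot_kernel(size=3, spot=(1, 1)):
    """An averaging kernel whose receptive field excludes one pixel (the
    'blind spot'). spot=(1, 1) places it at the center; an off-center
    spot mimics the eccentric variant."""
    k = np.ones((size, size))
    k[spot] = 0.0
    return k / k.sum()

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D convolution (loops, for clarity over speed)."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

img = np.zeros((5, 5))
img[2, 2] = 9.0  # a lone "hot pixel" of noise
denoised = conv2d_valid(img, blind_spot_kernel())
```

The prediction for the hot pixel itself comes out as 0.0 (its neighbors' average) because the kernel cannot see it; this is what lets such networks train on noisy data alone without simply learning the identity.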


Subject(s)
Dictyostelium; Image Enhancement; Microscopy, Confocal/methods; Signal-To-Noise Ratio; Microscopy, Fluorescence; Image Processing, Computer-Assisted/methods
11.
Cell Rep ; 42(10): 113313, 2023 10 31.
Article in English | MEDLINE | ID: mdl-37858461

ABSTRACT

This study investigates the impact of stress on Alzheimer's disease (AD) using male APP/PS1 transgenic mice. Negative stressors (chronic social defeat, restraint) and positive hedonia (environmental enrichment, EE) were applied. Stress worsens AD pathology, whereas EE slows progression. Brain RNA sequencing reveals interleukin-6 (IL-6) and IL-10 as key stress-related AD regulators. Flow cytometry shows that the CD8+/CD4+ T cell ratio shifts in response to stress exposure and EE: stress exposure increases the CD8+/CD4+ ratio, while EE has the opposite effect. Both depletion and enrichment of CD8+ T cells accelerate AD, indicating a negative impact of immune intervention. Stress management and balanced immunity may aid AD therapy, highlighting potential novel treatments.


Subject(s)
Alzheimer Disease; Mice; Animals; Male; Alzheimer Disease/pathology; CD8-Positive T-Lymphocytes/metabolism; Mice, Transgenic; Brain/metabolism; Interleukin-6; Disease Models, Animal; Amyloid beta-Protein Precursor/metabolism; Amyloid beta-Peptides/metabolism
12.
Nature ; 623(7985): 48-57, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37880362

ABSTRACT

Photonic computing enables faster and more energy-efficient processing of vision data. However, the experimental superiority of deployable systems remains a challenge because of complicated optical nonlinearities, the considerable power consumption of analog-to-digital converters (ADCs) for downstream digital processing, and vulnerability to noise and system errors. Here we propose an all-analog chip combining electronic and light computing (ACCEL). It has a systemic energy efficiency of 74.8 peta-operations per second per watt and a computing speed of 4.6 peta-operations per second (more than 99% implemented by optics), more than three and one orders of magnitude higher, respectively, than state-of-the-art computing processors. After applying diffractive optical computing as an optical encoder for feature extraction, the light-induced photocurrents are directly used for further calculation in an integrated analog computing chip without the requirement of analog-to-digital converters, leading to a low computing latency of 72 ns for each frame. With joint optimizations of optoelectronic computing and adaptive training, ACCEL experimentally achieves competitive classification accuracies of 85.5%, 82.0%, and 92.6%, respectively, for Fashion-MNIST, 3-class ImageNet classification, and a time-lapse video recognition task, while showing superior system robustness in low-light conditions (0.14 fJ µm⁻² per frame). ACCEL can be used across a broad range of applications such as wearable devices, autonomous driving, and industrial inspection.
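The pipeline described above (a fixed diffractive optical encoder whose photocurrents feed analog computation, with no ADC in between) can be caricatured numerically. The FFT as a stand-in for free-space diffraction, the random phase mask, and the sizes below are all assumptions for illustration, not the chip's actual physics.

```python
import numpy as np

rng = np.random.default_rng(0)

def optical_encoder(x, phase):
    """Toy diffractive encoder: a phase mask modulates the input field,
    an FFT stands in for free-space propagation, and a photodetector
    measures intensity. Only these non-negative analog photocurrents
    are available downstream (no digitization step)."""
    field = np.exp(1j * phase) * x            # phase-modulated optical field
    return np.abs(np.fft.fft(field)) ** 2     # detected intensities (>= 0)

phase = rng.uniform(0.0, 2.0 * np.pi, 16)     # fixed "optical" weights
x = rng.uniform(0.0, 1.0, 16)                 # flattened input image
features = optical_encoder(x, phase)          # analog feature vector
```

The key architectural point the sketch captures is that detection squares away the phase: downstream electronics see only non-negative intensities, so training must co-design the optical mask and the analog readout, as the abstract's "joint optimizations" describe.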

13.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14081-14097, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37527291

ABSTRACT

Recent years have witnessed remarkable achievements in video-based action recognition. Apart from traditional frame-based cameras, event cameras are bio-inspired vision sensors that only record pixel-wise brightness changes rather than the brightness value. However, little effort has been made in event-based action recognition, and large-scale public datasets are also nearly unavailable. In this paper, we propose an event-based action recognition framework called EV-ACT. The Learnable Multi-Fused Representation (LMFR) is first proposed to integrate multiple event information in a learnable manner. The LMFR with dual temporal granularity is fed into the event-based slow-fast network for the fusion of appearance and motion features. A spatial-temporal attention mechanism is introduced to further enhance the learning capability of action recognition. To prompt research in this direction, we have collected the largest event-based action recognition benchmark named THUE-ACT-50 and the accompanying THUE-ACT-50-CHL dataset under challenging environments, including a total of over 12,830 recordings from 50 action categories, which is over 4 times the size of the previous largest dataset. Experimental results show that our proposed framework could achieve improvements of over 14.5%, 7.6%, 11.2%, and 7.4% compared to previous works on four benchmarks. We have also deployed our proposed EV-ACT framework on a mobile platform to validate its practicality and efficiency.

14.
Cell Rep Med ; 4(9): 101164, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37652014

ABSTRACT

Deep learning has yielded promising results for medical image diagnosis but relies heavily on manual image annotations, which are expensive to acquire. We present Cross-DL, a cross-modality learning framework for intracranial abnormality detection and localization in head computed tomography (CT) scans by learning from free-text imaging reports. Cross-DL has a discretizer that automatically extracts discrete labels of abnormality types and locations from reports, which are utilized to train an image analyzer by a dynamic multi-instance learning approach. Benefiting from the low annotation cost and a consequent large-scale training set of 28,472 CT scans, Cross-DL achieves accurate performance, with an average area under the receiver operating characteristic curve (AUROC) of 0.956 (95% confidence interval: 0.952-0.959) in detecting 4 abnormality types in 17 regions while accurately localizing abnormalities at the voxel level. An intracranial hemorrhage classification experiment on the external dataset CQ500 achieves an AUROC of 0.928 (0.905-0.951). The model can also help review prioritization.


Subject(s)
Tomography, X-Ray Computed; Area Under Curve; ROC Curve
15.
Nat Commun ; 14(1): 5043, 2023 Aug 19.
Article in English | MEDLINE | ID: mdl-37598234

ABSTRACT

Multi-spectral imaging is a fundamental tool characterizing the constituent energy of scene radiation. However, current multi-spectral video cameras cannot scale up beyond megapixel resolution due to optical constraints and the complexity of the reconstruction algorithms. To circumvent these issues, we propose a tens-of-megapixel handheld multi-spectral videography approach (THETA), with a proof-of-concept camera achieving 65-megapixel videography of 12 wavebands within the visible light range. The high performance is enabled by multiple designs: we propose an imaging scheme to fabricate a thin mask for encoding spatio-spectral data using a conventional film camera; a fiber optic plate is then introduced for building a compact prototype supporting pixel-wise encoding with a large space-bandwidth product; finally, a deep-network-based algorithm is adopted for large-scale multi-spectral data decoding, with the coding pattern specially designed to facilitate efficient coarse-to-fine model training. Experimentally, we demonstrate THETA's advantages and wide applicability in outdoor imaging of large macroscopic scenes.

16.
Light Sci Appl ; 12(1): 172, 2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37433801

ABSTRACT

Structured illumination microscopy (SIM) has become the standard for next-generation wide-field microscopy, offering ultrahigh imaging speed, superresolution, a large field-of-view, and long-term imaging. Over the past decade, SIM hardware and software have flourished, leading to successful applications in various biological questions. However, unlocking the full potential of SIM system hardware requires the development of advanced reconstruction algorithms. Here, we introduce the basic theory of two SIM algorithms, namely, optical sectioning SIM (OS-SIM) and superresolution SIM (SR-SIM), and summarize their implementation modalities. We then provide a brief overview of existing OS-SIM processing algorithms and review the development of SR-SIM reconstruction algorithms, focusing primarily on 2D-SIM, 3D-SIM, and blind-SIM. To showcase the state-of-the-art development of SIM systems and assist users in selecting a commercial SIM system for a specific application, we compare the features of representative off-the-shelf SIM systems. Finally, we provide perspectives on the potential future developments of SIM.

17.
Nat Commun ; 14(1): 4118, 2023 Jul 11.
Article in English | MEDLINE | ID: mdl-37433856

ABSTRACT

The optical microscope is customarily an instrument of substantial size and expense but limited performance. Here we report an integrated microscope that achieves optical performance beyond that of a commercial microscope with a 5×, NA 0.1 objective, yet occupies only 0.15 cm³ and weighs 0.5 g, five orders of magnitude smaller than a conventional microscope. To achieve this, a progressive optimization pipeline is proposed that systematically optimizes both aspherical lenses and diffractive optical elements with an over 30-fold memory reduction compared to end-to-end optimization. By designing a simulation-supervised deep neural network for spatially varying deconvolution during optical design, we accomplish an over 10-fold improvement in depth of field compared to traditional microscopes, with strong generalization across a wide variety of samples. To show its unique advantages, the integrated microscope is fitted into a cell phone without any accessories for the application of portable diagnostics. We believe our method provides a new framework for the design of miniaturized high-performance imaging systems by integrating aspherical optics, computational optics, and deep learning.

18.
Nat Methods ; 20(7): 958-961, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37433996
19.
Nucleic Acids Res ; 51(16): 8348-8366, 2023 09 08.
Article in English | MEDLINE | ID: mdl-37439331

ABSTRACT

Genomic and transcriptomic image data, represented by DNA and RNA fluorescence in situ hybridization (FISH), respectively, together with proteomic data, particularly that related to nuclear proteins, can help elucidate gene regulation in relation to the spatial positions of chromatins, messenger RNAs, and key proteins. However, methods for image-based multi-omics data collection and analysis are lacking. To this end, we aimed to develop the first integrative browser, called iSMOD (image-based Single-cell Multi-omics Database), to collect and browse comprehensive FISH and nucleus proteomics data based on the title, abstract, and related experimental figures, integrating multi-omics studies focusing on the key players in the cell nucleus from 20,000+ (and growing) published papers. We have also provided several exemplar demonstrations to show iSMOD's wide applications: profiling multi-omics research to reveal molecular targets for diseases; exploring the working mechanisms behind biological phenomena using multi-omics interactions; and integrating 3D multi-omics data in a virtual cell nucleus. iSMOD is a cornerstone for delineating a global view of relevant research, enabling the integration of scattered data, providing new insights regarding the missing components of molecular pathway mechanisms, and facilitating improved and efficient scientific research.


Subject(s)
Multiomics; Proteomics; In Situ Hybridization, Fluorescence; Genomics/methods; Gene Expression Profiling