Results 1 - 6 of 6
1.
Ophthalmol Sci ; 4(4): 100493, 2024.
Article in English | MEDLINE | ID: mdl-38682031

ABSTRACT

Purpose: To provide an automated system for synthesizing fluorescein angiography (FA) images from color fundus photographs, averting the risks associated with fluorescein dye, and to extend its future application to detecting spaceflight-associated neuro-ocular syndrome (SANS) during spaceflight, where resources are limited.

Design: Development and validation of a novel conditional generative adversarial network (GAN) trained on a limited set of paired FA and color fundus images from diabetic retinopathy and control cases.

Participants: Paired color fundus and FA images for unique patients were collected from a publicly available study.

Methods: FA4SANS-GAN was trained to generate FA images from color fundus photographs using 2 multiscale generators coupled with 2 patch-GAN discriminators. Eight hundred fifty color fundus and FA images, augmented from 17 unique patients, were used for training. The model was evaluated on 56 fluorescein images collected from 14 unique patients and compared with 3 other GAN architectures trained on the same data set. We further tested the models' robustness against acquisition noise and their ability to retain structural information when artificially created biological markers were introduced.

Main Outcome Measures: For GAN synthesis, the Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) metrics; for statistical significance, two 1-sided tests (TOST) based on Welch's t test.

Results: On test FA images, the mean FID for FA4SANS-GAN was 39.8 (standard deviation, 9.9), better than GANgio's mean of 43.2 (standard deviation, 13.7), Pix2PixHD's mean of 57.3 (standard deviation, 11.5), and Pix2Pix's mean of 67.5 (standard deviation, 11.7). Similarly, FA4SANS-GAN achieved a mean KID of 0.00278 (standard deviation, 0.00167), better than the other 3 models' mean KIDs of 0.00303 (standard deviation, 0.00216), 0.00609 (standard deviation, 0.00238), and 0.00784 (standard deviation, 0.00218). By TOST, FA4SANS-GAN was statistically significantly better versus GANgio (P = 0.006); versus Pix2PixHD (P < 0.00001); and versus Pix2Pix (P < 0.00001).

Conclusions: Our study has shown FA4SANS-GAN to be statistically significantly better on 2 GAN synthesis metrics. Moreover, it is robust against acquisition noise and retains clear biological markers better than the other 3 GAN architectures. Deployment of this model could be crucial aboard the International Space Station for detecting SANS.

Financial Disclosures: The authors have no proprietary or commercial interest in any materials discussed in this article.
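A minimal sketch of the two 1-sided tests (TOST) procedure named in the abstract, built from Welch's t-test in SciPy. The per-image score vectors and the equivalence margin `delta` below are synthetic placeholders, not the paper's raw data; the study does not publish its per-image metric values or its chosen margin.

```python
import numpy as np
from scipy import stats

# Synthetic stand-ins matching the reported means/SDs (56 test images).
rng = np.random.default_rng(0)
fa4sans = rng.normal(39.8, 9.9, size=56)   # placeholder per-image FID scores
gangio = rng.normal(43.2, 13.7, size=56)   # placeholder baseline FID scores
delta = 5.0                                # assumed equivalence margin

# TOST: the two means are declared equivalent within +/-delta only if BOTH
# one-sided Welch tests (equal_var=False allows unequal variances) reject.
p_lower = stats.ttest_ind(fa4sans + delta, gangio,
                          equal_var=False, alternative="greater").pvalue
p_upper = stats.ttest_ind(fa4sans - delta, gangio,
                          equal_var=False, alternative="less").pvalue
p_tost = max(p_lower, p_upper)
print(f"TOST p-value: {p_tost:.4f}")
```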

2.
NPJ Microgravity ; 10(1): 40, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38548790

ABSTRACT

Spaceflight-associated neuro-ocular syndrome (SANS) is one of the largest physiologic barriers to spaceflight and requires evaluation and mitigation for future planetary missions. Because the spaceflight environment is clinically limited, the purpose of this research is to provide automated, early detection and prognosis of SANS with a machine learning model trained and validated on astronaut SANS optical coherence tomography (OCT) images. In this study, we present "SANS-CNN," a lightweight convolutional neural network (CNN) incorporating an EfficientNet encoder for detecting SANS from OCT images. We used 6303 OCT B-scan images for training/validation (80%/20% split) and 945 for testing, with a combination of terrestrial images and astronaut SANS images for both testing and validation. SANS-CNN was validated with SANS images labeled by NASA to evaluate accuracy, specificity, and sensitivity. To evaluate real-world outcomes, two state-of-the-art pre-trained architectures were also employed on this dataset. We used Grad-CAM to visualize activation maps of intermediate layers and test the interpretability of SANS-CNN's predictions. SANS-CNN achieved 84.2% accuracy on the test set, with 85.6% specificity, 82.8% sensitivity, and an 84.1% F1-score. Moreover, SANS-CNN outperformed two other state-of-the-art pre-trained architectures, ResNet50-v2 and MobileNet-v2, in accuracy by 21.4% and 13.1%, respectively. We also applied two class-activation map techniques to visualize the critical SANS features perceived by the model. SANS-CNN is a CNN model trained and validated with real astronaut OCT images, enabling fast and efficient prediction of SANS-like conditions for spaceflight missions beyond Earth's orbit, in which clinical and computational resources are extremely limited.
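A minimal sketch, assuming TensorFlow/Keras, of the kind of lightweight classifier the abstract describes: an EfficientNet encoder with a small binary head for SANS vs. non-SANS OCT B-scans. The input size, head layout, and optimizer are illustrative assumptions, not the published SANS-CNN configuration.

```python
import tensorflow as tf

def build_sans_classifier(input_shape=(224, 224, 3)):
    # EfficientNetB0 as a frozen encoder (ImageNet weights, classifier removed).
    encoder = tf.keras.applications.EfficientNetB0(
        include_top=False, weights="imagenet", input_shape=input_shape)
    encoder.trainable = False

    inputs = tf.keras.Input(shape=input_shape)
    x = encoder(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)  # pool feature maps
    x = tf.keras.layers.Dropout(0.3)(x)              # light regularization
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # SANS prob.

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sans_classifier()
model.summary()
```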

3.
J Vis ; 23(11): 54, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37733524

ABSTRACT

Spaceflight-associated neuro-ocular syndrome (SANS) is a collection of neuro-ophthalmic findings that occurs in astronauts as a result of prolonged microgravity exposure in space. Because resources on board long-term spaceflight missions are limited, early diagnosis and prognosis of SANS become unviable. Moreover, the current retinal imaging techniques aboard the International Space Station (ISS), such as optical coherence tomography (OCT), ultrasound imaging, and fundus photography, require an expert to distinguish SANS from similar ophthalmic diseases. With the advent of deep learning, diagnosing diseases (such as diabetic retinopathy) from structural retinal images is being automated. In this study, we propose a lightweight convolutional neural network incorporating an EfficientNet encoder for detecting SANS from OCT images. We used 6303 OCT B-scan images for training/validation (80%/20% split) and 945 for testing. Our model achieved 84.2% accuracy on the test set, with 85.6% specificity and 82.8% sensitivity. Moreover, it outperforms two other state-of-the-art pre-trained architectures, ResNet50-v2 and MobileNet-v2, by 21.4% and 13.1%, respectively. Additionally, we use Grad-CAM to visualize activation maps of intermediate layers and test the interpretability of our model's predictions; a sketch of this technique follows the subject terms below. The proposed architecture enables fast and efficient prediction of SANS-like conditions for future long-term spaceflight missions in which computational and clinical resources are limited.


Subject(s)
Neural Networks, Computer , Space Flight , Humans , Retina , Tomography, Optical Coherence
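A hedged sketch of Grad-CAM, the activation-map technique the abstract uses for interpretability. It assumes a flat Keras functional model (such as a CNN classifier) and that `last_conv_name` names its final convolutional layer; both are assumptions for illustration, not details from the paper.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_name):
    # One model mapping the input to (final conv feature maps, prediction).
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, 0]                        # predicted class probability
    grads = tape.gradient(score, conv_out)         # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # global-average-pool grads
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)  # weighted channel sum
    cam = tf.nn.relu(cam)[0]                       # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalize to [0, 1]
```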
4.
iScience ; 25(5): 104277, 2022 May 20.
Article in English | MEDLINE | ID: mdl-35573197

ABSTRACT

Advancements in cellular imaging instrumentation, together with readily available optogenetic and fluorescence sensors, have created a profound need for fast, accurate, and standardized analysis. Deep-learning architectures have revolutionized the field of biomedical image analysis and have achieved state-of-the-art accuracy. Despite these advancements, deep-learning architectures for the segmentation of subcellular fluorescence signals are lacking. Cellular dynamic fluorescence signals can be plotted and visualized using spatiotemporal maps (STMaps), but their segmentation and quantification are currently hindered by slow workflow speed and lack of accuracy, especially for large datasets. In this study, we provide a software tool that utilizes a deep-learning methodology to fundamentally overcome these signal-segmentation challenges. The software framework demonstrates highly optimized and accurate calcium-signal segmentation and provides a fast analysis pipeline that can accommodate different patterns of signals across multiple cell types. The software allows seamless data accessibility, quantification, and graphical visualization and enables high-throughput analysis of large datasets.
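A minimal sketch of the analysis step the abstract describes: running a trained deep-learning segmentation model over a spatiotemporal map (STMap) of calcium fluorescence to produce a binary signal mask. The model file name, input layout, and 0.5 threshold are hypothetical; the published tool bundles its own trained network and pipeline.

```python
import numpy as np
import tensorflow as tf

def segment_stmap(stmap: np.ndarray, model_path: str = "stmap_unet.h5"):
    # Load an assumed pre-trained segmentation network (placeholder path).
    model = tf.keras.models.load_model(model_path)
    x = stmap.astype("float32")
    x = (x - x.min()) / (np.ptp(x) + 1e-8)          # normalize to [0, 1]
    # Add batch and channel axes: (1, time, space, 1) for a 2D STMap.
    prob = model.predict(x[None, ..., None])[0, ..., 0]
    return prob > 0.5                               # binary calcium-event mask
```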

5.
STAR Protoc ; 3(4): 101852, 2022 12 16.
Article in English | MEDLINE | ID: mdl-36595928

ABSTRACT

Cellular calcium fluorescence imaging, used to study cellular behaviors, typically results in large datasets and a profound need for standardized and accurate analysis methods. Here, we describe open-source software (4SM) that overcomes these limitations using an automated machine-learning pipeline for subcellular calcium-signal segmentation of spatiotemporal maps. The primary use of 4SM is to analyze spatiotemporal maps of calcium activities within cells or across multiple cells. For complete details on the use and execution of this protocol, please refer to Kamran et al. (2022).


Subject(s)
Calcium , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Software , Machine Learning
6.
Sci Rep ; 10(1): 21580, 2020 12 09.
Article in English | MEDLINE | ID: mdl-33299065

ABSTRACT

Fluorescein angiography (FA) is a procedure used to image the vascular structure of the retina and requires the injection of an exogenous dye with potential adverse side effects. Currently, there is only one alternative non-invasive system capable of visualizing the retinal vasculature: OCT angiography (OCTA), based on optical coherence tomography (OCT) technology. However, due to its cost and limited field of view, OCTA is not widely used. Retinal fundus photography is a safe imaging technique for capturing the overall structure of the retina. To visualize the retinal vasculature without the need for FA, in a cost-effective, non-invasive, and accurate manner, we propose a deep-learning conditional generative adversarial network (GAN) capable of producing FA images from fundus photographs. The proposed GAN produces anatomically accurate angiograms with fidelity similar to real FA images and significantly outperforms two other state-of-the-art generative algorithms. Furthermore, evaluations by experts show that our proposed model produces FA images of such high quality that they are indistinguishable from real angiograms. As the first application of artificial intelligence and deep learning to this medical image-translation task, our model employs a theoretical framework capable of establishing a shared feature space between two domains (i.e., funduscopy and fluorescein angiography) and provides an unrivaled way of translating images from one domain to the other.


Subject(s)
Deep Learning , Diagnostic Techniques, Ophthalmological , Fluorescein Angiography/methods , Fundus Oculi , Neural Networks, Computer , Retina/diagnostic imaging , Humans , Tomography, Optical Coherence/methods
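A hedged sketch of the pix2pix-style conditional-GAN objective this line of work builds on: a generator maps a fundus photo to a synthetic angiogram and is trained with an adversarial term plus an L1 fidelity term. The loss structure and the `lambda_l1` weighting are assumptions drawn from the standard conditional-GAN formulation, not the paper's published architecture.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_logits, fake_fa, real_fa, lambda_l1=100.0):
    # Adversarial term: fool the discriminator on (fundus, fake-FA) pairs.
    adv = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    # L1 term: keep the synthetic angiogram close to the real angiogram.
    l1 = tf.reduce_mean(tf.abs(real_fa - fake_fa))
    return adv + lambda_l1 * l1

def discriminator_loss(disc_real_logits, disc_fake_logits):
    # Real pairs should score 1, generated pairs 0.
    real = bce(tf.ones_like(disc_real_logits), disc_real_logits)
    fake = bce(tf.zeros_like(disc_fake_logits), disc_fake_logits)
    return real + fake
```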