Results 1 - 10 of 10
1.
Arterioscler Thromb Vasc Biol ; 44(7): 1584-1600, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38779855

ABSTRACT

BACKGROUND: Analysis of vascular networks is an essential step to unravel the mechanisms regulating the physiological and pathological organization of blood vessels. So far, most analyses are performed using 2-dimensional projections of 3-dimensional (3D) networks, a strategy with several obvious shortcomings: it does not capture the true geometry of the vasculature and it generates artifacts in vessel connectivity. These limitations are accepted in the field because manual analysis of 3D vascular networks is a laborious and complex process that is often prohibitive for large volumes. METHODS: To overcome these issues, we developed 3DVascNet, deep learning-based software for automated segmentation and quantification of 3D retinal vascular networks. 3DVascNet performs segmentation based on a deep learning model and quantifies vascular morphometric parameters such as vessel density, branch length, vessel radius, and branching point density. We tested the performance of 3DVascNet using a large data set of 3D microscopy images of mouse retinal blood vessels. RESULTS: We demonstrated that 3DVascNet efficiently segments vascular networks in 3D and that its vascular morphometric parameters capture phenotypes detected by manual segmentation and quantification in two dimensions. In addition, we showed that, despite being trained on retinal images, 3DVascNet has high generalization capability and successfully segments images originating from other data sets and organs. CONCLUSIONS: Overall, we present 3DVascNet, freely available software that includes a user-friendly graphical interface for researchers with no programming experience, which will greatly facilitate the study of vascular networks in 3D in health and disease. Moreover, the source code of 3DVascNet is publicly available, so it can easily be extended by other users for the analysis of other 3D vascular networks.
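Two of the morphometric parameters named in the abstract, vessel density and branching point density, can be illustrated with a tiny 2D sketch. This is not 3DVascNet's actual API: the tool works on 3D segmentations, and the helper names below are hypothetical.

```python
# Toy 2D versions of two morphometric quantities 3DVascNet reports.
# The real tool operates on 3D vascular segmentations.

def vessel_density(mask):
    """Fraction of pixels that belong to vessels in a binary mask."""
    total = sum(len(row) for row in mask)
    vessel = sum(sum(row) for row in mask)
    return vessel / total

def branching_points(skeleton):
    """Skeleton pixels with three or more 4-connected neighbours."""
    pixels = set(skeleton)
    return [(x, y) for (x, y) in pixels
            if sum((x + dx, y + dy) in pixels
                   for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))) >= 3]

# A small cross-shaped skeleton: exactly one branching point, at the centre.
cross = [(2, y) for y in range(5)] + [(x, 2) for x in range(5)]
```

On a skeletonized network, branch length and vessel radius would additionally require path tracing and a distance transform, which this sketch omits.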


Subject(s)
Deep Learning , Imaging, Three-Dimensional , Retinal Vessels , Software , Animals , Retinal Vessels/diagnostic imaging , Imaging, Three-Dimensional/methods , Mice , Mice, Inbred C57BL , Image Interpretation, Computer-Assisted , Automation , Reproducibility of Results
2.
Article in English | MEDLINE | ID: mdl-38083101

ABSTRACT

In recent years, deep learning models have been extensively applied to the segmentation of microscopy images to efficiently and accurately quantify and characterize cells, nuclei, and other biological structures. However, these are typically supervised models that require large amounts of manually annotated training data to create the ground truth. Since manual annotation of these segmentation masks is difficult and time-consuming, especially in 3D, we sought to develop a self-supervised segmentation method. Our method is based on an image-to-image translation model, the CycleGAN, which we use to learn the mapping from the fluorescence microscopy image domain to the segmentation domain. We exploit the fact that CycleGAN does not require paired data and train the model using synthetic masks instead of manually labeled masks. These masks are created automatically based on the approximate shapes and sizes of the nuclei and Golgi, so manual image segmentation is not needed in our proposed approach. The experimental results obtained with the proposed CycleGAN model are compared with two well-known supervised segmentation models, 3D U-Net [1] and Vox2Vox [2]. The CycleGAN model achieved a Dice coefficient of 78.07% for the nuclei class and 67.73% for the Golgi class, a difference of only 1.4% and 0.61% compared with the best results obtained with the supervised models Vox2Vox and 3D U-Net, respectively. Moreover, training and testing the CycleGAN model is about 5.78 times faster than the 3D U-Net model. Our results show that, without manual annotation effort, we can train a model that performs similarly to supervised models for the segmentation of organelles in 3D microscopy images. Clinical relevance: Segmentation of cell organelles in microscopy images is an important step to extract features such as the morphology, density, size, shape, and texture of these organelles. These quantitative analyses provide valuable information to classify and diagnose diseases, and to study biological processes.
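The key idea of the abstract above, synthetic masks drawn from approximate shapes so that no hand annotation enters training, can be sketched minimally as below. The grid size, shape model (circles for nuclei), and parameter ranges are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of unpaired synthetic-mask generation: binary nuclei
# masks are drawn from approximate shapes (circles here) rather than
# traced by hand. CycleGAN training then uses these alongside unpaired
# real microscopy images.
import random

def synthetic_nuclei_mask(size=64, n_nuclei=5, radius_range=(3, 6), seed=0):
    rng = random.Random(seed)
    mask = [[0] * size for _ in range(size)]
    for _ in range(n_nuclei):
        cx, cy = rng.randrange(size), rng.randrange(size)
        r = rng.randint(*radius_range)
        for y in range(max(0, cy - r), min(size, cy + r + 1)):
            for x in range(max(0, cx - r), min(size, cx + r + 1)):
                if (x - cx) ** 2 + (y - cy) ** 2 <= r * r:
                    mask[y][x] = 1
    return mask
```

In the paper's 3D setting the same idea would use ellipsoids for nuclei and a separate shape model for Golgi.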


Subject(s)
Cell Nucleus , Masks , Microscopy, Fluorescence
3.
PLoS One ; 18(11): e0294793, 2023.
Article in English | MEDLINE | ID: mdl-37976273

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0280998.].

4.
PLoS One ; 18(2): e0280998, 2023.
Article in English | MEDLINE | ID: mdl-36780440

ABSTRACT

Butterflies are increasingly becoming model insects in which basic questions about the diversity of their color patterns are investigated. Some of these color patterns consist of simple spots and eyespots. To accelerate research on these discrete, circular pattern elements, we trained distinct convolutional neural networks (CNNs) to detect and measure butterfly spots and eyespots in digital images of butterfly wings. We compared the automatically detected and segmented spot/eyespot areas with those manually annotated. These methods were able to identify and distinguish marginal eyespots from spots, as well as distinguish these patterns from less symmetrical patches of color. In addition, the measurements of an eyespot's central area and surrounding rings were comparable with the manual measurements. These CNNs improve eyespot/spot detection and measurement relative to previous methods because it is not necessary to define the feature of interest mathematically; all that is needed is to point out images containing those features to train the CNN.


Subject(s)
Butterflies , Moths , Animals , Pigmentation , Neural Networks, Computer , Wings, Animal
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 549-552, 2022 07.
Article in English | MEDLINE | ID: mdl-36086569

ABSTRACT

Fluorescence microscopy images of cell organelles enable the study of various complex biological processes. Recently, deep learning (DL) models have been used for the accurate automatic analysis of these images. DL models offer state-of-the-art performance in many image analysis tasks, such as object classification, segmentation, and detection. However, training a DL model requires a large manually annotated dataset. Manual annotation of 3D microscopy images is time-consuming and must be performed by specialists in the area, so typically only a few annotated images are available. Recent advances in generative adversarial networks (GANs) have allowed the conditional translation of images into realistic-looking synthetic images. Therefore, in this work we explore GAN-based approaches to create synthetic 3D microscopy images. We compare four approaches that differ in the conditions of the input image. The quality of the generated images was assessed visually and with a quantitative objective GAN evaluation metric. The results showed that the GAN is able to generate synthetic images similar to the real ones. Hence, we have presented a GAN-based method to overcome the issue of small annotated datasets in the biomedical imaging field.


Subject(s)
Image Processing, Computer-Assisted , Research Design , Image Processing, Computer-Assisted/methods , Microscopy, Fluorescence
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 3017-3020, 2021 11.
Article in English | MEDLINE | ID: mdl-34891879

ABSTRACT

Blood vessels provide oxygen and nutrients to all tissues in the human body, and their incorrect organisation or dysfunction contributes to several diseases. Correct organisation of blood vessels is achieved through vascular patterning, a process that relies on endothelial cell polarization and migration against the blood flow direction. Unravelling the mechanisms governing endothelial cell polarity is essential to study the process of vascular patterning. Cell polarity is defined by a vector that goes from the nucleus centroid to the corresponding Golgi complex centroid, here defined as axial polarity. Currently, axial polarity is calculated manually, which is time-consuming and subjective. In this work, we used a deep learning approach to segment nuclei and Golgi in 3D fluorescence microscopy images of mouse retinas and to assign nucleus-Golgi pairs. This approach predicts nuclei and Golgi segmentation masks, as well as a third mask corresponding to the joint nuclei and Golgi segmentations. The joint segmentation mask is used to perform nucleus-Golgi pairing. We demonstrate that our deep learning approach using three masks successfully identifies nucleus-Golgi pairs, outperforming a pairing method based on a cost matrix. Our results pave the way for automated computation of axial polarity in 3D tissues and in vivo.
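The axial polarity definition in the abstract, a vector from the nucleus centroid to the paired Golgi centroid, reduces to simple vector arithmetic once the centroids are known. A minimal sketch (the centroid values below are invented inputs; in the pipeline they would come from the predicted segmentation masks):

```python
# Axial polarity as defined above: the vector from a nucleus centroid
# to its paired Golgi complex centroid.
import math

def axial_polarity(nucleus_centroid, golgi_centroid):
    """Return the polarity vector and its in-plane angle in radians."""
    vec = tuple(g - n for n, g in zip(nucleus_centroid, golgi_centroid))
    angle = math.atan2(vec[1], vec[0])
    return vec, angle

# Example pair: Golgi displaced 3 units along +x from the nucleus.
vec, angle = axial_polarity((10.0, 5.0, 2.0), (13.0, 5.0, 2.0))
# vec == (3.0, 0.0, 0.0); angle == 0.0
```

Aggregating these angles over many cells is what reveals polarization against the flow direction.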


Subject(s)
Cell Nucleus , Imaging, Three-Dimensional , Animals , Golgi Apparatus , Mice , Microscopy, Fluorescence
7.
Sci Rep ; 11(1): 19278, 2021 09 29.
Article in English | MEDLINE | ID: mdl-34588507

ABSTRACT

The cell nucleus is a tightly regulated organelle, and its architectural structure is dynamically orchestrated to maintain normal cell function. Indeed, fluctuations in nuclear size and shape are known to occur during the cell cycle, and alterations in nuclear morphology are hallmarks of many diseases, including cancer. Regrettably, reliable automated tools for cell cycle staging at the single-cell level using in situ images are still limited. It is therefore urgent to establish accurate strategies combining bioimaging with high-content image analysis for bona fide classification. In this study we developed a supervised machine learning method for interphase cell cycle staging of individual adherent cells using in situ fluorescence images of nuclei stained with DAPI. A Support Vector Machine (SVM) classifier was trained on normalized nuclear features from more than 3500 DAPI-stained nuclei. Molecular ground-truth labels were obtained by automatic image processing using fluorescent ubiquitination-based cell cycle indicator (Fucci) technology. An average F1-score of 87.7% was achieved with this framework. Furthermore, the method was validated on distinct cell types, reaching recall values higher than 89%. Our method is a robust approach to identify cells in G1 or S/G2 at the individual level, with implications in research and clinical applications.
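A sketch of the kind of per-nucleus features such a classifier could consume: area and integrated DAPI intensity (the latter tracking DNA content, which roughly doubles by G2). The actual feature set used in the paper is richer, and the function name and toy label image below are illustrative assumptions.

```python
# Extract simple per-nucleus features from a label image (0 = background,
# positive integers = nucleus IDs) and a matching intensity image.

def nuclear_features(labels, image):
    """Map nucleus ID -> (area in pixels, integrated intensity)."""
    feats = {}
    for row_l, row_i in zip(labels, image):
        for lab, val in zip(row_l, row_i):
            if lab == 0:  # skip background
                continue
            area, intensity = feats.get(lab, (0, 0.0))
            feats[lab] = (area + 1, intensity + val)
    return feats

labels = [[1, 1, 0],
          [0, 2, 2]]
image  = [[5.0, 7.0, 0.0],
          [0.0, 4.0, 4.0]]
# nucleus 1 -> (2, 12.0); nucleus 2 -> (2, 8.0)
```

Normalizing such features per image, before feeding them to the SVM, makes them comparable across acquisitions.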


Subject(s)
Cell Nucleus/physiology , Image Processing, Computer-Assisted , Interphase/physiology , Single-Cell Analysis/methods , Support Vector Machine , Animals , Cell Line , Datasets as Topic , Humans , Intravital Microscopy/methods , Mice , Microscopy, Fluorescence/methods
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1428-1431, 2020 07.
Article in English | MEDLINE | ID: mdl-33018258

ABSTRACT

Segmentation of cell nuclei in fluorescence microscopy images provides valuable information about the shape and size of the nuclei, their chromatin texture, and their DNA content. It has many applications, such as cell tracking, counting, and classification. In this work, we extended our recently proposed deep learning approach for nuclei segmentation by adding handcrafted features to its input. Our handcrafted features introduce the additional domain knowledge that nuclei are expected to have an approximately round shape. For round shapes, the gradient vectors at border points point toward the center. To convey this information, we compute a map of gradient convergence that the CNN uses as a new channel, in addition to the fluorescence microscopy image. We applied our method to a dataset of microscopy images of cells stained with DAPI. Our results show that with this approach we are able to decrease the number of misdetections and, therefore, increase the F1-score compared with our previously proposed approach. Moreover, the results show that training converges faster when handcrafted features are combined with deep learning.
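The domain knowledge in the abstract, that border gradients of a round bright nucleus point toward its centre, can be checked on a tiny synthetic disc. This is only a toy illustration of the intuition behind the gradient convergence map, not the paper's actual map computation.

```python
# For a bright disc (standing in for a DAPI nucleus), the image gradient
# at the border points back toward the centre; accumulating such
# gradient "votes" is what highlights nucleus centres.

def disc_image(size=21, radius=6):
    c = size // 2
    return [[1.0 if (x - c) ** 2 + (y - c) ** 2 <= radius ** 2 else 0.0
             for x in range(size)] for y in range(size)]

def gradient(img, x, y):
    """Central-difference image gradient at an interior pixel."""
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return gx, gy

img = disc_image()
c = len(img) // 2
# Just outside the disc's right border the gradient has a negative x
# component, i.e. it points back toward the centre.
gx, gy = gradient(img, c + 7, c)
```

A full convergence map would accumulate, for every pixel, how strongly surrounding gradients point at it, and feed that map to the CNN as a second input channel.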


Subject(s)
Algorithms , Deep Learning , Cell Nucleus , Chromatin , Microscopy, Fluorescence
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1432-1435, 2020 07.
Article in English | MEDLINE | ID: mdl-33018259

ABSTRACT

The progression of cells through the cell cycle is a tightly regulated process and is known to be key to maintaining normal tissue architecture and function. Disruption of these orchestrated phases results in alterations that can lead to many diseases, including cancer. Regrettably, reliable automatic tools to evaluate the cell cycle stage of individual cells are still lacking, in particular at interphase. Therefore, new tools for proper classification are urgently needed and will be of critical importance for cancer prognosis and predictive therapeutic purposes. Thus, in this work we investigated three deep learning approaches for interphase cell cycle staging in microscopy images: 1) joint detection and cell cycle classification of nuclei patches; 2) detection of cell nuclei patches followed by classification of the cycle stage; and 3) detection and segmentation of cell nuclei followed by classification of the cell cycle stage. Our methods were applied to a dataset of microscopy images of nuclei stained with DAPI. The best results (0.908 F1-score) were obtained with approach 3, in which the segmentation step allows for an intensity normalization that takes into account the intensities of all nuclei in a given image. These results show that, for correct cell cycle staging, it is important to consider the relative intensities of the nuclei. Herein, we have developed a new deep learning method for interphase cell cycle staging at the single-cell level, with potential implications for cancer prognosis and therapeutic strategies.
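The normalization the results point to, staging each nucleus by its intensity relative to the other nuclei in the same image, can be sketched as scaling per-nucleus mean intensities by the image-wise maximum. The exact normalization used in the paper may differ; the function name and values below are illustrative.

```python
# Relative intensity normalization: each nucleus's mean DAPI intensity
# is divided by the brightest nucleus in the same image, so stage calls
# depend on relative rather than absolute intensity.

def normalize_nuclear_intensities(mean_intensities):
    """Scale per-nucleus mean intensities to the image-wise maximum."""
    peak = max(mean_intensities.values())
    return {lab: val / peak for lab, val in mean_intensities.items()}

norm = normalize_nuclear_intensities({1: 50.0, 2: 100.0, 3: 75.0})
# {1: 0.5, 2: 1.0, 3: 0.75}
```

This makes images acquired with different exposure settings comparable, which is why the segmentation-based approach 3 benefits from it.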


Subject(s)
Cell Nucleus , Deep Learning , Cell Cycle , Cell Division , Interphase
10.
PLoS One ; 13(10): e0205513, 2018.
Article in English | MEDLINE | ID: mdl-30300393

ABSTRACT

PURPOSE: To characterize quantitative optical coherence tomography angiography (OCT-A) parameters in neovascular age-related macular degeneration (nAMD) patients with active disease under treatment and in nAMD patients in remission. DESIGN: Retrospective, cross-sectional study. PARTICIPANTS: One hundred and four patients, of whom 72 were in Group 1 (active nAMD) and 32 in Group 2 (remission nAMD) based on qualitative spectral-domain OCT (SD-OCT) morphology. METHODS: This study was conducted at the Centre Ophtalmologique de l'Odeon between June 2016 and December 2017. Eyes were analyzed using SD-OCT and high-speed (100 000 A-scans/second) 1050-nm wavelength swept-source OCT-A. Speckle noise removal and choroidal neovascularization (CNV) blood flow delineation were performed automatically. The quantitative parameters analyzed were blood flow area (Area), vessel density, fractal dimension (FD), and lacunarity. OCT-A image algorithms and graphical user interfaces were built as a unified tool in the Matlab coding language. Generalized additive models were used to study the association between OCT-A parameters and nAMD remission on structural OCT. The models' performance was assessed by the Akaike information criterion (AIC), the Brier score, and the area under the receiver operating characteristic curve (AUC). A p value of ≤ 0.05 was considered statistically significant. RESULTS: Area, vessel density, and FD differed between the two groups (p<0.001). Regarding the association with CNV activity, Area alone had the highest AUC (AUC = 0.85; 95% CI: 0.77-0.93), followed by FD (AUC = 0.80; 95% CI: 0.71-0.88). Again, Area obtained the best values, followed by FD, in the AIC and Brier score evaluations. The multivariate model that included both of these variables attained the best performance on all assessment criteria. CONCLUSIONS: Blood flow characteristics on OCT-A may be associated with exudative signs on structural OCT. In the future, analysis of quantitative OCT-A parameters could help evaluate CNV activity status and develop personalized treatment and follow-up cycles.
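One of the OCT-A parameters analysed above, fractal dimension (FD), is commonly estimated by box counting: count the boxes N(s) that contain vessel pixels at several box sizes s and fit the slope of log N against log(1/s). The box sizes and fitting details below are illustrative, not necessarily the study's implementation.

```python
# Box-counting estimate of fractal dimension for a binary (square)
# vessel mask.
import math

def box_count(mask, s):
    """Number of s-by-s boxes containing at least one foreground pixel."""
    n = len(mask)
    boxes = 0
    for by in range(0, n, s):
        for bx in range(0, n, s):
            if any(mask[y][x]
                   for y in range(by, min(by + s, n))
                   for x in range(bx, min(bx + s, n))):
                boxes += 1
    return boxes

def fractal_dimension(mask, sizes=(1, 2, 4, 8)):
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(mask, s)) for s in sizes]
    # Least-squares slope of ys against xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Sanity check: a completely filled mask behaves as a 2D object, FD = 2.
filled = [[1] * 32 for _ in range(32)]
```

A branching CNV network typically yields an FD between 1 and 2, which is why FD discriminates between vessel patterns of different complexity.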


Subject(s)
Angiography , Choroidal Neovascularization/diagnostic imaging , Choroidal Neovascularization/therapy , Macular Degeneration/diagnostic imaging , Macular Degeneration/therapy , Tomography, Optical Coherence/methods , Aged, 80 and over , Angiography/methods , Choroidal Neovascularization/physiopathology , Cross-Sectional Studies , Eye/blood supply , Eye/diagnostic imaging , Eye/physiopathology , Female , Humans , Macular Degeneration/physiopathology , Male , Models, Statistical , Regional Blood Flow , Remission Induction , Retrospective Studies