1.
Med Biol Eng Comput ; 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38969811

ABSTRACT

Retinal image registration is of utmost importance due to its wide applications in medical practice. In this context, we propose ConKeD, a novel deep learning approach to learn descriptors for retinal image registration. In contrast to current registration methods, our approach employs a novel multi-positive multi-negative contrastive learning strategy that exploits additional information from the available training samples, making it possible to learn high-quality descriptors from limited training data. To train and evaluate ConKeD, we combine these descriptors with domain-specific keypoints, particularly blood vessel bifurcations and crossovers, detected using a deep neural network. Our experimental results demonstrate the benefits of the multi-positive multi-negative strategy, as it outperforms the widely used triplet loss (single-positive, single-negative) as well as the single-positive multi-negative alternative. Additionally, combining ConKeD with the domain-specific keypoints produces results comparable to state-of-the-art methods for retinal image registration, while offering important advantages such as avoiding pre-processing, using fewer training samples, and requiring fewer detected keypoints. Therefore, ConKeD shows promising potential for facilitating the development and application of deep learning-based methods for retinal image registration.
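The multi-positive multi-negative strategy can be sketched as an InfoNCE-style loss in which each positive is contrasted against the shared negative set and the per-positive terms are averaged, so every available positive contributes supervision. A minimal NumPy sketch, assuming cosine similarity and a temperature parameter (both illustrative; this is not ConKeD's actual implementation):

```python
import numpy as np

def multi_pos_multi_neg_loss(anchor, positives, negatives, temperature=0.1):
    """Simplified multi-positive multi-negative contrastive loss.

    anchor:    (d,) descriptor of the anchor keypoint
    positives: (P, d) descriptors of matching keypoints (other views)
    negatives: (N, d) descriptors of non-matching keypoints

    Each positive gets its own InfoNCE term against the shared negative
    set; the terms are averaged, unlike the single-positive triplet loss.
    """
    def sim(a, b):
        # cosine similarity between the anchor and each row of b, scaled
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return (b @ a) / temperature

    pos = sim(anchor, positives)   # (P,)
    neg = sim(anchor, negatives)   # (N,)
    # -log( exp(pos_i) / (exp(pos_i) + sum_j exp(neg_j)) ), averaged over i
    losses = -pos + np.log(np.exp(pos) + np.exp(neg).sum())
    return float(losses.mean())
```

With descriptors close to the anchor as positives and distant ones as negatives, the loss is small; swapping the roles makes it large, which is the gradient signal that shapes the descriptor space.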

2.
Heliyon ; 10(3): e25367, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38327447

ABSTRACT

Water quality can be negatively affected by the presence of some toxic phytoplankton species, whose toxins are difficult to remove by conventional purification systems. This creates the need for periodic analyses, which are nowadays performed manually by experts. These labor-intensive processes are affected by subjectivity and varying expertise, making them unreliable. Some automatic systems have been proposed to address these limitations. However, most of them are based on classical image processing pipelines whose designs are not easily scalable. In this context, deep learning techniques are better suited to the detection and recognition of phytoplankton specimens in multi-specimen microscopy images, as they integrate both tasks in a single end-to-end trainable module that is able to automate the adaptation to such a complex domain. In this work, we explore the use of two different object detectors: Faster R-CNN and RetinaNet, from the two-stage and one-stage paradigms, respectively. We use a dataset composed of multi-specimen microscopy images captured using a systematic protocol. This allows the use of widely available optical microscopes and avoids manual adjustments on a per-specimen basis, which would require expert knowledge. We have made our dataset publicly available to improve reproducibility and to foster the development of new alternatives in the field. The selected Faster R-CNN methodology reaches maximum recall levels of 95.35%, 84.69%, and 79.81%, and precisions of 94.68%, 89.30%, and 82.61%, for W. naegeliana, A. spiroides, and D. sociale, respectively. The system adapts to the dataset's challenges and improves the overall results with respect to the reference state-of-the-art work. In addition, the proposed system improves the automation of, and abstraction from, the domain and simplifies the workflow and adjustment.
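Precision and recall figures such as those reported above are typically obtained by matching predicted boxes to ground-truth boxes at an IoU threshold. A minimal sketch of this standard evaluation, assuming axis-aligned (x1, y1, x2, y2) boxes and a 0.5 threshold (both assumptions, not necessarily the paper's exact protocol):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(pred, gt, thr=0.5):
    """Greedy one-to-one matching of predictions (assumed sorted by
    confidence) to ground-truth boxes at IoU >= thr."""
    matched = set()
    tp = 0
    for p in pred:
        best, best_iou = None, thr
        for i, g in enumerate(gt):
            if i not in matched and iou(p, g) >= best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None:
            matched.add(best)  # each ground-truth box matches at most once
            tp += 1
    precision = tp / len(pred) if pred else 1.0
    recall = tp / len(gt) if gt else 1.0
    return precision, recall
```

Sweeping the confidence threshold over the detector's outputs and recomputing these two numbers traces out the precision-recall trade-off behind per-species figures like those quoted.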

3.
Quant Imaging Med Surg ; 13(7): 4540-4562, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37456305

ABSTRACT

Background: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental for comparing different images and assessing changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classical methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial, as they can be easily adapted to different modalities and devices following a data-driven learning approach. Methods: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors, which together make it possible to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it with the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible. Results: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method's parameters to assess their impact on registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A).
Conclusions: Our proposal improves on the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which contains image pairs exhibiting disease progression and thus represents the most relevant scenario for clinical practice, as registration is commonly used to monitor disease in patients.
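The keypoints-plus-RANSAC step described above can be illustrated with a simplified estimator that fits an affine transform to matched keypoint pairs while rejecting outlier matches (a sketch only: real pipelines often use homographies or higher-order models, and the sample size and tolerance here are illustrative):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (both (n, 2))."""
    A = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords, (n, 3)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) transform matrix
    return M

def ransac_register(src, dst, n_iter=200, tol=2.0, seed=0):
    """RANSAC: repeatedly fit an affine model to 3 random matches and keep
    the model with the most inliers (reprojection error < tol pixels)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        M = fit_affine(src[idx], dst[idx])
        proj = np.hstack([src, np.ones((len(src), 1))]) @ M
        inliers = np.linalg.norm(proj - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all consensus inliers for the final model
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

Because each minimal sample needs only three correct matches, the estimate survives a substantial fraction of wrong keypoint correspondences, which is precisely why RANSAC is the standard back end for feature-based registration.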

4.
Biomed Opt Express ; 14(7): 3726-3747, 2023 Jul 01.
Article in English | MEDLINE | ID: mdl-37497506

ABSTRACT

Optical coherence tomography (OCT) is the most widely used imaging modality in ophthalmology. There are multiple variations of OCT imaging capable of producing complementary information, so registering these complementary volumes is desirable in order to combine their information. In this work, we propose a novel automated pipeline to register OCT images produced by different devices. The pipeline consists of two steps: a multi-modal 2D en-face registration based on deep learning, and a Z-axis (axial axis) registration based on the retinal layer segmentation. We evaluate our method using data from a Heidelberg Spectralis and an experimental PS-OCT device. The empirical results demonstrate high-quality registrations, with mean errors of approximately 46 µm for the 2D registration and 9.59 µm for the Z-axis registration. These registrations may help in multiple clinical applications, such as the validation of layer segmentations, among others.
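The Z-axis step can be illustrated by estimating a global axial shift from the depths of the same segmented layer boundary in both volumes. A minimal sketch, assuming the volumes are already registered en-face and that a single global offset suffices (the actual pipeline may estimate position-dependent shifts):

```python
import numpy as np

def z_axis_offset(layer_a, layer_b):
    """Estimate the axial (Z) shift aligning two OCT volumes from the
    depth of the same segmented retinal layer boundary in each.

    layer_a, layer_b: (rows, cols) arrays giving the boundary's Z index
    at each en-face position, after 2D en-face registration.
    Returns the median per-position depth difference, a robust estimate
    of the global axial offset that tolerates local segmentation errors.
    """
    diff = layer_b.astype(float) - layer_a.astype(float)
    return float(np.median(diff))
```

Using the median rather than the mean keeps a handful of mis-segmented A-scans from biasing the recovered offset.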

5.
Comput Biol Med ; 140: 105101, 2021 Dec 03.
Article in English | MEDLINE | ID: mdl-34875412

ABSTRACT

Medical imaging, and particularly retinal imaging, makes it possible to accurately diagnose many eye pathologies as well as some systemic diseases such as hypertension or diabetes. Registering these images is crucial to correctly compare key structures, not only within patients, but also to contrast data against a model or across a population. Currently, this field is dominated by complex classical methods, because novel deep learning methods cannot yet compete in terms of results and commonly used methods are difficult to adapt to the retinal domain. In this work, we propose a novel method to register color fundus images, building on previous works that employed classical approaches to detect domain-specific landmarks. Instead, we use deep learning methods for the detection of these highly specific domain-related landmarks. Our method uses a neural network to detect the bifurcations and crossovers of the retinal blood vessels, whose arrangement and location are unique to each eye and person. This proposal is the first deep learning feature-based registration method in fundus imaging. These keypoints are matched using a method based on RANSAC (RANdom SAmple Consensus) without the need to compute complex descriptors. Our method was tested using the public FIRE dataset, although the landmark detection network was trained using the DRIVE dataset. Our method provides accurate results, with a registration score of 0.657 for the whole FIRE dataset (0.908 for category S, 0.293 for category P, and 0.660 for category A). Therefore, our proposal can compete with complex classical methods and outperforms the state-of-the-art deep learning methods.
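Although this paper's landmark detector is a neural network, the keypoints it targets, vessel bifurcations and crossovers, can be illustrated with the classical crossing-number idea on a binarized vessel skeleton: a centerline pixel with three neighbors is a bifurcation candidate, and four or more suggest a crossover. A sketch of that classical heuristic (illustrative only, not the paper's method):

```python
import numpy as np

def vessel_keypoints(skel):
    """Find bifurcation/crossover candidates on a binary vessel skeleton.

    skel: 2D 0/1 array (1 = vessel centerline pixel).
    A skeleton pixel with >= 3 set pixels in its 8-neighborhood branches,
    so it is a bifurcation (3) or crossover (>= 4) candidate.
    """
    padded = np.pad(skel, 1)
    # count 8-neighbors by summing the eight shifted views of the image
    neigh = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    ys, xs = np.where((skel == 1) & (neigh >= 3))
    return list(zip(ys.tolist(), xs.tolist()))
```

Because the branching pattern of the retinal vasculature is unique to each eye, such junction points make stable, person-specific anchors for registration.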

6.
Comput Methods Programs Biomed ; 200: 105923, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33486341

ABSTRACT

BACKGROUND AND OBJECTIVE: The proliferation of toxin-producing phytoplankton species can compromise the quality of water sources. This contamination is difficult to detect, and consequently to neutralise, since normal water purification techniques are ineffective. Currently, phytoplankton analyses of water are commonly performed manually by specialists as routine work, which represents a major limitation. The adequate identification and classification of phytoplankton specimens requires intensive training and expertise. Additionally, the analysis involves a lengthy process that exhibits serious problems of reliability and repeatability, as inter-expert agreement is not always reached. Considering all these factors, the automation of these analyses is highly desirable to reduce the workload of the specialists and facilitate the process. METHODS: This manuscript proposes a novel fully automatic methodology to perform phytoplankton analyses in digital microscopy images of water samples taken with a regular light microscope. In particular, we propose a method capable of analysing multi-specimen images acquired using a simplified systematic protocol. In contrast with prior approaches, this enables its use without the need for an expert taxonomist operating the microscope. The system is able to detect and segment the different phytoplankton specimens present, with high variability in terms of visual appearance, and to merge them into colonies and sparse specimens when necessary. Moreover, the system is capable of differentiating them from other similar objects like zooplankton, detritus or mineral particles, among others, and then classifying the specimens into defined target species of interest using a machine learning-based approach. RESULTS: The proposed system provided satisfactory and accurate results in every step. The detection step provided an FNR of 0.4%. Phytoplankton detection, that is, differentiating true phytoplankton from similar objects (zooplankton, minerals, etc.), achieved a precision of 84.07% at 90% recall. The target species classification reported an overall accuracy of 87.50%. The recall levels per species were 81.82% for W. naegeliana, 57.15% for A. spiroides, 85.71% for D. sociale, and 95% for the "Other" group, a set of relevant toxic and other species of interest widely spread across the samples. CONCLUSIONS: The proposed methodology provided accurate results in all the designed steps given the complexity of the problem, particularly in terms of specimen identification, phytoplankton differentiation, and the classification of the defined target species. Therefore, this fully automatic system represents a robust and consistent tool to aid specialists in analysing the quality and potability of water sources.


Subject(s)
Microscopy, Phytoplankton, Machine Learning, Reproducibility of Results, Water
7.
Sensors (Basel) ; 20(22)2020 Nov 23.
Article in English | MEDLINE | ID: mdl-33238566

ABSTRACT

Water safety and quality can be compromised by the proliferation of toxin-producing phytoplankton species, requiring continuous monitoring of water sources. This analysis involves the identification and counting of these species, which requires broad experience and knowledge. The automation of these tasks is highly desirable, as it would release the experts from tedious work, eliminate subjective factors, and improve repeatability. Thus, in this preliminary work, we propose to advance towards an automatic methodology for phytoplankton analysis in digital images of water samples acquired using regular microscopes. In particular, we propose a novel and fully automatic method to detect and segment the existing phytoplankton specimens in these images using classical computer vision algorithms. The proposed method is able to correctly detect sparse colonies as single phytoplankton candidates, thanks to a novel fusion algorithm, and is able to differentiate phytoplankton specimens from other objects in the microscope samples (like minerals, bubbles or detritus) using a machine learning-based approach that exploits texture and colour features. Our preliminary experiments demonstrate that the proposed method provides satisfactory and accurate results.
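A texture-and-colour feature vector of the kind mentioned above can be sketched as per-channel colour histograms plus crude texture statistics; the specific features here are illustrative, not the paper's exact choice:

```python
import numpy as np

def texture_colour_features(patch, bins=8):
    """Build a simple feature vector from an RGB image patch.

    patch: (h, w, 3) uint8 array. Returns normalized per-channel colour
    histograms plus two crude texture statistics: grayscale variance and
    mean absolute gradient magnitude.
    """
    feats = []
    for c in range(3):
        hist, _ = np.histogram(patch[:, :, c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())        # colour distribution
    gray = patch.mean(axis=2)
    gy, gx = np.gradient(gray)                 # texture: local intensity change
    feats.append([gray.var(), np.abs(gy).mean() + np.abs(gx).mean()])
    return np.concatenate(feats)
```

Vectors like this, computed per candidate region, would then feed a conventional classifier to separate phytoplankton from minerals, bubbles, or detritus.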


Subject(s)
Environmental Monitoring/methods, Image Processing, Computer-Assisted, Microscopy, Phytoplankton, Algorithms, Fresh Water, Machine Learning