Results 1 - 17 of 17
1.
Sci Data ; 11(1): 743, 2024 Jul 07.
Article in English | MEDLINE | ID: mdl-38972893

ABSTRACT

Machine learning-based systems have become instrumental in augmenting global efforts to combat cervical cancer. A burgeoning area of research focuses on leveraging artificial intelligence to enhance the cervical screening process, primarily through the exhaustive examination of Pap smears, which traditionally relies on meticulous, labor-intensive analysis by specialized experts. Despite the existence of some comprehensive and readily accessible datasets, the field is presently constrained by the limited volume of publicly available images and smears. As a remedy, our work unveils APACC (Annotated PAp cell images and smear slices for Cell Classification), a comprehensive dataset designed to bridge this gap. APACC comprises 103,675 annotated cell images, carefully extracted from 107 whole smears that were further divided into 21,371 sub-regions for a more refined analysis. Because the dataset provides a vast number of cell images from conventional Pap smears together with their specific locations on each smear, it offers a valuable resource for in-depth investigation and study.
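A hypothetical sketch of how such a collection of annotated cell crops with smear coordinates could be wrapped for training; the index-CSV layout (columns image_path, smear_id, x, y, label) is an assumption made for illustration, not APACC's documented file format.

```python
# Minimal dataset wrapper for annotated cell crops plus smear coordinates.
# The CSV schema here is hypothetical; adapt it to the real dataset layout.
import csv
from torch.utils.data import Dataset
from torchvision.io import read_image

class PapCellDataset(Dataset):
    def __init__(self, index_csv):
        with open(index_csv, newline="") as f:
            self.rows = list(csv.DictReader(f))

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        r = self.rows[i]
        img = read_image(r["image_path"]).float() / 255.0   # (C, H, W) in [0, 1]
        meta = {"smear_id": r["smear_id"], "xy": (int(r["x"]), int(r["y"]))}
        return img, int(r["label"]), meta
```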


Subject(s)
Papanicolaou Test , Uterine Cervical Neoplasms , Humans , Female , Vaginal Smears , Machine Learning
2.
Sci Data ; 11(1): 733, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971865

ABSTRACT

A simple and inexpensive way to recognize cervical cancer is light microscopic analysis of Pap smear images. Training artificial intelligence-based systems becomes possible in this domain, e.g., to follow the European recommendation to rescreen negative smears in order to reduce false negative cases. The first step of such a process is segmenting the cells, a task that requires a large, manually segmented dataset on which deep learning-based solutions can be trained. We describe such a dataset with accurate manual segmentations for the included cells. Altogether, the APACS23 (Annotated PAp smear images for Cell Segmentation 2023) dataset contains about 37,000 manually segmented cells and is separated into dedicated training and test parts, so it can serve as an official benchmark for scientific investigations or a grand challenge.
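A benchmark on such a dataset would typically score predicted masks against the manual segmentations with an overlap metric. A minimal sketch of the Dice coefficient, a common choice for this purpose (the paper's exact benchmark metric is not specified here):

```python
# Dice overlap between a predicted and a ground-truth binary mask.
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice coefficient between two boolean masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

a = np.zeros((64, 64)); a[10:40, 10:40] = 1
b = np.zeros((64, 64)); b[15:45, 15:45] = 1
print(round(dice(a, b), 3))
```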


Subject(s)
Papanicolaou Test , Uterine Cervical Neoplasms , Humans , Uterine Cervical Neoplasms/pathology , Female , Image Processing, Computer-Assisted/methods , Deep Learning , Vaginal Smears
3.
Sensors (Basel) ; 24(9)2024 May 04.
Article in English | MEDLINE | ID: mdl-38733032

ABSTRACT

Performing minimally invasive surgery comes with a significant advantage for the patient's rehabilitation after the operation. However, it also causes difficulties, mainly for the surgeon or expert who performs the intervention, since only visual information is available and tactile senses cannot be used during keyhole surgeries. This makes laparoscopic hysterectomy challenging, since some organs are difficult to distinguish based on visual information alone. In this paper, we propose a solution based on semantic segmentation, which can create pixel-accurate predictions of surgical images and differentiate the uterine arteries, ureters, and nerves. We trained three binary semantic segmentation models based on the U-Net architecture with the EfficientNet-b3 encoder; then, we developed two ensemble techniques that enhanced the segmentation performance. Our pixel-wise ensemble examines the segmentation maps of the binary networks at the lowest level, that of individual pixels. The other algorithm is a region-based ensemble technique that takes this examination to a higher level and forms the ensemble based on every connected component detected by the binary segmentation networks. We also introduced and trained a classic multi-class semantic segmentation model as a reference and compared it to the ensemble-based approaches. We used 586 manually annotated images from 38 surgical videos for this research and have published this dataset.
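A minimal sketch of the pixel-wise ensemble idea: each binary network yields a per-pixel foreground probability map for one structure, and the maps are fused into a single multi-class mask. Shapes, the threshold, and the class mapping are illustrative assumptions, not the authors' exact rule.

```python
# Fuse per-structure binary probability maps into one multi-class mask.
import numpy as np

def pixelwise_ensemble(prob_maps, threshold=0.5):
    """prob_maps: dict mapping class id (1..K) -> float array (H, W).
    Returns an int mask of shape (H, W); 0 denotes background."""
    class_ids = sorted(prob_maps)
    stack = np.stack([prob_maps[c] for c in class_ids])      # (K, H, W)
    best = np.argmax(stack, axis=0)                           # most confident class per pixel
    best_prob = np.take_along_axis(stack, best[None], 0)[0]   # its probability
    mask = np.zeros(best.shape, dtype=np.int64)
    keep = best_prob >= threshold                             # background if nothing is confident
    mask[keep] = np.asarray(class_ids)[best[keep]]
    return mask

# Example with random maps for uterine artery (1), ureter (2), nerve (3):
rng = np.random.default_rng(0)
maps = {c: rng.random((256, 256)) for c in (1, 2, 3)}
fused = pixelwise_ensemble(maps)
```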


Subject(s)
Algorithms , Laparoscopy , Neural Networks, Computer , Ureter , Uterine Artery , Humans , Laparoscopy/methods , Female , Ureter/diagnostic imaging , Ureter/surgery , Uterine Artery/surgery , Uterine Artery/diagnostic imaging , Image Processing, Computer-Assisted/methods , Semantics , Hysterectomy/methods
4.
Materials (Basel) ; 16(10)2023 May 20.
Article in English | MEDLINE | ID: mdl-37241487

ABSTRACT

In this study, metal 3D printing technology was used to create lattice-shaped test specimens of orthopedic implants to determine the effect of different lattice shapes on bone ingrowth. Six different lattice shapes were used: gyroid, cube, cylinder, tetrahedron, double pyramid, and Voronoi. The lattice-structured implants were produced from Ti6Al4V alloy using direct metal laser sintering 3D printing technology with an EOS M290 printer. The implants were implanted into the femoral condyles of sheep, and the animals were euthanized 8 and 12 weeks after surgery. To determine the degree of bone ingrowth for the different lattice shapes, mechanical and histological tests were performed, together with image processing of ground samples and optical microscopic images. In the mechanical test, the force required to compress each lattice-shaped implant was compared with the force required for a solid implant, and significant differences were found in several instances. Statistical evaluation of the results of our image processing algorithm showed that the digitally segmented areas clearly consisted of ingrown bone tissue; this finding is also supported by the results of classical histological processing. Our main goal was thus realized: the bone ingrowth efficiencies of the six lattice shapes were ranked. The gyroid, double pyramid, and cube-shaped lattice implants showed the highest degree of bone tissue growth per unit time, and this ranking of the three lattice shapes remained the same at both 8 and 12 weeks after surgery. As a side result of the study, a new image processing algorithm was developed that proved suitable for determining the degree of bone ingrowth in lattice implants from optical microscopic images. Along with the cube lattice shape, whose high bone ingrowth values have been reported in many previous studies, the gyroid and double pyramid lattice shapes produced similarly good results.
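The image-processing step motivates a toy example: estimating a bone ingrowth fraction inside a lattice region by thresholding an optical microscopic image. This is a generic stand-in (an Otsu threshold on brightness) that assumes bone appears brighter than the pore background; the authors' published algorithm is certainly more involved.

```python
# Rough bone-ingrowth fraction inside a known lattice region (illustrative).
import numpy as np
from skimage import io, color, filters

def ingrowth_fraction(image_path, lattice_mask):
    """lattice_mask: boolean array marking the lattice interior.
    Returns the fraction of that area segmented as bone."""
    img = io.imread(image_path)
    gray = color.rgb2gray(img) if img.ndim == 3 else img.astype(float)
    t = filters.threshold_otsu(gray[lattice_mask])   # threshold inside the lattice only
    bone = (gray > t) & lattice_mask
    return bone.sum() / lattice_mask.sum()
```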

5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1452-1455, 2022 07.
Article in English | MEDLINE | ID: mdl-36083935

ABSTRACT

The classification of cells extracted from Pap smears is in most cases done using neural network architectures. Nevertheless, the importance of features extracted with digital image processing is also discussed in many related articles. Decision support systems and automated analysis tools for Pap smears often use these kinds of manually extracted, global features based on clinical expert opinion. In this paper, a solution is introduced in which 29 different contextual features are combined with local features learned by a neural network, increasing classification performance. The weight distribution between the features is also investigated, leading to the conclusion that the numerical features indeed form an important part of the learning process. Furthermore, extensive testing of the presented methods is done using a dataset annotated by clinical experts. An increase of 3.2% in F1-score can be observed when the combination of contextual and local features is used. Clinical Relevance - The analysis of images extracted from digital Pap tests using modern machine learning tools is discussed in many scientific papers. Manual classification of the cells is time-consuming and expensive, requiring a large amount of manual labor, and its results can be uncertain due to interobserver variability. Considering this, any result that leads to a more reliable, highly accurate classification method is valuable in the field of cervical cancer screening.
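One common way to realize such a combination is to concatenate the numeric contextual features with the CNN's penultimate features before the classifier head. A minimal PyTorch sketch under that assumption; the backbone, layer sizes, and class count are illustrative, not the paper's exact architecture.

```python
# Fuse 29 hand-crafted contextual features with CNN-learned local features.
import torch
import torch.nn as nn
from torchvision import models

class FusedCellClassifier(nn.Module):
    def __init__(self, num_classes=2, num_context=29):
        super().__init__()
        backbone = models.resnet18(weights=None)
        feat_dim = backbone.fc.in_features          # 512 for resnet18
        backbone.fc = nn.Identity()                  # keep the penultimate features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(feat_dim + num_context, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image, context):
        local = self.backbone(image)                 # learned local features
        joint = torch.cat([local, context], dim=1)   # concatenate both views
        return self.head(joint)

model = FusedCellClassifier()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 29))
```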


Subject(s)
Early Detection of Cancer , Uterine Cervical Neoplasms , Female , Humans , Neural Networks, Computer , Papanicolaou Test/methods , Uterine Cervical Neoplasms/diagnosis , Vaginal Smears/methods
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2981-2984, 2021 11.
Article in English | MEDLINE | ID: mdl-34891871

ABSTRACT

A low number of annotated training images and class imbalance are common problems faced in many machine learning applications. In this paper, we focus on a clinical dataset in which cells were extracted in previous research. Class imbalance is present within this dataset, since normal cells greatly outnumber abnormal ones. To address both problems, we present our idea of synthetic image generation using a custom variational autoencoder, which also enables the pretraining of the subsequent classifier network. Our method is compared with a performant solution and presented with different modifications. We observed a performance increase of 4.52% in the classification of abnormal cells. Clinical Relevance - We extract images from cervical smears obtained by digitized Pap tests. A single smear of this kind can contain more than 10,000 cells, which are examined manually, going over each cell individually. Our main goal is to build a system that can rank these samples by importance, making the process easier and more effective. The research described in this paper brings us a step closer to achieving that goal.
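A minimal sketch of a convolutional variational autoencoder of the kind the abstract describes, for generating synthetic cell crops; the architecture, crop size (64x64 RGB), and loss are illustrative assumptions rather than the paper's custom design. After training, the encoder weights could initialize a classifier backbone, which is one way to realize the pretraining mentioned above.

```python
# Minimal convolutional VAE; inputs are assumed normalized to [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.enc = nn.Sequential(                      # 3x64x64 -> 128x8x8
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(), nn.Flatten(),
        )
        self.mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.dec_in = nn.Linear(latent_dim, 128 * 8 * 8)
        self.dec = nn.Sequential(                      # 128x8x8 -> 3x64x64
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        recon = self.dec(self.dec_in(z).view(-1, 128, 8, 8))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```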


Subject(s)
Deep Learning , Female , Humans , Machine Learning , Papanicolaou Test , Vaginal Smears
7.
Orv Hetil ; 161(48): 2029-2036, 2020 11 29.
Article in Hungarian | MEDLINE | ID: mdl-33249410

ABSTRACT

INTRODUCTION: Currently, the accurate assessment of the size of the uterus is rather subjective, as the related ultrasound findings show immense differences. However, in several clinical situations it is crucial to accurately describe the size and location of abnormalities and their relationship to specific anatomical landmarks. OBJECTIVE: We aim to develop a unified measurement method that can serve as a guide for examiners, thus reducing variances due to individual variability. Standardized data provide an opportunity for systematic collection, unified processing, systematization, and scientific evaluation, assisting everyday clinical practice and research. METHOD: Based on our own ultrasound examinations of the uterus and on international studies, we propose a unified measurement method that can provide precise, accurate, and reproducible data on the uterus. RESULTS: We have established a measurement procedure with standardized parameters, called Uteromap, which yields objective size data during the ultrasound examination of the uterus. Special attention was given to making the standardized measurement procedure suitable for both general and special cases. During the trial, the data of the first 253 patients were analyzed retrospectively. According to our results, older age correlated with increased uterine height and greater posterior wall thickness. CONCLUSION: We concluded that our standardized measurement method can provide more accurate, objective, and consistent data about the uterus and its lesions without significantly increasing examination time. Continuing this work, we would like to extend the standardized method to everyday practice with the involvement of more examiners, develop it further based on emerging needs and recommendations, and create an internationally accepted, standardized measurement procedure that improves the quality of ultrasound examinations, with the ultimate aim of improving patient safety and the effectiveness of care. Orv Hetil. 2020; 161(48): 2029-2036.


Subject(s)
Ultrasonography/standards , Uterus/diagnostic imaging , Adult , Age Factors , Aged , Female , Humans , Middle Aged , Retrospective Studies
8.
Med Image Anal ; 59: 101561, 2020 01.
Article in English | MEDLINE | ID: mdl-31671320

ABSTRACT

Diabetic Retinopathy (DR) is the most common cause of avoidable vision loss, predominantly affecting the working-age population across the globe. Screening for DR, coupled with timely consultation and treatment, is a globally trusted policy to avoid vision loss. However, implementing DR screening programs is challenging due to the scarcity of medical professionals able to screen a growing global diabetic population at risk for DR. Computer-aided disease diagnosis in retinal image analysis could provide a sustainable approach for such large-scale screening efforts, and recent scientific advances in computing capacity and machine learning provide an avenue for biomedical scientists to reach this goal. Aiming to advance the state of the art in automatic DR diagnosis, a grand challenge on "Diabetic Retinopathy - Segmentation and Grading" was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI 2018). In this paper, we report the set-up and results of this challenge, which is primarily based on the Indian Diabetic Retinopathy Image Dataset (IDRiD). There were three principal sub-challenges: lesion segmentation, disease severity grading, and localization and segmentation of retinal landmarks. These multiple tasks allow the generalizability of algorithms to be tested, which distinguishes this challenge from existing ones. It received a positive response from the scientific community: of 495 registrations, 148 submissions were effectively entered. This paper outlines the challenge, its organization, the dataset used, the evaluation methods, and the results of the top-performing participating solutions, which utilized a blend of clinical information, data augmentation, and ensembles of models. These findings have the potential to enable new developments in retinal image analysis and image-based DR screening in particular.


Subject(s)
Deep Learning , Diabetic Retinopathy/diagnostic imaging , Diagnosis, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Photography , Datasets as Topic , Humans , Pattern Recognition, Automated
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 2699-2702, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946452

ABSTRACT

Diabetic retinopathy (DR) and especially diabetic macular edema (DME) are common causes of vision loss as complications of diabetes. In this work, we consider an ensemble that organizes a convolutional neural network (CNN) and traditional hand-crafted features into a single architecture for retinal image classification. This approach allows the joint training of the CNN and the fine-tuning of the weights of the hand-crafted features to produce the final prediction. Our solution is dedicated to the automatic classification of fundus images according to the severity level of DR and DME. For an objective evaluation, we tested its performance on the official test datasets of the IEEE International Symposium on Biomedical Imaging (ISBI) 2018 Challenge 2 (Diabetic Retinopathy Segmentation and Grading), section B, Disease Grading: classification of fundus images according to the severity level of diabetic retinopathy and diabetic macular edema. In our experiments on the Indian Diabetic Retinopathy Image Dataset (IDRiD), the classification accuracy was 90.07% for the 5-class DR challenge and 96.85% for the 3-class DME one.


Subject(s)
Fundus Oculi , Diabetic Retinopathy , Hand , Humans , Macular Edema , Neural Networks, Computer
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 3705-3708, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30441176

ABSTRACT

Microaneurysms (MAs) are common signs of several diseases, appearing as small, circular, darkish spots in color fundus images. The presence of even a single MA may suggest disease (e.g., diabetic retinopathy); thus, their reliable recognition is a critical issue in both human clinical practice and computer-aided systems. For their automatic recognition, deep learning techniques have become very popular in recent years. In this paper, we also apply such deep convolutional neural network (DCNN) based techniques; however, we organize them into a supernetwork with a fusion-based approach. The member DCNNs are combined by interconnecting them in a joint fully-connected layer. The advantage of this method is that the large architecture can be trained as a single neural network, so each member DCNN is trained while taking the predictions of the other members into consideration. The competitiveness of our approach is validated with experimental studies, in which the ensemble-based system outperformed each member DCNN. As a primary application domain with strong clinical motivation, the methodology was tested for image-level classification: a retinal image is divided into subimages to provide the required inputs for the DCNN-based architecture, and the whole image is labeled as a positive case if the presence of an MA is predicted in any of the subimages. Additionally, we demonstrate how our architecture can be trained to accurately localize MAs by training only on the local neighborhoods of the lesions; empirical tests showing solid performance are also enclosed.
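A minimal sketch of the subimage scheme described above: tile the retinal image, score each tile, and label the whole image positive if any tile is predicted positive. The tile size and the detector interface are assumptions made for illustration.

```python
# Image-level MA labeling from per-tile predictions (any-positive rule).
import numpy as np

def tile(image, size=64):
    """Yield non-overlapping size x size tiles covering the image."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            yield image[y:y + size, x:x + size]

def image_level_prediction(image, tile_classifier, threshold=0.5):
    scores = [tile_classifier(t) for t in tile(image)]   # per-tile MA probability
    return max(scores) >= threshold                      # positive if any tile fires

# Usage with a dummy scorer:
img = np.zeros((512, 512, 3), dtype=np.uint8)
print(image_level_prediction(img, lambda t: float(t.mean() > 0)))
```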


Subject(s)
Diabetic Retinopathy , Microaneurysm , Deep Learning , Fundus Oculi , Humans , Neural Networks, Computer
11.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 49-52, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440338

ABSTRACT

In the past decades, the number of in vitro fertilization (IVF) procedures for the conception of a child has been rising continuously; however, the success rate of artificial insemination has remained low. According to current statistics, a large portion of unsuccessful IVF procedures relates to female factors. As the directly involved female organ, the uterus deserves particularly thorough investigation: visible markers may indicate inflammation or other adverse conditions that jeopardize successful implantation. The purpose of this study is to support the observability of the uterus in this respect by providing computer-aided tools for the extraction of the uterine wall from video hysteroscopy. Methodologically, fully convolutional neural networks (FCNNs) are used for the automatic segmentation of the video frames to determine the region of interest. We provide the necessary steps for applying the general deep learning framework to this specific task. Moreover, we increase segmentation accuracy by applying ensemble-based approaches at two levels. First, the predictions of a given FCNN are aggregated over the overlapping regions of subimages derived from splitting the original images. Next, the segmentation results of different FCNNs are fused via a weighted combination model, and an optimization procedure for adjusting the weights is also provided. In our experiments, we achieved 91.56% segmentation accuracy in recognizing the uterine wall.
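A minimal sketch of the first ensemble level: predictions over overlapping subimages are accumulated and normalized per pixel. Patch size, stride, and the predictor interface are illustrative assumptions, and the image is assumed to be at least one patch in size.

```python
# Accumulate-and-normalize aggregation of overlapping patch predictions.
import numpy as np

def aggregate_patches(image, predict, size=256, stride=128):
    """predict(patch) -> float array (size, size) of foreground probabilities."""
    h, w = image.shape[:2]
    acc = np.zeros((h, w), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.float64)
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            acc[y:y + size, x:x + size] += predict(image[y:y + size, x:x + size])
            cnt[y:y + size, x:x + size] += 1.0
    cnt[cnt == 0] = 1.0                 # pixels never covered stay at 0 probability
    return acc / cnt                    # averaged probability per pixel
```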


Subject(s)
Image Processing, Computer-Assisted , Uterus , Female , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Reproductive Techniques, Assisted , Uterus/anatomy & histology , Uterus/diagnostic imaging
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2018: 2575-2578, 2018 Jul.
Article in English | MEDLINE | ID: mdl-30440934

ABSTRACT

Skin cancer is among the deadliest variants of cancer if not recognized and treated in time. This work focuses on the identification of this disease using an ensemble of state-of-the-art deep learning approaches. More specifically, we propose the aggregation of robust convolutional neural networks (CNNs) into one neural net architecture, where the final classification is achieved based on the weighted output of the member CNNs. Since our framework is realized within a single neural net architecture, all the parameters of the member CNNs and the weights applied in the fusion can be determined by backpropagation, as routinely applied for such tasks. The presented ensemble consists of the CNNs AlexNet, VGGNet, and GoogLeNet, each of which won the most prominent worldwide image classification challenge, ImageNet, in successive years. For an objective evaluation of our approach, we tested its performance on the official test database of the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 challenge on Skin Lesion Analysis Towards Melanoma Detection, dedicated to skin cancer recognition. Our experimental studies show that the proposed approach is competitive in this field; moreover, the ensemble-based approach outperformed all of its member CNNs.
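A minimal PyTorch sketch of the fusion idea: member networks and fusion weights live in one module, so backpropagation updates both jointly. The toy members below stand in for AlexNet/VGGNet/GoogLeNet.

```python
# End-to-end trainable weighted fusion of member CNN outputs.
import torch
import torch.nn as nn

class WeightedFusionEnsemble(nn.Module):
    def __init__(self, members):
        super().__init__()
        self.members = nn.ModuleList(members)
        self.w = nn.Parameter(torch.ones(len(members)))   # learnable fusion weights

    def forward(self, x):
        logits = torch.stack([m(x) for m in self.members])   # (M, B, C)
        weights = torch.softmax(self.w, dim=0)               # keep weights normalized
        return (weights[:, None, None] * logits).sum(dim=0)  # fused logits

# Toy members standing in for the real CNN backbones:
members = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2)) for _ in range(3)]
model = WeightedFusionEnsemble(members)
out = model(torch.randn(4, 3, 32, 32))   # (4, 2)
```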


Subject(s)
Skin Diseases , Deep Learning , Humans , Melanoma , Neural Networks, Computer , Skin Neoplasms
13.
J Biomed Inform ; 86: 25-32, 2018 10.
Article in English | MEDLINE | ID: mdl-30103029

ABSTRACT

Skin cancer is a major public health problem, with over 123,000 newly diagnosed cases worldwide every year. Melanoma is the deadliest form of skin cancer, responsible for over 9,000 deaths in the United States each year. Thus, reliable automatic melanoma screening systems would greatly help clinicians detect malignant skin lesions as early as possible. In the last five years, the efficiency of deep learning-based methods has increased dramatically, and their performance appears to surpass conventional image processing methods in classification tasks. However, this type of machine learning-based approach has a main drawback, namely that it requires thousands of labeled images per class for training. In this paper, we investigate how to create an ensemble of deep convolutional neural networks to further improve their individual accuracies in the task of classifying dermoscopy images into the three classes melanoma, nevus, and seborrheic keratosis when there is no opportunity to train them on an adequate number of annotated images. To achieve high classification accuracy, we fuse the outputs of the classification layers of four different deep neural network architectures. More specifically, we propose the aggregation of robust convolutional neural networks (CNNs) into one framework, where the final classification is achieved based on the weighted output of the member CNNs. For aggregation, we consider different fusion-based methods and select the best-performing one for this problem. Our experimental results also prove that creating an ensemble of different neural networks is a meaningful approach, since each of the applied fusion strategies outperforms the individual networks in classification accuracy. The average area under the receiver operating characteristic curve was found to be 0.891 for the 3-class classification task. For an objective evaluation of our approach, we tested its performance on the official test database of the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 challenge on Skin Lesion Analysis Towards Melanoma Detection, dedicated to skin cancer recognition.
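On the evaluation side, a macro-averaged one-vs-rest ROC AUC for a 3-class problem can be computed with scikit-learn as below; the data are random placeholders, and whether this matches the authors' exact averaging scheme is an assumption.

```python
# Macro-averaged per-class ROC AUC for a 3-class classifier.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=200)            # melanoma / nevus / seb. keratosis
y_prob = rng.dirichlet(np.ones(3), size=200)     # rows sum to 1

auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")
print(f"macro ROC AUC: {auc:.3f}")
```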


Subject(s)
Dermoscopy/methods , Diagnosis, Computer-Assisted/methods , Melanoma/diagnostic imaging , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Algorithms , Databases, Factual , Humans , Image Processing, Computer-Assisted/methods , Keratosis, Seborrheic/diagnostic imaging , Machine Learning , Melanocytes/pathology , Nevus/diagnostic imaging , ROC Curve
14.
Gynecol Obstet Invest ; 83(6): 615-619, 2018.
Article in English | MEDLINE | ID: mdl-29975937

ABSTRACT

AIMS: The study aimed to determine the accuracy of a deep neural network in identifying the plane between myoma and normal myometrium. METHODS: Recorded videos of transcervical resection of myoma in 13 cases were used for the study; the indication for surgery was heavy menstrual bleeding, the hysteroscopic finding was a submucous fibroid, and the operative intervention was fibroid resection. On the surgical images, different structures were marked and annotated for the training phase. After training the deep neural network appropriately with 4,688 images from this training set, 1,600 previously unseen images were used for testing. Different filters and procedures were applied by the fully convolutional neural network (FCNN) to identify the previously annotated structures. RESULTS: The manually annotated images and the manually drawn bitmasks were used to train the applied FCNN, and this pre-trained network was then used for the automatic segmentation of normal myometrium in unseen video frames. The pixel-wise segmentation accuracy reached 86.19% considering the Hausdorff metric. CONCLUSION: Using deep learning techniques to analyze endoscopic video frames could help in the real-time identification of structures while performing endoscopic surgery.
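A minimal sketch of the symmetric Hausdorff distance between two segmentation masks using SciPy, the boundary-sensitive metric family referenced above; how the paper converts this into a percentage accuracy is not specified here.

```python
# Symmetric Hausdorff distance between the foreground pixels of two masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(mask_a, mask_b):
    pts_a = np.argwhere(mask_a)          # (N, 2) foreground coordinates
    pts_b = np.argwhere(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)

a = np.zeros((64, 64), bool); a[10:30, 10:30] = True
b = np.zeros((64, 64), bool); b[12:34, 12:34] = True
print(hausdorff(a, b))
```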


Subject(s)
Hysteroscopy/methods , Image Processing, Computer-Assisted/methods , Leiomyoma/pathology , Myometrium/pathology , Neural Networks, Computer , Adult , Female , Humans , Leiomyoma/surgery , Myometrium/surgery , Predictive Value of Tests , Retrospective Studies , Uterine Myomectomy/methods
15.
Comput Biol Med ; 65: 10-24, 2015 Oct 01.
Article in English | MEDLINE | ID: mdl-26259029

ABSTRACT

In this paper, we propose a combination method for the automatic detection of the optic disc (OD) in fundus images based on ensembles of individual algorithms. We have studied and adapted some of the state-of-the-art OD detectors and organized them into a complex framework in order to maximize the accuracy of OD localization. The detection of the OD can be considered a single-object detection problem: the object can be localized with high accuracy by several algorithms, each extracting a single candidate for the center of the OD, with the final location defined by a simple majority voting rule. To include more information in the final decision, we can use member algorithms that provide multiple candidates, ranked by the confidence the algorithms assign to them. In this case, a spatially weighted graph is defined whose nodes are the candidates, and the final OD position is determined by finding a maximum-weighted clique. Here, we examine how to exploit, within our ensemble-based framework, all the accessible information supplied by the member algorithms by making them return confidence values for each image pixel. These confidence values express the probability that a given pixel is the center point of the object, and we combine them using axiomatic and Bayesian approaches, as in the aggregation of expert judgments in decision and risk analysis. According to our experimental study, the accuracy of OD localization increases further with this approach. Besides single localization, the approach can be adapted for the precise detection of the OD boundary. Comparative experimental results are also given for several publicly available datasets.
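A minimal sketch of pixel-wise confidence aggregation with two classic pooling rules (a linear and a log-linear opinion pool); the paper's axiomatic and Bayesian combination rules are more elaborate, so this is an illustrative simplification.

```python
# Combine per-pixel OD-center confidence maps and localize the peak.
import numpy as np

def combine_confidences(conf_maps, rule="mean"):
    """conf_maps: list of (H, W) arrays in [0, 1]; returns (row, col) of the peak."""
    stack = np.stack(conf_maps)
    if rule == "mean":                      # linear opinion pool
        combined = stack.mean(axis=0)
    elif rule == "product":                 # independent-expert (log-linear) pool
        combined = np.prod(np.clip(stack, 1e-6, 1.0), axis=0)
    else:
        raise ValueError(rule)
    return np.unravel_index(np.argmax(combined), combined.shape)

maps = [np.random.default_rng(s).random((128, 128)) for s in range(4)]
print(combine_confidences(maps, rule="product"))
```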


Subject(s)
Algorithms , Fundus Oculi , Image Processing, Computer-Assisted/methods , Models, Theoretical , Optic Disk , Databases, Factual , Female , Humans , Image Processing, Computer-Assisted/instrumentation , Male , Probability Theory
16.
Comput Biol Med ; 54: 156-71, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25255154

ABSTRACT

In this paper, we propose a method for the automatic detection of exudates in digital fundus images. Our approach can be divided into three stages: candidate extraction, precise contour segmentation, and the labeling of candidates as true or false exudates. For candidate detection, we borrow a grayscale morphology-based method to identify possible regions containing these bright lesions. Then, to extract the precise boundary of the candidates, we introduce a complex active contour-based method: to increase segmentation accuracy, we extract additional possible contours by taking advantage of the diverse behavior of different pre-processing methods. After selecting an appropriate combination of the extracted contours, a region-wise classifier is applied to remove the false exudate candidates. For this task, we consider several region-based features and extract an appropriate feature subset to train a Naïve Bayes classifier, optimized further by an adaptive boosting technique. In our experimental study, the method was tested on publicly available databases both to measure the accuracy of the segmentation of exudate regions and to recognize their presence at the image level. In a quantitative evaluation on publicly available datasets, the proposed approach outperformed several state-of-the-art exudate detection algorithms.
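A minimal scikit-learn sketch of the region-wise classifier described above: a Naïve Bayes learner boosted with AdaBoost over per-region features. The features and data are placeholders, and the `estimator` parameter name assumes scikit-learn >= 1.2 (older releases call it `base_estimator`).

```python
# Boosted Naive Bayes classifier over per-region exudate features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.random((300, 8))                 # e.g., area, mean intensity, contrast, ...
y = rng.integers(0, 2, 300)              # 1 = true exudate, 0 = false candidate

clf = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50, random_state=0)
clf.fit(X, y)
print(clf.predict_proba(X[:5]))
```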


Subject(s)
Artificial Intelligence , Diabetic Retinopathy/pathology , Exudates and Transudates/cytology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Retinoscopy/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
17.
Article in English | MEDLINE | ID: mdl-25569914

ABSTRACT

Diabetic retinopathy (DR) is one of the most common causes of vision loss in developed countries. In an early stage of DR, signs such as exudates appear in retinal images, and an automatic screening system must be capable of detecting these signs properly so that the treatment of patients can begin in time. Exudates show a rich variety of shapes and sizes, which makes their automatic detection more challenging. We propose a method for the automatic segmentation of exudates consisting of a candidate extraction step followed by exact contour detection and region-wise classification. More specifically, we extract possible exudate candidates using grayscale morphology, determine their proper shape with a Markovian segmentation model that considers edge information, and finally label the candidates as true or false by an optimally adjusted SVM classifier. For testing purposes, we considered the publicly available DiaretDB1 database, on which the proposed method outperformed several state-of-the-art exudate detectors.
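A minimal sketch of grayscale-morphology candidate extraction for bright lesions using a white top-hat, in the spirit of the first step described above; the green-channel choice, structuring-element radius, and quantile threshold are illustrative assumptions.

```python
# White top-hat highlights small bright structures; threshold into candidates.
import numpy as np
from skimage import io, morphology

def exudate_candidates(image_path, selem_radius=15, quantile=0.995):
    img = io.imread(image_path)
    green = img[..., 1] if img.ndim == 3 else img     # green channel has good contrast
    tophat = morphology.white_tophat(green, morphology.disk(selem_radius))
    thr = np.quantile(tophat, quantile)               # keep only the brightest residues
    return tophat > thr                               # boolean candidate mask
```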


Subject(s)
Diabetic Retinopathy/diagnosis , Fundus Oculi , Image Interpretation, Computer-Assisted , Diabetic Retinopathy/pathology , Exudates and Transudates , Humans , Markov Chains , Software , Support Vector Machine