Results 1 - 20 of 5,228
1.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the colors of familiar objects are perceived as more saturated, warm colors will be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and in 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relative chromatic contrast of warm colors was higher in paintings than in photographs, consistent with the hypothesis.
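
The kind of comparison described above can be illustrated with a minimal sketch that splits pixels into rough "warm" and "cool" hue bands and compares their saturation; the file name, hue thresholds, and the use of HSV saturation as a stand-in for chromatic contrast are all illustrative assumptions, not the authors' pipeline.

```python
# Sketch: compare saturation of warm- vs cool-hued pixels in an image.
# "painting.jpg" is a placeholder path; hue bands are illustrative only.
import numpy as np
from PIL import Image

img = Image.open("painting.jpg").convert("HSV")
h, s, _ = [np.asarray(c, dtype=float) for c in img.split()]
hue_deg = h / 255.0 * 360.0

# Treat reds/oranges/yellows as "warm", cyans/blues as "cool" (rough bands).
warm = (hue_deg < 90) | (hue_deg > 330)
cool = (hue_deg > 150) & (hue_deg < 270)

print(f"mean saturation, warm pixels: {s[warm].mean():.1f}")
print(f"mean saturation, cool pixels: {s[cool].mean():.1f}")
```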


Subject(s)
Color Perception , Fruit , Paintings , Photography , Humans , Color Perception/physiology , Photography/methods , Color , Contrast Sensitivity/physiology
2.
Transl Vis Sci Technol ; 13(5): 23, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809531

ABSTRACT

Purpose: To develop convolutional neural network (CNN)-based models for predicting the axial length (AL) using color fundus photography (CFP) and explore associated clinical and structural characteristics. Methods: This study enrolled 1105 fundus images from 467 participants with ALs ranging from 19.91 to 32.59 mm, obtained at National Taiwan University Hospital between 2020 and 2021. The AL measurements obtained from a scanning laser interferometer served as the gold standard. The accuracy of prediction was compared among CNN-based models with different inputs, including CFP, age, and/or sex. Heatmaps were generated using integrated gradients. Results: Using age, sex, and CFP as input, the mean absolute error (MAE, mean ± standard deviation) for AL prediction by the model was 0.771 ± 0.128 mm, outperforming models that used age and sex alone (1.263 ± 0.115 mm; P < 0.001) and CFP alone (0.831 ± 0.216 mm; P = 0.016) by 39.0% and 7.31%, respectively. The removal of relatively poor-quality CFPs resulted in a slight MAE reduction to 0.759 ± 0.120 mm without statistical significance (P = 0.24). The inclusion of age and CFP improved prediction accuracy by 5.59% (P = 0.043), whereas adding sex yielded no significant improvement (P = 0.41). The optic disc and temporal peripapillary area were highlighted as the focused areas on the heatmaps. Conclusions: Deep learning-based prediction of AL using CFP was fairly accurate and enhanced by age inclusion. The optic disc and temporal peripapillary area may contain crucial structural information for AL prediction in CFP. Translational Relevance: This study might aid AL assessments and the understanding of the morphologic characteristics of the fundus related to AL.
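
As a hedged illustration of the model family described (a fundus image combined with age and sex to regress axial length), the sketch below builds a small two-branch Keras network; the backbone, layer sizes, and input shape are assumptions for demonstration, not the authors' architecture.

```python
# Sketch: regress axial length from a fundus image plus [age, sex].
# Layer sizes are illustrative; MAE loss matches the metric reported above.
import tensorflow as tf
from tensorflow.keras import layers, Model

img_in = layers.Input(shape=(224, 224, 3), name="cfp")
tab_in = layers.Input(shape=(2,), name="age_sex")

x = layers.Conv2D(32, 3, activation="relu")(img_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

t = layers.Dense(16, activation="relu")(tab_in)

h = layers.Dense(64, activation="relu")(layers.Concatenate()([x, t]))
out = layers.Dense(1, name="axial_length_mm")(h)

model = Model([img_in, tab_in], out)
model.compile(optimizer="adam", loss="mae")
model.summary()
```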


Subject(s)
Axial Length, Eye , Neural Networks, Computer , Photography , Humans , Male , Female , Middle Aged , Adult , Photography/methods , Aged , Axial Length, Eye/diagnostic imaging , Fundus Oculi , Young Adult , Aged, 80 and over
3.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we utilize the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames and of confining the tracking process to this smaller region, achieving a reduction of 87.3% for mild exercise and 79.0% for intense exercise.
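
A minimal sketch of the general idea of searching for a transmitter template only inside a restricted region of the frame, rather than the full image; this uses plain OpenCV template matching, not the paper's algorithm, and the file names and ROI coordinates are placeholders.

```python
# Sketch: search for an LED template only inside a restricted ROI of a frame.
# Paths and ROI coordinates are placeholders.
import cv2

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("led_template.png", cv2.IMREAD_GRAYSCALE)

# ROI predicted from the previous detection / expected exercise motion range.
x0, y0, x1, y1 = 300, 200, 500, 400
roi = frame[y0:y1, x0:x1]

res = cv2.matchTemplate(roi, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(res)

# Translate the ROI-local match back to full-frame coordinates.
top_left = (x0 + max_loc[0], y0 + max_loc[1])
print("match score:", max_val, "at", top_left)
```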


Subject(s)
Algorithms , Exercise , Wearable Electronic Devices , Humans , Exercise/physiology , Image Processing, Computer-Assisted/methods , Photography/instrumentation , Photography/methods , Delivery of Health Care
4.
Transl Vis Sci Technol ; 13(5): 20, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38780955

ABSTRACT

Purpose: We sought to develop an automatic method of quantifying optic disc pallor in fundus photographs and determine associations with peripapillary retinal nerve fiber layer (pRNFL) thickness. Methods: We used deep learning to segment the optic disc, fovea, and vessels in fundus photographs, and measured pallor. We assessed the relationship between pallor and pRNFL thickness derived from optical coherence tomography scans in 118 participants. Separately, we used images diagnosed by clinical inspection as pale (n = 45) and assessed how measurements compared with healthy controls (n = 46). We also developed automatic rejection thresholds and tested the software for robustness to camera type, image format, and resolution. Results: We developed software that automatically quantified disc pallor across several zones in fundus photographs. Pallor was associated with pRNFL thickness globally (β = -9.81; standard error [SE] = 3.16; P < 0.05), in the temporal inferior zone (β = -29.78; SE = 8.32; P < 0.01), with the nasal/temporal ratio (β = 0.88; SE = 0.34; P < 0.05), and in the whole disc (β = -8.22; SE = 2.92; P < 0.05). Furthermore, pallor was significantly higher in the patient group. Last, we demonstrated that the analysis was robust to camera type, image format, and resolution. Conclusions: We developed software that automatically locates and quantifies disc pallor in fundus photographs and found associations between pallor measurements and pRNFL thickness. Translational Relevance: We think our method will be useful for identifying and monitoring the progression of diseases characterized by disc pallor and optic atrophy, including glaucoma, compression, and potentially neurodegenerative disorders.
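
A minimal sketch of the kind of association analysis reported above (β, SE, and P from an ordinary least squares fit of a pallor measurement against pRNFL thickness); the CSV file and column names are hypothetical, and the direction of the regression is an assumption.

```python
# Sketch: association between a pallor measurement and pRNFL thickness,
# reported as beta, standard error, and P value. Column names are placeholders.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("pallor_pRNFL.csv")          # hypothetical per-participant table
X = sm.add_constant(df[["global_pallor"]])    # predictor plus intercept
y = df["global_pRNFL_um"]                     # outcome: global pRNFL thickness

fit = sm.OLS(y, X).fit()
print("beta:", fit.params["global_pallor"])
print("SE:  ", fit.bse["global_pallor"])
print("P:   ", fit.pvalues["global_pallor"])
```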


Subject(s)
Deep Learning , Nerve Fibers , Optic Disk , Photography , Software , Tomography, Optical Coherence , Humans , Optic Disk/diagnostic imaging , Optic Disk/pathology , Tomography, Optical Coherence/methods , Male , Female , Middle Aged , Nerve Fibers/pathology , Photography/methods , Adult , Retinal Ganglion Cells/pathology , Retinal Ganglion Cells/cytology , Aged , Optic Nerve Diseases/diagnostic imaging , Optic Nerve Diseases/diagnosis , Optic Nerve Diseases/pathology , Fundus Oculi
5.
Meat Sci ; 213: 109500, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38582006

ABSTRACT

The objective of this study was to develop calibration models against rib eye traits and independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 different research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes, graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R2) prediction of eye muscle area (EMA) (R2 = 0.89, RMSEP = 4.3 cm2, slope = 0.96, bias = 0.7), MSA marbling (R2 = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8) and chemical IMF% (R2 = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3% and 60.8% of AUS-MEAT marbling, meat colour and fat colour scores, respectively, as equivalent to expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
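
The continuous-trait validation statistics quoted above (R2, RMSEP, slope, bias) can be computed with a short sketch; the example values are hypothetical, and the bias convention (prediction minus reference) is an assumption.

```python
# Sketch: precision/accuracy metrics for continuous-trait validation:
# R^2, RMSEP, regression slope, and mean bias between device and reference.
import numpy as np

def validate(pred, ref):
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    rmsep = np.sqrt(np.mean((pred - ref) ** 2))
    bias = np.mean(pred - ref)
    slope, _ = np.polyfit(ref, pred, 1)
    r2 = np.corrcoef(ref, pred)[0, 1] ** 2
    return {"R2": r2, "RMSEP": rmsep, "slope": slope, "bias": bias}

# Hypothetical example: device-predicted vs. chemically measured IMF%.
print(validate([3.1, 5.4, 8.0, 12.2], [2.9, 5.0, 8.5, 11.6]))
```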


Subject(s)
Adipose Tissue , Color , Muscle, Skeletal , Photography , Red Meat , Animals , Australia , Cattle , Red Meat/analysis , Red Meat/standards , Photography/methods , Calibration , Phenotype , Reproducibility of Results , Ribs
6.
Biomed Eng Online ; 23(1): 32, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38475784

ABSTRACT

PURPOSE: This study aimed to investigate the imaging repeatability of self-service fundus photography compared to traditional fundus photography performed by experienced operators. DESIGN: Prospective cross-sectional study. METHODS: In a community-based eye diseases screening site, we recruited 65 eyes (65 participants) from the resident population of Shanghai, China. All participants were devoid of cataract or any other conditions that could potentially compromise the quality of fundus imaging. Participants were assigned to either the fully self-service fundus photography group or the traditional fundus photography group. Image quantitative analysis software was used to extract clinically relevant indicators from the fundus images. Finally, a statistical analysis was performed to depict the imaging repeatability of fully self-service fundus photography. RESULTS: There was no statistical difference in the absolute differences, or the extents of variation, of the indicators between the two groups. The extents of variation of all the measurement indicators, with the exception of the optic cup area, were below 10% in both groups. The Bland-Altman plots and multivariate analysis results were consistent with the results mentioned above. CONCLUSIONS: The image repeatability of fully self-service fundus photography is comparable to that of traditional fundus photography performed by professionals, demonstrating promise in large-scale eye disease screening programs.
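
A minimal sketch of the repeatability analysis mentioned above (Bland-Altman bias and limits of agreement, plus a percentage extent of variation); the paired measurements are hypothetical, and the exact indicator definitions are assumptions.

```python
# Sketch: Bland-Altman limits of agreement between two repeated measurements
# of the same fundus indicator (e.g., optic disc area). Data are hypothetical.
import numpy as np

m1 = np.array([2.10, 1.95, 2.40, 2.22, 2.05])   # first capture
m2 = np.array([2.12, 1.99, 2.35, 2.25, 2.01])   # repeat capture

diff = m1 - m2
mean_diff = diff.mean()
loa = 1.96 * diff.std(ddof=1)
variation_pct = 100 * np.abs(diff) / ((m1 + m2) / 2)

print(f"bias {mean_diff:.3f}, limits of agreement ±{loa:.3f}")
print(f"mean extent of variation {variation_pct.mean():.1f}%")
```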


Subject(s)
Community Health Services , Glaucoma , Humans , Cross-Sectional Studies , Prospective Studies , China , Photography/methods , Fundus Oculi
7.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38527337

ABSTRACT

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that could assist users in taking a standardized clinical photograph. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgery/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, EPIC Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal view) were built into digital templates and are user selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses was included to pilot the application in the outpatient clinic setting using ImageAssist on their smartphones. After using the app, an internal survey was used to gain feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area. CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and is integrated into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of the current image capture functionality and development of a stand-alone mobile device application.


Subject(s)
Mobile Applications , Plastic Surgery Procedures , Surgery, Plastic , Humans , United States , Smartphone , Photography/methods
8.
Retina ; 44(6): 1092-1099, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38320305

ABSTRACT

PURPOSE: To observe the diagnostic value of multispectral fundus imaging (MSI) in hypertensive retinopathy (HR). METHODS: A total of 100 patients with HR were enrolled in this cross-sectional study, and all participants received fundus photography and MSI. Participants with severe HR received fundus fluorescein angiography (FFA). The diagnostic consistency between fundus photography and MSI in the diagnosis of HR was calculated. The sensitivity of MSI in the diagnosis of severe HR was calculated by comparison with FFA. The choroidal vascular index was calculated in patients with HR using MSI at 780 nm. RESULTS: MSI and fundus photography were highly concordant in the diagnosis of HR, with a kappa value of 0.883. MSI had a sensitivity of 96% in diagnosing retinal hemorrhage, a sensitivity of 89.47% in diagnosing retinal exudation, a sensitivity of 100% in diagnosing vascular compression indentation, and a sensitivity of 96.15% in diagnosing retinal arteriosclerosis. The choroidal vascular index of the patients in the HR group was significantly lower than that of the control group, whereas there was no significant difference between the affected and fellow eyes. CONCLUSION: As a noninvasive modality of observation, MSI may be a new tool for the diagnosis and assessment of HR.
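
The agreement and sensitivity figures above can be reproduced from paired gradings with a short sketch; the labels below are hypothetical and assume a binary per-eye grading against a reference modality.

```python
# Sketch: agreement (Cohen's kappa) and sensitivity of MSI against a reference
# grading, from paired binary labels. Labels here are hypothetical.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

reference = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]   # e.g., fundus photography / FFA
msi       = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]

kappa = cohen_kappa_score(reference, msi)
tn, fp, fn, tp = confusion_matrix(reference, msi).ravel()
sensitivity = tp / (tp + fn)

print(f"kappa = {kappa:.3f}, sensitivity = {sensitivity:.1%}")
```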


Subject(s)
Fluorescein Angiography , Fundus Oculi , Hypertensive Retinopathy , Humans , Cross-Sectional Studies , Female , Male , Middle Aged , Fluorescein Angiography/methods , Hypertensive Retinopathy/diagnosis , Aged , Adult , Photography/methods , Retinal Vessels/diagnostic imaging , Retinal Vessels/pathology
9.
Ophthalmic Surg Lasers Imaging Retina ; 55(5): 263-269, 2024 May.
Article in English | MEDLINE | ID: mdl-38408222

ABSTRACT

BACKGROUND AND OBJECTIVE: Color fundus photography is an important imaging modality that is currently limited by a narrow dynamic range. We describe a post-image processing technique to generate high dynamic range (HDR) retinal images with enhanced detail. PATIENTS AND METHODS: This was a retrospective, observational case series evaluating fundus photographs of patients with macular pathology. Photographs were acquired with three or more exposure values using a commercially available camera (Topcon 50-DX). Images were aligned and imported into HDR processing software (Photomatix Pro). Fundus detail was compared between HDR and raw photographs. RESULTS: Sixteen eyes from 10 patients (5 male, 5 female; mean age 59.4 years) were analyzed. Clinician graders preferred the HDR image 91.7% of the time (44/48 image comparisons), with good grader agreement (81.3%, 13/16 eyes). CONCLUSIONS: HDR fundus imaging is feasible using images from existing fundus cameras and may be useful for enhanced visualization of retinal detail in a variety of pathologic states. [Ophthalmic Surg Lasers Imaging Retina 2024;55:263-269.].
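
A minimal sketch of one way to obtain an HDR-style fundus image from bracketed exposures, here using OpenCV's exposure-fusion tools rather than the commercial software named in the study; file names are placeholders.

```python
# Sketch: exposure fusion of bracketed fundus photographs with OpenCV.
# File names are placeholders; Mertens fusion needs no exposure metadata.
import cv2

exposures = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

cv2.createAlignMTB().process(exposures, exposures)     # align bracketed shots

fusion = cv2.createMergeMertens().process(exposures)   # float image in [0, 1]
cv2.imwrite("hdr_fundus.png", (fusion * 255).clip(0, 255).astype("uint8"))
```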


Subject(s)
Fundus Oculi , Photography , Humans , Female , Retrospective Studies , Male , Middle Aged , Photography/methods , Aged , Retinal Diseases/diagnosis , Image Processing, Computer-Assisted/methods , Adult , Retina/diagnostic imaging , Retina/pathology , Diagnostic Techniques, Ophthalmological
10.
Behav Res Methods ; 56(4): 3861-3872, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38332413

ABSTRACT

Over the last 40 years, object recognition studies have moved from using simple line drawings, to more detailed illustrations, to more ecologically valid photographic representations. Researchers now have access to various stimulus sets; however, existing sets lack the ability to independently manipulate item format, as the concepts depicted are unique to the set they derive from. To enable such comparisons, Rossion and Pourtois (2004) revisited Snodgrass and Vanderwart's (1980) line drawings and digitally re-drew the objects, adding texture and shading. In the current study, we took this further and created a set of stimuli that showcase the same objects in photographic form. We selected six photographs of each object (three color/three grayscale) and collected normative data and reaction times (RTs). Naming accuracy and agreement were high for all photographs and appeared to steadily increase with format distinctiveness. In contrast to previous data patterns for drawings, naming agreement (H values) did not differ between grayscale and color photographs, nor did familiarity ratings. However, grayscale photographs received significantly lower mental imagery agreement and visual complexity scores than color photographs. This suggests that, in comparison to drawings, the ecological nature of photographs may facilitate deeper critical evaluation of whether they offer a good match to a mental representation. Color may therefore play a more vital role in photographs than in drawings, aiding participants in judging the match with their mental representation. This new photographic stimulus set and corresponding normative data provide valuable materials for a wide range of experimental studies of object recognition.
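
The naming-agreement H statistic mentioned above is conventionally the Shannon entropy of the distribution of names given to a picture; a minimal sketch with hypothetical responses:

```python
# Sketch: naming-agreement H statistic (Shannon entropy over the names given
# to one picture); H = 0 means every participant used the same name.
import math
from collections import Counter

def h_statistic(names):
    counts = Counter(names)
    n = len(names)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

# Hypothetical naming responses for one photograph.
print(h_statistic(["apple"] * 18 + ["fruit"] * 2))   # low H: high agreement
```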


Subject(s)
Pattern Recognition, Visual , Photic Stimulation , Photography , Recognition, Psychology , Humans , Male , Female , Photography/methods , Recognition, Psychology/physiology , Pattern Recognition, Visual/physiology , Adult , Reaction Time/physiology , Young Adult , Adolescent
11.
Int Ophthalmol ; 44(1): 41, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38334896

ABSTRACT

Diabetic retinopathy (DR) is the leading global cause of vision loss, accounting for 4.8% of global blindness cases as estimated by the World Health Organization (WHO). Fundus photography is crucial in ophthalmology as a diagnostic tool for capturing retinal images. However, resource and infrastructure constraints limit access to traditional tabletop fundus cameras in developing countries. Additionally, these conventional cameras are expensive, bulky, and not easily transportable. In contrast, the newer generation of handheld and smartphone-based fundus cameras offers portability, user-friendliness, and affordability. Despite their potential, there is a lack of comprehensive review studies examining the clinical utilities of these handheld (e.g. Zeiss Visuscout 100, Volk Pictor Plus, Volk Pictor Prestige, Remidio NMFOP, FC161) and smartphone-based (e.g. D-EYE, iExaminer, Peek Retina, Volk iNview, Volk Vistaview, oDocs visoScope, oDocs Nun, oDocs Nun IR) fundus cameras. This review aims to fill that gap by evaluating the feasibility, practicality, efficiency, cost-effectiveness, and remote capabilities of the available handheld and smartphone-based cameras across a range of clinical settings and use scenarios, emphasizing their advantages over traditional tabletop fundus cameras and ultimately enhancing the accessibility of ophthalmic services.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Eye Diseases , Humans , Diabetic Retinopathy/diagnosis , Smartphone , Fundus Oculi , Retina , Eye Diseases/diagnosis , Photography/methods , Blindness
12.
Burns ; 50(4): 966-979, 2024 May.
Article in English | MEDLINE | ID: mdl-38331663

ABSTRACT

AIM: This study was conducted to determine the segmentation, classification, object detection, and accuracy of skin burn images using artificial intelligence and a mobile application. With this study, individuals were able to determine the degree of burns and see how to intervene through the mobile application. METHODS: This research was conducted between 26 October 2021 and 1 September 2023. In this study, the dataset was handled in two stages. In the first stage, the open-access dataset was taken from https://universe.roboflow.com/, and the burn images dataset was created. In the second stage, in order to determine the accuracy of the developed system and artificial intelligence model, patients admitted to the hospital were assessed with the Burn Wound Detection Android application of our own design. RESULTS: In our study, the YOLO V7 architecture was used for segmentation, classification, and object detection. The study includes 21,018 data points, of which 80% were used as training data and 20% as test data. The YOLO V7 model achieved a success rate of 75.12% on the test data. The Burn Wound Detection Android mobile application that we developed in the study was used to accurately detect burn wounds in images of individuals. CONCLUSION: In this study, skin burn images were segmented, classified, and object-detected, and a mobile application was developed using artificial intelligence. First aid is crucial in burn cases, and it is an important development for public health that people living in the periphery can quickly determine the degree of burn through the mobile application and provide first aid according to its instructions.
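
The 80/20 partition described above can be illustrated with a short stratified-split sketch; the file names and burn-degree labels are hypothetical, and this is not the authors' data pipeline.

```python
# Sketch: an 80/20 train/test split of labelled burn images, stratified by
# burn degree so each split keeps the class balance. Data are placeholders.
from sklearn.model_selection import train_test_split

image_paths = [f"burn_{i:04d}.jpg" for i in range(12)]
degrees     = [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]   # burn degree per image

train_x, test_x, train_y, test_y = train_test_split(
    image_paths, degrees, test_size=0.20, stratify=degrees, random_state=42)

print(len(train_x), "training /", len(test_x), "test images")
```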


Subject(s)
Artificial Intelligence , Burns , Mobile Applications , Burns/classification , Burns/diagnostic imaging , Burns/pathology , Humans , Photography/methods
14.
BMC Med Inform Decis Mak ; 24(1): 25, 2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38273286

ABSTRACT

BACKGROUND: The epiretinal membrane (ERM) is a common retinal disorder characterized by abnormal fibrocellular tissue at the vitreomacular interface. Most patients with ERM are asymptomatic at early stages. Therefore, screening for ERM will become increasingly important. Despite the high prevalence of ERM, few deep learning studies have investigated ERM detection in the color fundus photography (CFP) domain. In this study, we built a generative model to enhance ERM detection performance in the CFP. METHODS: This deep learning study retrospectively collected 302 ERM and 1,250 healthy CFP data points from a healthcare center. The generative model using StyleGAN2 was trained using single-center data. EfficientNetB0 with StyleGAN2-based augmentation was validated using independent internal single-center data and external datasets. We randomly assigned healthcare center data to the development (80%) and internal validation (20%) datasets. Data from two publicly accessible sources were used as external validation datasets. RESULTS: StyleGAN2 facilitated realistic CFP synthesis with the characteristic cellophane reflex features of the ERM. The proposed method with StyleGAN2-based augmentation outperformed the typical transfer learning without a generative adversarial network. The proposed model achieved an area under the receiver operating characteristic curve (AUC) of 0.926 for internal validation. AUCs of 0.951 and 0.914 were obtained for the two external validation datasets. Compared with the deep learning model without augmentation, StyleGAN2-based augmentation improved the detection performance and contributed to the focus on the location of the ERM. CONCLUSIONS: We proposed an ERM detection model by synthesizing realistic CFP images with the pathological features of ERM through generative deep learning. We believe that our deep learning framework will help achieve a more accurate detection of ERM in a limited data setting.
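
A hedged sketch of the augmentation-then-train idea: an EfficientNetB0 classifier fitted on a directory that mixes real photographs with previously synthesized images; the directory layout, hyperparameters, and use of ImageNet weights are assumptions, not the study's training recipe.

```python
# Sketch: train an EfficientNetB0 classifier on real fundus photographs
# augmented with GAN-synthesized ERM images (generated offline, e.g. StyleGAN2).
# Directory name and class subfolders (erm/, healthy/) are placeholders.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_real_plus_synthetic",
    label_mode="binary", image_size=(224, 224), batch_size=16)

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", pooling="avg")
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, epochs=5)
```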


Subject(s)
Deep Learning , Epiretinal Membrane , Humans , Epiretinal Membrane/diagnostic imaging , Retrospective Studies , Diagnostic Techniques, Ophthalmological , Photography/methods
15.
Klin Monbl Augenheilkd ; 241(1): 75-83, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38242135

ABSTRACT

Cataract is among the leading causes of visual impairment worldwide. Innovations in treatment have drastically improved patient outcomes, but to be properly implemented, it is necessary to have the right diagnostic tools. This review explores the cataract grading systems developed by researchers in recent decades and provides insight into both merits and limitations. To this day, the gold standard for cataract classification is the Lens Opacity Classification System III. Different cataract features are graded according to standard photographs during slit lamp examination. Although widely used in research, its clinical application is rare, and it is limited by its subjective nature. Meanwhile, recent advancements in imaging technology, notably Scheimpflug imaging and optical coherence tomography, have opened the possibility of objective assessment of lens structure. With the use of automatic lens anatomy detection software, researchers demonstrated a good correlation to functional and surgical metrics such as visual acuity, phacoemulsification energy, and surgical time. The development of deep learning networks has further increased the capability of these grading systems by improving interpretability and increasing robustness when applied to norm-deviating cases. These classification systems, which can be used for both screening and preoperative diagnostics, are of value for targeted prospective studies, but still require implementation and validation in everyday clinical practice.


Subject(s)
Cataract , Lens, Crystalline , Phacoemulsification , Humans , Prospective Studies , Photography/methods , Cataract/diagnosis , Visual Acuity , Phacoemulsification/methods
16.
J Biomed Opt ; 29(Suppl 1): S11524, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38292055

ABSTRACT

Significance: Compressed ultrafast photography (CUP) is currently the world's fastest single-shot imaging technique. Through the integration of compressed sensing and streak imaging, CUP can capture a transient event in a single camera exposure with imaging speeds from thousands to trillions of frames per second, at micrometer-level spatial resolutions, and in broad sensing spectral ranges. Aim: This tutorial aims to provide a comprehensive review of CUP in its fundamental methods, system implementations, biomedical applications, and prospect. Approach: A step-by-step guideline to CUP's forward model and representative image reconstruction algorithms is presented with sample codes and illustrations in Matlab and Python. Then, CUP's hardware implementation is described with a focus on the representative techniques, advantages, and limitations of the three key components-the spatial encoder, the temporal shearing unit, and the two-dimensional sensor. Furthermore, four representative biomedical applications enabled by CUP are discussed, followed by the prospect of CUP's technical advancement. Conclusions: CUP has emerged as a state-of-the-art ultrafast imaging technology. Its advanced imaging ability and versatility contribute to unprecedented observations and new applications in biomedicine. CUP holds great promise in improving technical specifications and facilitating the investigation of biomedical processes.
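
The CUP forward model outlined in the tutorial (spatial encoding, temporal shearing, then time integration on the detector) can be sketched on a toy scene cube; the array sizes and the random binary mask are illustrative, not the tutorial's sample code.

```python
# Sketch: CUP forward model on a toy transient scene x(t, y, x):
# spatial encoding with a pseudo-random mask (C), temporal shearing (S),
# and time integration on the detector (T).
import numpy as np

T_frames, H, W = 8, 32, 32
scene = np.random.rand(T_frames, H, W)                # transient scene
mask = (np.random.rand(H, W) > 0.5).astype(float)     # spatial encoder C

detector = np.zeros((H + T_frames - 1, W))            # sheared, integrated image
for t in range(T_frames):
    encoded = mask * scene[t]                         # C x
    detector[t:t + H, :] += encoded                   # shear by t rows, then sum

print("single-shot measurement shape:", detector.shape)
```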


Subject(s)
Image Processing, Computer-Assisted , Photography , Photography/methods , Image Processing, Computer-Assisted/methods , Algorithms
17.
J Invest Dermatol ; 144(6): 1200-1207, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38231164

ABSTRACT

Artificial intelligence (AI) algorithms for skin lesion classification have reported accuracy at par with and even outperformance of expert dermatologists in experimental settings. However, the majority of algorithms do not represent real-world clinical approach where skin phenotype and clinical background information are considered. We review the current state of AI for skin lesion classification and present opportunities and challenges when applied to total body photography (TBP). AI in TBP analysis presents opportunities for intrapatient assessment of skin phenotype and holistic risk assessment by incorporating patient-level metadata, although challenges exist for protecting patient privacy in algorithm development and improving explainable AI methods.


Subject(s)
Algorithms , Artificial Intelligence , Photography , Humans , Photography/methods , Skin/diagnostic imaging , Skin/pathology , Skin Diseases/diagnosis , Skin Diseases/diagnostic imaging , Whole Body Imaging/methods , Image Processing, Computer-Assisted/methods
18.
IEEE Trans Med Imaging ; 43(5): 1945-1957, 2024 May.
Article in English | MEDLINE | ID: mdl-38206778

ABSTRACT

Color fundus photography (CFP) and optical coherence tomography (OCT) images are two of the most widely used modalities in the clinical diagnosis and management of retinal diseases. Despite the widespread use of multimodal imaging in clinical practice, few methods for automated diagnosis of eye diseases utilize correlated and complementary information from multiple modalities effectively. This paper explores how to leverage the information from CFP and OCT images to improve the automated diagnosis of retinal diseases. We propose a novel multimodal learning method, named geometric correspondence-based multimodal learning network (GeCoM-Net), to achieve the fusion of CFP and OCT images. Specifically, inspired by clinical observations, we consider the geometric correspondence between the OCT slice and the CFP region to learn the correlated features of the two modalities for robust fusion. Furthermore, we design a new feature selection strategy to extract discriminative OCT representations by automatically selecting the important feature maps from OCT slices. Unlike the existing multimodal learning methods, GeCoM-Net is the first method that explicitly formulates the geometric relationships between the OCT slice and the corresponding region of the CFP image for CFP and OCT fusion. Experiments have been conducted on a large-scale private dataset and a publicly available dataset to evaluate the effectiveness of GeCoM-Net for diagnosing diabetic macular edema (DME), impaired visual acuity (VA) and glaucoma. The empirical results show that our method outperforms the current state-of-the-art multimodal learning methods, improving the AUROC score by 0.4%, 1.9% and 2.9% for DME, VA and glaucoma detection, respectively.
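
For orientation only, the sketch below shows the simplest form of CFP-OCT feature fusion: plain concatenation of two feature vectors followed by a classifier. It is a generic stand-in and deliberately omits the geometric-correspondence and feature-selection mechanisms that define GeCoM-Net.

```python
# Sketch: minimal two-branch fusion head that concatenates a CFP feature
# vector with an OCT feature vector before classification (PyTorch).
import torch
import torch.nn as nn

class SimpleFusionHead(nn.Module):
    def __init__(self, cfp_dim=512, oct_dim=512, n_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(cfp_dim + oct_dim, 256), nn.ReLU(),
            nn.Linear(256, n_classes))

    def forward(self, cfp_feat, oct_feat):
        return self.classifier(torch.cat([cfp_feat, oct_feat], dim=1))

logits = SimpleFusionHead()(torch.randn(4, 512), torch.randn(4, 512))
print(logits.shape)   # torch.Size([4, 2])
```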


Subject(s)
Image Interpretation, Computer-Assisted , Multimodal Imaging , Tomography, Optical Coherence , Humans , Tomography, Optical Coherence/methods , Multimodal Imaging/methods , Image Interpretation, Computer-Assisted/methods , Algorithms , Retinal Diseases/diagnostic imaging , Retina/diagnostic imaging , Machine Learning , Photography/methods , Diagnostic Techniques, Ophthalmological , Databases, Factual
19.
Retina ; 44(6): 1034-1044, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38261816

ABSTRACT

BACKGROUND/PURPOSE: To evaluate the performance of a deep learning algorithm for the automated detection and grading of vitritis on ultrawide-field imaging. METHODS: Cross-sectional noninterventional study. Ultrawide-field fundus retinophotographs of uveitis patients were used. Vitreous haze was defined according to the six steps of the Standardization of Uveitis Nomenclature classification. The deep learning framework TensorFlow and the DenseNet121 convolutional neural network were used to perform the classification task. The best-fitting model was tested in a validation study. RESULTS: One thousand one hundred eighty-one images were included. The performance of the model for the detection of vitritis was good, with a sensitivity of 91%, a specificity of 89%, an accuracy of 0.90, and an area under the receiver operating characteristics curve of 0.97. When used on an external set of images, the accuracy for the detection of vitritis was 0.78. The accuracy to classify vitritis into one of the six Standardization of Uveitis Nomenclature grades was limited (0.61) but improved to 0.75 when the grades were grouped into three categories. When accepting an error of one grade, the accuracy for the six-class classification increased to 0.90, suggesting the need for a larger sample to improve the model performance. CONCLUSION: We describe a new deep learning model based on ultrawide-field fundus imaging that provides an efficient tool for the detection of vitritis. The performance of the model for grading into three categories of increasing vitritis severity was acceptable. The performance for the six-class grading of vitritis was limited but can probably be improved with a larger set of images.
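
The three accuracy figures discussed above (exact six-class, grouped three-class, and within-one-grade) can be computed as in the sketch below; the grade vectors and the grouping boundaries are hypothetical, not the study's definitions.

```python
# Sketch: exact, within-one-grade, and grouped (three-category) accuracy for a
# six-level vitreous-haze grading. Grade vectors here are hypothetical.
import numpy as np

true = np.array([0, 1, 2, 3, 4, 5, 2, 1, 0, 3])
pred = np.array([0, 2, 2, 3, 5, 5, 1, 1, 0, 2])

exact = np.mean(pred == true)
within_one = np.mean(np.abs(pred - true) <= 1)
group = lambda g: np.digitize(g, [2, 4])        # grades 0-1, 2-3, 4-5 -> 3 bins
grouped = np.mean(group(pred) == group(true))

print(f"exact {exact:.2f}, ±1 grade {within_one:.2f}, 3-class {grouped:.2f}")
```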


Subject(s)
Deep Learning , Fundus Oculi , Humans , Cross-Sectional Studies , Female , Male , Photography/methods , Vitreous Body/pathology , Vitreous Body/diagnostic imaging , Adult , ROC Curve , Middle Aged , Eye Diseases/diagnosis , Eye Diseases/classification , Eye Diseases/diagnostic imaging , Uveitis/diagnosis , Uveitis/classification , Algorithms , Neural Networks, Computer
20.
Eye (Lond) ; 38(8): 1471-1476, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38297154

ABSTRACT

AIM: To assess the performance of smartphone-based wide-field retinal imaging (WFI) versus ultra-wide-field imaging (UWFI) for assessment of sight-threatening diabetic retinopathy (STDR) as well as for locating predominantly peripheral lesions (PPL) of DR. METHODS: Individuals with type 2 diabetes with varying grades of DR underwent nonmydriatic UWFI with the Daytona Plus camera followed by mydriatic WFI with the smartphone-based Vistaro camera at a tertiary care diabetes centre in South India in 2021-22. Grading of DR as well as identification of PPL (DR lesions beyond the posterior pole) in the retinal images of both cameras was performed by senior retina specialists. STDR was defined by the presence of severe non-proliferative DR, proliferative DR or diabetic macular oedema (DME). The sensitivity and specificity of smartphone-based WFI for detection of PPL and STDR were assessed. Agreement between the graders for both cameras was compared. RESULTS: Retinal imaging was carried out in 318 eyes of 160 individuals (mean age 54.7 ± 9 years; mean duration of diabetes 16.6 ± 7.9 years). The sensitivity and specificity for detection of STDR by the Vistaro camera were 92.7% (95% CI 80.1-98.5) and 96.6% (95% CI 91.5-99.1), respectively, and 95.1% (95% CI 83.5-99.4) and 95.7% (95% CI 90.3-98.6) by the Daytona Plus, respectively. PPL were detected in 89 (27.9%) eyes by WFI with the Vistaro camera and in 160 (50.3%) eyes by UWFI. However, this did not translate into any significant difference in the grading of STDR between the two imaging systems. With both devices, PPL were most common in the supero-temporal quadrant (34%). The prevalence of PPL increased with increasing severity of DR with both cameras (p < 0.001). The kappa comparison between the two graders for varying grades of severity of DR was 0.802 (p < 0.001) for the Vistaro and 0.753 (p < 0.001) for the Daytona Plus camera. CONCLUSION: Mydriatic smartphone-based wide-field imaging has high sensitivity and specificity for detecting STDR and can be used to screen for peripheral retinal lesions beyond the posterior pole in individuals with diabetes.
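
Sensitivity and specificity with 95% confidence intervals, as quoted above, can be derived from a 2x2 table as in the sketch below; the counts are hypothetical, and the Wilson interval is an assumption (the abstract does not state the CI method).

```python
# Sketch: sensitivity and specificity with 95% confidence intervals for
# STDR detection against a reference grading. Counts here are hypothetical.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 38, 3      # reference-positive eyes: detected / missed
tn, fp = 256, 9     # reference-negative eyes: correctly negative / false alarms

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_ci = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_ci = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")

print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
```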


Subject(s)
Diabetic Retinopathy , Photography , Smartphone , Humans , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/diagnostic imaging , Middle Aged , Female , Male , Photography/instrumentation , Photography/methods , Diabetes Mellitus, Type 2/complications , Aged , Severity of Illness Index , Adult , India , Sensitivity and Specificity , Fundus Oculi , Fluorescein Angiography/methods , Reproducibility of Results