Results 1 - 20 of 5,234
1.
Transl Vis Sci Technol ; 13(5): 23, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38809531

ABSTRACT

Purpose: To develop convolutional neural network (CNN)-based models for predicting the axial length (AL) using color fundus photography (CFP) and explore associated clinical and structural characteristics. Methods: This study enrolled 1105 fundus images from 467 participants with ALs ranging from 19.91 to 32.59 mm, obtained at National Taiwan University Hospital between 2020 and 2021. The AL measurements obtained from a scanning laser interferometer served as the gold standard. The accuracy of prediction was compared among CNN-based models with different inputs, including CFP, age, and/or sex. Heatmaps were interpreted by integrated gradients. Results: Using age, sex, and CFP as input, the mean absolute error (MAE, mean ± standard deviation) for AL prediction by the model was 0.771 ± 0.128 mm, outperforming the models that used age and sex alone (1.263 ± 0.115 mm; P < 0.001) and CFP alone (0.831 ± 0.216 mm; P = 0.016) by 39.0% and 7.31%, respectively. The removal of relatively poor-quality CFPs resulted in a slight MAE reduction to 0.759 ± 0.120 mm without statistical significance (P = 0.24). The inclusion of age alongside CFP improved prediction accuracy by 5.59% (P = 0.043), whereas adding sex yielded no significant improvement (P = 0.41). The optic disc and temporal peripapillary area were highlighted as the focused areas on the heatmaps. Conclusions: Deep learning-based prediction of AL using CFP was fairly accurate and was enhanced by the inclusion of age. The optic disc and temporal peripapillary area may contain structural information crucial for AL prediction from CFP. Translational Relevance: This study may aid AL assessments and the understanding of the morphologic characteristics of the fundus related to AL.
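The reported percentage gains follow directly from the quoted MAEs; a minimal Python check using only the values in the abstract (the small discrepancy for the CFP-only comparison presumably reflects rounding in the published MAEs):

```python
def relative_improvement(baseline_mae: float, model_mae: float) -> float:
    """Percentage reduction in mean absolute error relative to a baseline."""
    return (baseline_mae - model_mae) / baseline_mae * 100

# MAEs (mm) quoted in the abstract
combined = 0.771   # CFP + age + sex
age_sex  = 1.263   # age and sex alone
cfp_only = 0.831   # CFP alone

print(round(relative_improvement(age_sex, combined), 1))   # 39.0, as reported
print(round(relative_improvement(cfp_only, combined), 2))  # 7.22 vs. the reported 7.31
```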


Subjects
Axial Length, Eye; Neural Networks, Computer; Photography; Humans; Male; Female; Middle Aged; Adult; Photography/methods; Aged; Axial Length, Eye/diagnostic imaging; Fundus Oculi; Young Adult; Aged, 80 and over
2.
Transl Vis Sci Technol ; 13(5): 20, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38780955

ABSTRACT

Purpose: We sought to develop an automatic method of quantifying optic disc pallor in fundus photographs and determine associations with peripapillary retinal nerve fiber layer (pRNFL) thickness. Methods: We used deep learning to segment the optic disc, fovea, and vessels in fundus photographs, and measured pallor. We assessed the relationship between pallor and pRNFL thickness derived from optical coherence tomography scans in 118 participants. Separately, we used images diagnosed by clinical inspection as pale (n = 45) and assessed how measurements compared with healthy controls (n = 46). We also developed automatic rejection thresholds and tested the software for robustness to camera type, image format, and resolution. Results: We developed software that automatically quantified disc pallor across several zones in fundus photographs. Pallor was associated with pRNFL thickness globally (β = -9.81; standard error [SE] = 3.16; P < 0.05), in the temporal inferior zone (β = -29.78; SE = 8.32; P < 0.01), with the nasal/temporal ratio (β = 0.88; SE = 0.34; P < 0.05), and in the whole disc (β = -8.22; SE = 2.92; P < 0.05). Furthermore, pallor was significantly higher in the patient group. Lastly, we demonstrated that the analysis is robust to camera type, image format, and resolution. Conclusions: We developed software that automatically locates and quantifies disc pallor in fundus photographs and found associations between pallor measurements and pRNFL thickness. Translational Relevance: We think our method will be useful for identifying and monitoring the progression of diseases characterized by disc pallor and optic atrophy, including glaucoma, compression, and potentially neurodegenerative disorders.


Subjects
Deep Learning; Nerve Fibers; Optic Disk; Photography; Software; Tomography, Optical Coherence; Humans; Optic Disk/diagnostic imaging; Optic Disk/pathology; Tomography, Optical Coherence/methods; Male; Female; Middle Aged; Nerve Fibers/pathology; Photography/methods; Adult; Retinal Ganglion Cells/pathology; Retinal Ganglion Cells/cytology; Aged; Optic Nerve Diseases/diagnostic imaging; Optic Nerve Diseases/diagnosis; Optic Nerve Diseases/pathology; Fundus Oculi
3.
Sci Rep ; 14(1): 12304, 2024 05 29.
Article in English | MEDLINE | ID: mdl-38811714

ABSTRACT

Recent advances in artificial intelligence (AI) enable the generation of realistic facial images that can be used in police lineups. The use of AI image generation offers pragmatic advantages in that it allows practitioners to generate filler images directly from the description of the culprit using text-to-image generation, avoids the violation of identity rights of natural persons who are not suspects, and eliminates the constraints of being bound to a database with a limited set of photographs. However, the risk exists that using AI-generated filler images provokes more biased selection of the suspect if eyewitnesses are able to distinguish AI-generated filler images from the photograph of the suspect's face. Using a model-based analysis, we compared biased suspect selection directly between lineups with AI-generated filler images and lineups with database-derived filler photographs. The results show that the lineups with AI-generated filler images were perfectly fair and, in fact, led to less biased suspect selection than the lineups with database-derived filler photographs used in previous experiments. These results are encouraging with regard to the potential of AI image generation for constructing fair lineups and should inspire more systematic research on the feasibility of adopting AI technology in forensic settings.


Subjects
Artificial Intelligence; Face; Humans; Image Processing, Computer-Assisted/methods; Photography/methods; Police; Databases, Factual; Forensic Sciences/methods; Female; Crime
4.
J Craniofac Surg ; 35(4): e376-e380, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38722365

ABSTRACT

OBJECTIVE: Orthognathic surgery is a viable and reproducible treatment for facial deformities. Despite the precision of the skeletal planning of surgical procedures, there is little information about the relations between hard and soft tissues in three-dimensional (3D) analysis, resulting in unpredictable soft tissue outcomes. Three-dimensional photography is a viable tool for soft tissue analysis because it is easy to use, widely available, low cost, and harmless. This review aims to establish parameters for acquiring consistent and reproducible 3D facial images. METHODS: A scoping review was conducted across the PubMed, SCOPUS, Scientific Electronic Library Online (SciELO), and Web of Science databases, adhering to the "Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews" guidelines. Articles presenting 3D facial photographs in the diagnostic phase were considered. RESULTS: A total of 79 articles were identified, of which 29 were selected for analysis. CONCLUSION: The predominant use of automated systems such as 3dMD and VECTRA M3 was noted. User positioning had the highest agreement among authors. Noteworthy aspects include the importance of proper lighting, facial expression, and dental positioning, with discrepancies and inconsistencies observed among authors. Finally, the authors propose a 3D image acquisition protocol based on these findings.


Subjects
Face; Imaging, Three-Dimensional; Photography; Humans; Imaging, Three-Dimensional/methods; Face/diagnostic imaging; Face/anatomy & histology; Photography/methods; Orthognathic Surgical Procedures/methods; Reproducibility of Results
5.
Sensors (Basel) ; 24(9)2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38732872

ABSTRACT

This paper presents an experimental evaluation of a wearable light-emitting diode (LED) transmitter in an optical camera communications (OCC) system. The evaluation is conducted under conditions of controlled user movement during indoor physical exercise, encompassing both mild and intense exercise scenarios. We introduce an image processing algorithm designed to identify a template signal transmitted by the LED and detected within the image. To enhance this process, we utilize the dynamics of controlled exercise-induced motion to limit the tracking process to a smaller region within the image. We demonstrate the feasibility of detecting the transmitting source within the frames and of limiting the tracking process to a smaller region, achieving a search-area reduction of 87.3% for mild exercise and 79.0% for intense exercise.


Subjects
Algorithms; Exercise; Wearable Electronic Devices; Humans; Exercise/physiology; Image Processing, Computer-Assisted/methods; Photography/instrumentation; Photography/methods; Delivery of Health Care
6.
J Vis ; 24(5): 1, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38691088

ABSTRACT

Still life paintings comprise a wealth of data on visual perception. Prior work has shown that the color statistics of objects show a marked bias for warm colors. Here, we ask about the relative chromatic contrast of these object-associated colors compared with background colors in still life paintings. We reasoned that, owing to the memory color effect, whereby the color of familiar objects is perceived as more saturated, warm colors will be relatively more saturated than cool colors in still life paintings as compared with photographs. We analyzed color in 108 slides of still life paintings of fruit from the teaching slide collection of the Fogg University Art Museum and 41 color-calibrated photographs of fruit from the McGill data set. The results show that the relatively higher chromatic contrast of warm colors was greater for paintings compared with photographs, consistent with the hypothesis.


Subjects
Color Perception; Fruit; Paintings; Photography; Humans; Color Perception/physiology; Photography/methods; Color; Contrast Sensitivity/physiology
7.
Ecology ; 105(6): e4299, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38650359

ABSTRACT

Information on tropical Asian vertebrates has traditionally been sparse, particularly when it comes to cryptic species inhabiting the dense forests of the region. Vertebrate populations are declining globally due to land-use change and hunting, the latter frequently referred to as "defaunation." This is especially true in tropical Asia, where there is extensive land-use change and high human density. Robust monitoring requires that large volumes of vertebrate population data be made available for use by the scientific and applied communities. Camera traps have emerged as an effective, non-invasive, widespread, and common approach to surveying vertebrates in their natural habitats. However, camera-derived datasets remain scattered across a wide array of sources, including published scientific literature, gray literature, and unpublished works, making it challenging for researchers to harness the full potential of cameras for ecology, conservation, and management. In response, we collated and standardized observations from 239 camera trap studies conducted in tropical Asia. There were 278,260 independent records of 371 distinct species, comprising 232 mammals, 132 birds, and seven reptiles. The total trapping effort accumulated in this data paper consisted of 876,606 trap nights, distributed among Indonesia, Singapore, Malaysia, Bhutan, Thailand, Myanmar, Cambodia, Laos, Vietnam, Nepal, and far eastern India. The relatively standardized deployment methods in the region provide a consistent, reliable, and rich count data set relative to other large-scale presence-only data sets, such as the Global Biodiversity Information Facility (GBIF) or citizen science repositories (e.g., iNaturalist), and it is thus most similar to eBird. To facilitate the use of these data, we also provide mammalian species trait information and 13 environmental covariates calculated at three spatial scales around the camera survey centroids (within 10-, 20-, and 30-km buffers).
We will update the dataset to include broader coverage of temperate Asia and add newer surveys and covariates as they become available. This dataset unlocks immense opportunities for single-species ecological or conservation studies as well as applied ecology, community ecology, and macroecology investigations. The data are fully available to the public for utilization and research. Please cite this data paper when utilizing the data.


Subjects
Forests; Tropical Climate; Vertebrates; Animals; Vertebrates/physiology; Photography/methods; Asia; Biodiversity
8.
Meat Sci ; 213: 109500, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38582006

ABSTRACT

The objective of this study was to develop calibration models against rib eye traits and independently validate the precision, accuracy, and repeatability of the Frontmatec Q-FOM™ Beef grading camera in Australian carcasses. This study compiled 12 research datasets acquired from commercial processing facilities, comprising a diverse range of carcass phenotypes graded by industry-identified expert Meat Standards Australia (MSA) graders and sampled for chemical intramuscular fat (IMF%). Calibration performance was maintained when the device was independently validated. For continuous traits, the Q-FOM™ demonstrated precise (root mean squared error of prediction, RMSEP) and accurate (coefficient of determination, R2) prediction of eye muscle area (EMA) (R2 = 0.89, RMSEP = 4.3 cm2, slope = 0.96, bias = 0.7), MSA marbling (R2 = 0.95, RMSEP = 47.2, slope = 0.98, bias = -12.8), and chemical IMF% (R2 = 0.94, RMSEP = 1.56%, slope = 0.96, bias = 0.64). For categorical traits, the Q-FOM™ predicted 61%, 64.3%, and 60.8% of AUS-MEAT marbling, meat colour, and fat colour scores, respectively, as equivalent to expert grader scores, and 95% within ±1 class. The Q-FOM™ also demonstrated very high repeatability and reproducibility across all traits.
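For readers unfamiliar with the validation metrics quoted above, RMSEP and R² reduce to simple sums over actual and predicted values; a generic sketch with made-up IMF% values (not the study's data):

```python
import math

def rmsep(actual, predicted):
    """Root mean squared error of prediction."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def r_squared(actual, predicted):
    """Coefficient of determination: 1 - SS_residual / SS_total."""
    mean_a = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

# toy chemical IMF% values, for illustration only
actual    = [2.0, 5.5, 8.0, 12.5, 20.0]
predicted = [2.4, 5.0, 9.1, 11.8, 19.2]
print(round(rmsep(actual, predicted), 3))      # 0.742
print(round(r_squared(actual, predicted), 3))  # 0.986
```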


Subjects
Adipose Tissue; Color; Muscle, Skeletal; Photography; Red Meat; Animals; Australia; Cattle; Red Meat/analysis; Red Meat/standards; Photography/methods; Calibration; Phenotype; Reproducibility of Results; Ribs
9.
Eye (Lond) ; 38(9): 1694-1701, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38467864

ABSTRACT

BACKGROUND: Diabetic Retinopathy (DR) is a leading cause of blindness worldwide, affecting people with diabetes. The timely diagnosis and treatment of DR are essential in preventing vision loss. Non-mydriatic fundus cameras and artificial intelligence (AI) software have been shown to improve DR screening efficiency. However, few studies have compared the diagnostic performance of different non-mydriatic cameras and AI software. METHODS: This clinical study was conducted at the endocrinology clinic of Akdeniz University with 900 volunteer patients that were previously diagnosed with diabetes but not with diabetic retinopathy. Fundus images of each patient were taken using three non-mydriatic fundus cameras, and EyeCheckup AI software was used to diagnose more than mild diabetic retinopathy, vision-threatening diabetic retinopathy, and clinically significant diabetic macular oedema using images from all three cameras. Then patients underwent dilation and 4 wide-field fundus photography. Three retina specialists graded the 4 wide-field fundus images according to the diabetic retinopathy treatment preferred practice patterns of the American Academy of Ophthalmology. The study was pre-registered on clinicaltrials.gov with the ClinicalTrials.gov Identifier: NCT04805541. RESULTS: The Canon CR2 AF camera had a sensitivity and specificity of 95.65% / 95.92% for diagnosing more than mild DR, the Topcon TRC-NW400 had 95.19% / 96.46%, and the Optomed Aurora had 90.48% / 97.21%. For vision-threatening diabetic retinopathy, the Canon CR2 AF had a sensitivity and specificity of 96.00% / 96.34%, the Topcon TRC-NW400 had 98.52% / 95.93%, and the Optomed Aurora had 95.12% / 98.82%. For clinically significant diabetic macular oedema, the Canon CR2 AF had a sensitivity and specificity of 95.83% / 96.83%, the Topcon TRC-NW400 had 98.50% / 96.52%, and the Optomed Aurora had 94.93% / 98.95%.
CONCLUSION: The study demonstrates the potential of using non-mydriatic fundus cameras combined with artificial intelligence software to detect diabetic retinopathy. Several cameras were tested and, notably, each camera exhibited varying but adequate levels of sensitivity and specificity. The Canon CR2 AF emerged with the highest accuracy in identifying both more than mild diabetic retinopathy and vision-threatening cases, while the Topcon TRC-NW400 excelled in detecting clinically significant diabetic macular oedema. The findings from this study emphasize the importance of considering a non-mydriatic camera and artificial intelligence software for diabetic retinopathy screening. However, further research is imperative to explore additional factors influencing the efficiency of AI-based screening with non-mydriatic cameras, such as the costs involved and the effects of screening on an ethnically diverse population.
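The sensitivity and specificity figures above reduce to simple confusion-matrix ratios. The sketch below uses hypothetical counts chosen only to reproduce the Canon CR2 AF figures for more than mild DR; the study's actual counts are not given in the abstract:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts (not from the paper) that yield 95.65% / 95.92%
sens, spec = sens_spec(tp=88, fn=4, tn=752, fp=32)
print(round(sens * 100, 2), round(spec * 100, 2))  # 95.65 95.92
```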


Subjects
Artificial Intelligence; Diabetic Retinopathy; Photography; Sensitivity and Specificity; Adult; Aged; Female; Humans; Male; Middle Aged; Diabetic Retinopathy/diagnosis; Photography/methods; Reproducibility of Results
10.
Photodiagnosis Photodyn Ther ; 46: 104043, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38460655

ABSTRACT

PURPOSE: To evaluate the use of the Pentacam to analyse the presence or absence of fluid pockets under the anterior capsule and their significance in terms of surgical management and prevention of complications. SETTINGS: Abant Izzet Baysal University Hospital, Bolu, Turkey. DESIGN: Randomized, masked, prospective design. METHODS: 60 patients with mature cataracts underwent standard phacoemulsification (Phaco) and intraocular lens (IOL) implantation. Patients were divided into 3 groups. Group 1 underwent Phaco+IOL implantation without imaging by Pentacam. Group 2 had fluid detected on Pentacam imaging before the operation and underwent Phaco+IOL implantation with the Brazilian method. Group 3 had no fluid detected on Pentacam imaging before the operation and underwent a standard Phaco+IOL implantation operation. RESULTS: The complication rates of the 3 groups were 15% in group 1, 5% in group 2, and 5% in group 3. Pairwise comparisons of Groups 1-2, 1-3, and 2-3 yielded p < 0.01, p < 0.01, and p > 0.05, respectively. The nuclear density of Group 2 and Group 3 was 30.2% and 29.6%, respectively (P = 0.614). Regarding lens thickness, patients with fluid (+) had a thickness of 5.35 mm, while patients with fluid (-) had a thickness of 3.96 mm (p < 0.05). CONCLUSION: Patients who are not imaged with the Pentacam before surgery experience more complications than the other groups because the presence of fluid is unknown. Central lens thickness was higher in patients with fluid, and there was no significant difference in nuclear density between the groups with and without fluid. The Pentacam can show the presence of subcapsular fluid, and we recommend that imaging tools be more widely used in cataract surgery. We believe this will enable surgeons to plan surgery more accurately and reduce the risk of complications.


Subjects
Cataract; Phacoemulsification; Humans; Female; Male; Prospective Studies; Aged; Middle Aged; Phacoemulsification/methods; Lens Implantation, Intraocular; Preoperative Care/methods; Photography/methods
12.
Ann Plast Surg ; 92(4): 367-372, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38527337

ABSTRACT

STATEMENT OF THE PROBLEM: Standardized medical photography of the face is a vital part of patient documentation, clinical evaluation, and scholarly dissemination. Because digital photography is a mainstay in clinical care, there is a critical need for an easy-to-use mobile device application that can assist users in taking a standardized clinical photograph. ImageAssist was developed to answer this need. The mobile application is integrated into the electronic medical record (EMR); it implements and automates American Society of Plastic Surgery/Plastic Surgery Research Foundation photographic guidelines with background deletion. INITIAL PRODUCT DEVELOPMENT: A team consisting of a craniofacial plastic surgeon and the Health Information Technology product group developed and implemented the pilot application of ImageAssist. The application launches directly from the patient's chart in the mobile version of the EMR, Epic Haiku (Verona, Wisconsin). Standard views of the face (90-degree, oblique left and right, front, and basal views) were built into digital templates and are user selected. Red digital frames overlay the patient's face on the screen and turn green once standardized alignment is achieved, prompting the user to capture. The background is then digitally subtracted to a standard blue, and the photograph is not stored on the user's phone. EARLY USER EXPERIENCE: ImageAssist's initial beta user group was limited to 13 providers across dermatology, ENT, and plastic surgery. A mix of physicians, advanced practice providers, and nurses was included to pilot the application in the outpatient clinic setting using ImageAssist on their smartphones. After using the app, an internal survey was used to gather feedback on the user experience. In the first 2 years of use, 31 users have taken more than 3400 photographs in more than 800 clinical encounters. Since the initial release, automated background deletion has also been functional for any anatomic area.
CONCLUSIONS: ImageAssist is a novel smartphone application that standardizes clinical photography and is integrated into the EMR, which could save both time and expense for clinicians seeking to take consistent clinical images. Future steps include continued refinement of the current image capture functionality and development of a stand-alone mobile device application.


Subjects
Mobile Applications; Plastic Surgery Procedures; Surgery, Plastic; Humans; United States; Smartphone; Photography/methods
13.
Biomed Eng Online ; 23(1): 32, 2024 Mar 12.
Article in English | MEDLINE | ID: mdl-38475784

ABSTRACT

PURPOSE: This study aimed to investigate the imaging repeatability of self-service fundus photography compared to traditional fundus photography performed by experienced operators. DESIGN: Prospective cross-sectional study. METHODS: At a community-based eye disease screening site, we recruited 65 eyes (65 participants) from the resident population of Shanghai, China. All participants were free of cataract or any other condition that could compromise the quality of fundus imaging. Participants were assigned to either the fully self-service or the traditional fundus photography group. Image quantitative analysis software was used to extract clinically relevant indicators from the fundus images. Finally, a statistical analysis was performed to characterize the imaging repeatability of fully self-service fundus photography. RESULTS: There was no statistical difference in the absolute differences, or the extents of variation, of the indicators between the two groups. The extents of variation of all measurement indicators, with the exception of the optic cup area, were below 10% in both groups. The Bland-Altman plots and multivariate analysis results were consistent with the results mentioned above. CONCLUSIONS: The imaging repeatability of fully self-service fundus photography is comparable to that of traditional fundus photography performed by professionals, demonstrating promise for large-scale eye disease screening programs.
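The abstract does not give the exact formula behind its "extent of variation"; one plausible definition, the absolute difference between two repeated measurements relative to their mean, can be sketched as follows (the measurement values are hypothetical):

```python
def extent_of_variation(m1: float, m2: float) -> float:
    """Percent variation between two repeated measurements, relative to
    their mean (one plausible definition; the paper's exact formula is
    not stated in the abstract)."""
    mean = (m1 + m2) / 2
    return abs(m1 - m2) / mean * 100

# hypothetical optic disc area measurements (mm^2) from two photographs
print(round(extent_of_variation(2.10, 2.25), 1))  # 6.9, i.e. below the 10% threshold
```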


Subjects
Community Health Services; Glaucoma; Humans; Cross-Sectional Studies; Prospective Studies; China; Photography/methods; Fundus Oculi
14.
Behav Res Methods ; 56(4): 3861-3872, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38332413

ABSTRACT

Over the last 40 years, object recognition studies have moved from using simple line drawings, to more detailed illustrations, to more ecologically valid photographic representations. Researchers now have access to various stimulus sets; however, existing sets lack the ability to independently manipulate item format, as the concepts depicted are unique to the set they derive from. To enable such comparisons, Rossion and Pourtois (2004) revisited Snodgrass and Vanderwart's (1980) line drawings and digitally re-drew the objects, adding texture and shading. In the current study, we took this further and created a set of stimuli that showcase the same objects in photographic form. We selected six photographs of each object (three color/three grayscale) and collected normative data and RTs. Naming accuracy and agreement were high for all photographs and appeared to steadily increase with format distinctiveness. In contrast to previous data patterns for drawings, naming agreement (H values) did not differ between grayscale and color photographs, nor did familiarity ratings. However, grayscale photographs received significantly lower mental imagery agreement and visual complexity scores than color photographs. This suggests that, in comparison to drawings, the ecological nature of photographs may facilitate deeper critical evaluation of whether they offer a good match to a mental representation. Color may therefore play a more vital role in photographs than in drawings, aiding participants in judging the match with their mental representation. This new photographic stimulus set and the corresponding normative data provide valuable materials for a wide range of experimental studies of object recognition.
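The naming-agreement H value mentioned here is the Shannon-entropy measure introduced with the Snodgrass and Vanderwart (1980) norms; a small sketch with hypothetical response counts:

```python
import math

def h_statistic(name_counts):
    """Shannon-based naming-agreement H value:
    H = sum(p_i * log2(1 / p_i)) over the names given to one picture.
    H = 0 means perfect agreement; higher H means more diverse naming."""
    total = sum(name_counts.values())
    return sum((c / total) * math.log2(total / c) for c in name_counts.values())

# hypothetical responses for one photograph
print(round(h_statistic({"apple": 30}), 2))              # 0.0 (full agreement)
print(round(h_statistic({"apple": 24, "fruit": 6}), 2))  # 0.72
```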


Subjects
Pattern Recognition, Visual; Photic Stimulation; Photography; Recognition, Psychology; Humans; Male; Female; Photography/methods; Recognition, Psychology/physiology; Pattern Recognition, Visual/physiology; Adult; Reaction Time/physiology; Young Adult; Adolescent
15.
Burns ; 50(4): 966-979, 2024 May.
Article in English | MEDLINE | ID: mdl-38331663

ABSTRACT

AIM: This study was conducted to determine the segmentation, classification, object detection, and accuracy of skin burn images using artificial intelligence and a mobile application. With this study, individuals were able to determine the degree of a burn and see how to intervene through the mobile application. METHODS: This research was conducted between 26.10.2021 and 01.09.2023. The dataset was handled in two stages. In the first stage, an open-access dataset was taken from https://universe.roboflow.com/ and the burn images dataset was created. In the second stage, to determine the accuracy of the developed system and artificial intelligence model, patients admitted to the hospital were identified with our own Burn Wound Detection Android application. RESULTS: In our study, the YOLOv7 architecture was used for segmentation, classification, and object detection. The dataset comprised 21,018 images, of which 80% were used as training data and 20% as test data. The YOLOv7 model achieved a success rate of 75.12% on the test data. The Burn Wound Detection Android mobile application that we developed in the study was used to accurately detect images of individuals. CONCLUSION: In this study, skin burn images were segmented, classified, and object-detected, and a mobile application was developed using artificial intelligence. First aid is crucial in burn cases, and it is an important development for public health that people living in the periphery can quickly determine the degree of a burn through the mobile application and provide first aid according to its instructions.


Subjects
Artificial Intelligence; Burns; Mobile Applications; Burns/classification; Burns/diagnostic imaging; Burns/pathology; Humans; Photography/methods
16.
Retina ; 44(6): 1092-1099, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38320305

ABSTRACT

PURPOSE: To observe the diagnostic value of multispectral fundus imaging (MSI) in hypertensive retinopathy (HR). METHODS: A total of 100 patients with HR were enrolled in this cross-sectional study, and all participants received fundus photography and MSI. Participants with severe HR received fundus fluorescein angiography (FFA). The diagnostic consistency between fundus photography and MSI in the diagnosis of HR was calculated. The sensitivity of MSI in the diagnosis of severe HR was calculated by comparison with FFA. The choroidal vascular index was calculated in patients with HR using MSI at 780 nm. RESULTS: MSI and fundus photography were highly concordant in the diagnosis of HR, with a kappa value of 0.883. MSI had a sensitivity of 96% in diagnosing retinal hemorrhage, 89.47% in diagnosing retinal exudation, 100% in diagnosing vascular compression indentation, and 96.15% in diagnosing retinal arteriosclerosis. The choroidal vascular index of the patients in the HR group was significantly lower than that of the control group, whereas there was no significant difference between the affected and fellow eyes. CONCLUSION: As a noninvasive modality, MSI may be a new tool for the diagnosis and assessment of HR.
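The reported diagnostic consistency (kappa = 0.883) is Cohen's kappa; a sketch with a hypothetical 2×2 agreement table (the study's actual counts are not given in the abstract):

```python
def cohens_kappa(table):
    """Cohen's kappa for a square agreement table:
    table[i][j] = count of cases rated i by method A and j by method B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    p_expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# hypothetical MSI vs. fundus photography HR diagnoses
table = [[40, 3],   # MSI+/photo+, MSI+/photo-
         [2, 55]]   # MSI-/photo+, MSI-/photo-
print(round(cohens_kappa(table), 3))  # 0.898
```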


Subjects
Fluorescein Angiography; Fundus Oculi; Hypertensive Retinopathy; Humans; Cross-Sectional Studies; Female; Male; Middle Aged; Fluorescein Angiography/methods; Hypertensive Retinopathy/diagnosis; Aged; Adult; Photography/methods; Retinal Vessels/diagnostic imaging; Retinal Vessels/pathology
17.
Ophthalmic Surg Lasers Imaging Retina ; 55(5): 263-269, 2024 May.
Article in English | MEDLINE | ID: mdl-38408222

ABSTRACT

BACKGROUND AND OBJECTIVE: Color fundus photography is an important imaging modality that is currently limited by a narrow dynamic range. We describe a post-image processing technique to generate high dynamic range (HDR) retinal images with enhanced detail. PATIENTS AND METHODS: This was a retrospective, observational case series evaluating fundus photographs of patients with macular pathology. Photographs were acquired with three or more exposure values using a commercially available camera (Topcon 50-DX). Images were aligned and imported into HDR processing software (Photomatix Pro). Fundus detail was compared between HDR and raw photographs. RESULTS: Sixteen eyes from 10 patients (5 male, 5 female; mean age 59.4 years) were analyzed. Clinician graders preferred the HDR image 91.7% of the time (44/48 image comparisons), with good grader agreement (81.3%, 13/16 eyes). CONCLUSIONS: HDR fundus imaging is feasible using images from existing fundus cameras and may be useful for enhanced visualization of retinal detail in a variety of pathologic states. [Ophthalmic Surg Lasers Imaging Retina 2024;55:263-269.].
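Photomatix's algorithm is proprietary, but the general idea of merging bracketed exposures can be illustrated with a Mertens-style well-exposedness weighting for a single pixel (a sketch under assumed parameters, not the paper's method; `sigma` is an assumed constant):

```python
import math

def fuse_exposures(exposures, sigma=0.2):
    """Merge several exposures of one pixel by weighting each value by its
    "well-exposedness" (a Gaussian centered on mid-gray), then normalizing.
    Each value is a normalized intensity in [0, 1]."""
    weights = [math.exp(-((v - 0.5) ** 2) / (2 * sigma ** 2)) for v in exposures]
    total = sum(weights)
    return sum(w * v for w, v in zip(weights, exposures)) / total

# one pixel captured at three exposure values: under, mid, over
print(round(fuse_exposures([0.05, 0.45, 0.95]), 3))  # ~0.457: dominated by the mid exposure
```

The under- and over-exposed samples get near-zero weight, so detail is drawn mostly from the well-exposed frame, which is the intuition behind the enhanced detail reported above.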


Subjects
Fundus Oculi; Photography; Humans; Female; Retrospective Studies; Male; Middle Aged; Photography/methods; Aged; Retinal Diseases/diagnosis; Image Processing, Computer-Assisted/methods; Adult; Retina/diagnostic imaging; Retina/pathology; Diagnostic Techniques, Ophthalmological
18.
Int Ophthalmol ; 44(1): 41, 2024 Feb 09.
Artigo em Inglês | MEDLINE | ID: mdl-38334896

ABSTRACT

Diabetic retinopathy (DR) is a leading global cause of vision loss, accounting for an estimated 4.8% of global blindness cases according to the World Health Organization (WHO). Fundus photography is a crucial diagnostic tool in ophthalmology for capturing retinal images. However, resource and infrastructure constraints limit access to traditional tabletop fundus cameras in developing countries, and these conventional cameras are expensive, bulky, and not easily transportable. In contrast, the newer generation of handheld and smartphone-based fundus cameras offers portability, user-friendliness, and affordability. Despite their potential, comprehensive reviews examining the clinical utility of these handheld (e.g., Zeiss Visuscout 100, Volk Pictor Plus, Volk Pictor Prestige, Remidio NMFOP, FC161) and smartphone-based (e.g., D-EYE, iExaminer, Peek Retina, Volk iNview, Volk Vistaview, oDocs visoScope, oDocs Nun, oDocs Nun IR) fundus cameras are lacking. This review evaluates the efficiency, feasibility, cost-effectiveness, and remote capabilities of the available handheld and smartphone-based cameras across a variety of clinical settings and use scenarios, emphasizing their advantages over traditional tabletop fundus cameras and their potential to enhance the accessibility of ophthalmic services.


Subjects
Diabetes Mellitus, Diabetic Retinopathy, Eye Diseases, Humans, Diabetic Retinopathy/diagnosis, Smartphone, Fundus Oculi, Retina, Eye Diseases/diagnosis, Photography/methods, Blindness
19.
J Invest Dermatol ; 144(6): 1200-1207, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38231164

ABSTRACT

Artificial intelligence (AI) algorithms for skin lesion classification have reported accuracy on par with, and in some cases exceeding, that of expert dermatologists in experimental settings. However, most algorithms do not reflect the real-world clinical approach, in which skin phenotype and clinical background information are considered. We review the current state of AI for skin lesion classification and present the opportunities and challenges of applying it to total body photography (TBP). AI in TBP analysis offers opportunities for intrapatient assessment of skin phenotype and for holistic risk assessment incorporating patient-level metadata, although challenges remain in protecting patient privacy during algorithm development and in improving explainable AI methods.


Subjects
Algorithms, Artificial Intelligence, Photography, Humans, Photography/methods, Skin/diagnostic imaging, Skin/pathology, Skin Diseases/diagnosis, Skin Diseases/diagnostic imaging, Whole Body Imaging/methods, Image Processing, Computer-Assisted/methods
20.
Eye (Lond) ; 38(8): 1471-1476, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38297154

ABSTRACT

AIM: To assess the performance of smartphone-based wide-field retinal imaging (WFI) versus ultra-wide-field imaging (UWFI) for assessment of sight-threatening diabetic retinopathy (STDR) and for locating predominantly peripheral lesions (PPL) of DR. METHODS: Individuals with type 2 diabetes with varying grades of DR underwent nonmydriatic UWFI with the Daytona Plus camera followed by mydriatic WFI with the smartphone-based Vistaro camera at a tertiary care diabetes centre in South India in 2021-22. Grading of DR and identification of PPL (DR lesions beyond the posterior pole) in the retinal images from both cameras were performed by senior retina specialists. STDR was defined by the presence of severe non-proliferative DR, proliferative DR, or diabetic macular oedema (DME). The sensitivity and specificity of smartphone-based WFI for detection of PPL and STDR were assessed, and agreement between graders was compared for both cameras. RESULTS: Retinal imaging was carried out in 318 eyes of 160 individuals (mean age 54.7 ± 9 years; mean duration of diabetes 16.6 ± 7.9 years). The sensitivity and specificity for detection of STDR were 92.7% (95% CI 80.1-98.5) and 96.6% (95% CI 91.5-99.1) with the Vistaro camera, and 95.1% (95% CI 83.5-99.4) and 95.7% (95% CI 90.3-98.6) with the Daytona Plus, respectively. PPL were detected in 89 (27.9%) eyes by WFI with the Vistaro camera and in 160 (50.3%) eyes by UWFI; however, this did not translate into any significant difference in the grading of STDR between the two imaging systems. With both devices, PPL were most common in the superotemporal quadrant (34%), and the prevalence of PPL increased with increasing severity of DR (p < 0.001). Kappa agreement between the two graders across grades of DR severity was 0.802 (p < 0.001) for the Vistaro and 0.753 (p < 0.001) for the Daytona Plus camera.
CONCLUSION: Mydriatic smartphone-based wide-field imaging has high sensitivity and specificity for detecting STDR and can be used to screen for peripheral retinal lesions beyond the posterior pole in individuals with diabetes.


Subjects
Diabetic Retinopathy, Photography, Smartphone, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/diagnostic imaging, Middle Aged, Female, Male, Photography/instrumentation, Photography/methods, Diabetes Mellitus, Type 2/complications, Aged, Severity of Illness Index, Adult, India, Sensitivity and Specificity, Fundus Oculi, Fluorescein Angiography/methods, Reproducibility of Results
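The screening metrics reported in entry 20 (sensitivity and specificity with 95% CIs, and inter-grader kappa) all derive from 2×2 tables. The sketch below shows how each is computed; the counts are hypothetical, since the abstract reports only the derived percentages, and the confidence interval here is a Wilson score interval, whereas the paper may have used a different (e.g., exact Clopper-Pearson) method.

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 screening table."""
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(k, n, z=1.96):
    """Approximate 95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

def cohens_kappa(confusion):
    """Cohen's kappa from a square inter-grader confusion matrix."""
    n = sum(sum(row) for row in confusion)
    p_observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    p_expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts (NOT the paper's raw data, which the abstract
# does not report): 38 true positives, 3 false negatives,
# 170 true negatives, 6 false positives.
se, sp = sens_spec(tp=38, fn=3, tn=170, fp=6)
ci_lo, ci_hi = wilson_ci(38, 41)          # CI on sensitivity
kappa = cohens_kappa([[38, 3], [6, 170]])  # two graders, same labels
```

Note how wide the sensitivity CI is at n = 41 positives; the broad intervals quoted in the abstract (e.g., 80.1-98.5 around 92.7%) reflect the same small denominator of STDR-positive eyes.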