Results 1 - 6 of 6
1.
J Invest Dermatol ; 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38909840

ABSTRACT

Precise evaluation of repigmentation in vitiligo patients is crucial for monitoring treatment efficacy and enhancing patient satisfaction. This study aimed to develop a computer-aided system for assessing repigmentation rates in vitiligo patients, providing valuable insights for clinical practice. A retrospective study was conducted at the Dermatology Department of Shenzhen People's Hospital between June 2019 and November 2022. Pre- and post-treatment images of vitiligo lesions under Wood's lamp were collected from 833 participants, stratified by sex, age, and pigmentation pattern. Our results demonstrated that the 'marginal' pigmentation pattern exhibited a higher repigmentation rate (72%) than the 'central non-follicular' pattern (45%). Males had a slightly higher average repigmentation rate (37%) than females (33%). Among age groups, individuals aged 0-20 years showed the highest average repigmentation rate (41%), while the oldest group (61-80 years) displayed the lowest (25%). Analysis of multiple visits identified the 'marginal' pattern as the most prevalent (60%), with a mean repigmentation rate of 40%. This study introduced a computational system for evaluating vitiligo repigmentation rates, improving our understanding of patient responses and ultimately contributing to better clinical care.
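The abstract does not describe the system's internals, but the core quantity it reports is straightforward. As a minimal illustrative sketch (not the authors' implementation), assuming pre- and post-treatment lesion masks have already been segmented from registered Wood's lamp images, a repigmentation rate can be computed as the fractional reduction in depigmented area:

```python
import numpy as np

def repigmentation_rate(pre_mask: np.ndarray, post_mask: np.ndarray) -> float:
    """Fractional reduction in depigmented area between two visits.

    pre_mask, post_mask: boolean arrays marking depigmented (lesional)
    pixels in spatially registered pre- and post-treatment images.
    Returns a value in [0, 1]; 0.37 would mean 37% repigmentation.
    """
    pre_area = pre_mask.sum()
    if pre_area == 0:
        raise ValueError("pre-treatment mask contains no lesion pixels")
    return float(np.clip((pre_area - post_mask.sum()) / pre_area, 0.0, 1.0))

# Toy example: a lesion of 40 pixels shrinks to 20 pixels -> rate 0.5.
pre = np.zeros((10, 10), dtype=bool)
pre[:4, :] = True
post = pre.copy()
post[:2, :] = False
print(repigmentation_rate(pre, post))  # 0.5
```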

2.
Sci Rep ; 14(1): 11588, 2024 May 21.
Article in English | MEDLINE | ID: mdl-38773207

ABSTRACT

Current assessment methods for diabetic foot ulcers (DFUs) lack objectivity and consistency, exposing diabetes patients to significant risk, including potential amputation, and highlighting the urgent need for improved diagnostic tools and care standards in the field. To address this, this study developed and evaluated the Smart Diabetic Foot Ulcer Scoring System, ScoreDFUNet, which incorporates artificial intelligence (AI) and image analysis techniques to enhance the precision and consistency of DFU assessment. ScoreDFUNet precisely categorizes DFU images into "ulcer," "infection," "normal," and "gangrene" areas, achieving an accuracy of 95.34% on the test set with high precision, recall, and F1 scores. Comparative evaluations with dermatologists confirm that the algorithm consistently surpasses junior and mid-level dermatologists and closely matches senior dermatologists, and rigorous analyses, including Bland-Altman plots and significance testing, validate its robustness and reliability. This AI system presents a valuable tool for healthcare professionals and can significantly improve care standards in diabetic foot ulcer assessment.


Subject(s)
Algorithms , Artificial Intelligence , Diabetic Foot , Diabetic Foot/diagnosis , Diabetic Foot/pathology , Humans , Reproducibility of Results , Image Processing, Computer-Assisted/methods , Severity of Illness Index
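The abstract does not specify ScoreDFUNet's architecture, so the following is only a hedged sketch of how a four-class DFU area classifier could be set up; the ResNet-18 backbone, learning rate, and patch-level framing are assumptions, not the paper's design:

```python
import torch
import torch.nn as nn
from torchvision import models

CLASSES = ["ulcer", "infection", "normal", "gangrene"]

# Pretrained backbone with its classification head replaced for 4 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed lr

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised update on a batch of (N, 3, H, W) image patches."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```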
3.
Comput Biol Med ; 172: 108246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38471350

ABSTRACT

Diabetic retinopathy (DR) is a severe ocular complication of diabetes that can lead to vision damage and even blindness. Traditional deep convolutional neural networks (CNNs) used for DR grading face two primary challenges: (1) insensitivity to minority classes due to imbalanced data distribution, and (2) neglect of the relationship between the left and right eyes, since training typically uses the fundus image of only one eye. To tackle these challenges, we propose the DRGCNN (DR Grading CNN) model. To address the imbalanced data distribution, our model adopts a more balanced strategy by allocating an equal number of channels to the feature maps representing each DR category. Furthermore, we introduce a CAM-EfficientNetV2-M encoder dedicated to encoding input retinal fundus images into feature vectors. Our encoder has 52.88 M parameters, fewer than RegNet_y_16gf (80.57 M) and EfficientNetB7 (63.79 M), yet achieves a higher kappa value. Additionally, to exploit the binocular relationship, we feed fundus images from both of a patient's eyes into the network for feature fusion during training. We achieved a kappa value of 86.62% on the EyePACS dataset and 86.16% on the Messidor-2 dataset. Experimental results on these representative diabetic retinopathy datasets demonstrate the strong performance of our DRGCNN model, establishing it as a highly competitive intelligent classification model for DR. The code is available at https://github.com/Fat-Hai/DRGCNN.


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Humans , Diabetic Retinopathy/diagnostic imaging , Neural Networks, Computer , Fundus Oculi
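The authors' code is available at the linked repository; the sketch below illustrates only the binocular idea from the abstract: a shared encoder processes both eyes and the features are fused before grading. It uses timm's plain EfficientNetV2-M as a stand-in for the paper's CAM-EfficientNetV2-M (the CAM module and the balanced channel-allocation scheme are omitted), and assumes the standard five-level DR grading:

```python
import torch
import torch.nn as nn
import timm

class BinocularGrader(nn.Module):
    """Shared encoder for both eyes; concatenated features feed the head."""

    def __init__(self, num_grades: int = 5):
        super().__init__()
        # num_classes=0 makes timm return pooled feature vectors, not logits.
        self.encoder = timm.create_model(
            "tf_efficientnetv2_m", pretrained=True, num_classes=0)
        self.head = nn.Linear(2 * self.encoder.num_features, num_grades)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.encoder(left), self.encoder(right)], dim=1)
        return self.head(fused)

model = BinocularGrader()
logits = model(torch.randn(2, 3, 384, 384), torch.randn(2, 3, 384, 384))
print(logits.shape)  # torch.Size([2, 5])
```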
4.
Med Image Anal ; 92: 103061, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38086235

ABSTRACT

The Segment Anything Model (SAM) is the first foundation model for general image segmentation and has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. To fully validate SAM's performance on medical data, we collected and curated 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on this COSMOS 1050K dataset. Our main findings are: (1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or even failed entirely in other situations. (2) SAM with the large ViT-H showed better overall performance than with the small ViT-B. (3) SAM performed better with manual hints, especially boxes, than in Everything mode. (4) SAM could assist human annotation with high labeling quality and less time. (5) SAM was sensitive to randomness in the center-point and tight-box prompts, which may cause a serious performance drop. (6) SAM performed better than interactive methods given one or a few points, but was outpaced as the number of points increased. (7) SAM's performance correlated with different factors, including boundary complexity, intensity differences, etc. (8) Finetuning SAM on specific medical tasks improved its average DICE performance by 4.39% for ViT-B and 6.68% for ViT-H. Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope this comprehensive report helps researchers explore the potential of SAM in MIS and guides its appropriate use and development.


Subject(s)
Diagnostic Imaging , Image Processing, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods
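Finding (3) above, that box prompts beat Everything mode, is easy to try with the official segment_anything package. A minimal sketch; the checkpoint path, test image, and box coordinates are placeholders:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# ViT-H checkpoint from the official SAM release (path is a placeholder).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = np.zeros((512, 512, 3), dtype=np.uint8)  # stand-in for an RGB slice
predictor.set_image(image)

# A tight box around the target structure, in (x0, y0, x1, y1) pixels.
box = np.array([120, 80, 300, 260])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # (1, 512, 512) binary mask and its quality score
```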
5.
J Med Imaging (Bellingham) ; 6(3): 034004, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31572745

ABSTRACT

A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in such images, and these changes can be used to diagnose many serious diseases, such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can facilitate more efficient diagnosis of these diseases. We propose an improved U-net architecture for retinal vessel segmentation. A multiscale input layer and dense blocks are introduced into the conventional U-net, so that the network can exploit richer spatial context information. The proposed method is evaluated on the public DRIVE dataset, achieving a sensitivity of 0.8199 and an accuracy of 0.9561. Segmentation is improved especially for thin vessels, which are difficult to detect because of their low contrast with background pixels.
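The paper's exact layer configuration is not given in the abstract; the sketch below shows the two named ingredients in generic form, a dense block (each layer sees all previous feature maps) and a multiscale input (a downsampled copy of the image injected at a deeper encoder level). Channel counts and depths are illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    """Each conv layer receives the concatenation of all earlier outputs."""

    def __init__(self, in_ch: int, growth: int = 12, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

x = torch.randn(1, 3, 256, 256)
print(DenseBlock(in_ch=3)(x).shape)  # [1, 51, 256, 256]: 3 + 4 * 12 channels

# Multiscale input: a half-resolution copy of the image that would be
# concatenated with the pooled features at the second encoder level.
x_half = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                       align_corners=False)
```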

6.
Comput Med Imaging Graph ; 55: 78-86, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27665058

ABSTRACT

Automatic exudate segmentation in colour retinal fundus images is an important task in computer-aided diagnosis and screening systems for diabetic retinopathy. In this paper, we present a location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images, comprising three stages: anatomic structure removal, exudate location, and exudate segmentation. In the anatomic structure removal stage, we propose a matched-filter-based main vessel segmentation method and a saliency-based optic disk segmentation method; the main vessels and optic disk are then removed to eliminate the adverse effects they bring to the second stage. In the location stage, we learn a random forest classifier to classify patches into two classes, exudate patches and exudate-free patches, using histograms of completed local binary patterns (CLBP) to describe the texture structure of the patches. Finally, the local variance, a size prior on the exudate regions, and a local contrast prior are used to segment the exudate regions out of the patches classified as exudate patches in the location stage. We evaluate our method at both the exudate level and the image level. For exudate-level evaluation, we test our method on the e-ophtha EX dataset, which provides pixel-level annotations from specialists. The experimental results show that our method achieves 76% sensitivity and 75% positive predictive value (PPV), both of which significantly outperform the state-of-the-art methods. For image-level evaluation, we test our method on DiaRetDB1 and achieve competitive performance compared to the state-of-the-art methods.


Subject(s)
Color , Diabetic Retinopathy/diagnostic imaging , Exudates and Transudates/diagnostic imaging , Fundus Oculi , Image Interpretation, Computer-Assisted/methods , Optic Disk/diagnostic imaging , Retinal Vessels/diagnostic imaging , Algorithms , Humans , Pattern Recognition, Automated/methods
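The location stage is the most self-contained part of the pipeline. As a hedged sketch: scikit-image does not implement the completed LBP (CLBP) used in the paper, so its standard uniform LBP stands in here, with a random forest trained on per-patch code histograms; the patch size and forest size are assumptions:

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

P, R = 8, 1  # LBP neighbours and radius

def lbp_histogram(patch: np.ndarray) -> np.ndarray:
    """Normalised histogram of uniform LBP codes for one grayscale patch."""
    codes = local_binary_pattern(patch, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

# Dummy data: in practice, patches come from fundus images with the vessels
# and optic disk already removed; labels are 1 = exudate, 0 = exudate-free.
rng = np.random.default_rng(0)
patches = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

X = np.stack([lbp_histogram(p) for p in patches])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))
```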