Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-37379193

ABSTRACT

Deep metric learning (DML) has been widely applied in various tasks (e.g., medical diagnosis and face recognition) due to its effective extraction of discriminant features by reducing data overlap. In practice, however, these tasks often also suffer from two class-imbalance learning (CIL) problems: data scarcity and data density, both of which cause misclassification. Existing DML losses rarely consider these two issues, while CIL losses cannot reduce data overlap or data density. It is a great challenge for a single loss function to mitigate the impact of all three issues simultaneously, which is the objective of our proposed intraclass diversity and interclass distillation (IDID) loss with adaptive weight in this article. IDID-loss generates diverse features within classes regardless of class sample size (to alleviate data scarcity and data density) and simultaneously preserves the semantic correlations between classes using learnable similarity when pushing different classes apart (to reduce overlap). In summary, our IDID-loss provides three advantages: 1) it can simultaneously mitigate all three issues, which DML and CIL losses cannot; 2) it generates more diverse and discriminant feature representations with higher generalization ability compared with DML losses; and 3) it provides a larger improvement on scarce and dense classes with a smaller sacrifice in accuracy on easy classes compared with CIL losses. Experimental results on seven public real-world datasets show that our IDID-loss achieves the best performance in terms of G-mean, F1-score, and accuracy when compared with both state-of-the-art (SOTA) DML and CIL losses. In addition, it avoids time-consuming fine-tuning of the loss function's hyperparameters.
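The abstract does not give the IDID-loss formula, and the paper's learnable similarity and adaptive weighting are not reproduced here. As a rough illustration of the general idea only — an inter-class separation term combined with an intra-class diversity reward — here is a toy numpy sketch; the function name, margin, and weight are illustrative assumptions, not the authors' method.

```python
import numpy as np

def toy_diversity_margin_loss(feats, labels, margin=1.0, div_weight=0.1):
    """Toy sketch (NOT the paper's IDID-loss): push class centroids at
    least `margin` apart while rewarding spread within each class."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])

    # Inter-class term: hinge penalty on pairwise centroid distances.
    inter = 0.0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centroids[i] - centroids[j])
            inter += max(0.0, margin - d)

    # Intra-class diversity reward: negative mean within-class variance,
    # so more diverse features lower the loss.
    intra = -np.mean([feats[labels == c].var() for c in classes])

    return inter + div_weight * intra
```

On this sketch, well-separated classes incur no hinge penalty, so the loss is lower than for overlapping classes, mimicking the overlap-reduction behavior the abstract describes.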

2.
IEEE J Biomed Health Inform; 27(8): 3970-3981, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37220034

ABSTRACT

Pixel-level annotations are extremely expensive for medical image segmentation tasks, as both expertise and time are needed to generate accurate annotations. Semi-supervised learning (SSL) for medical image segmentation has recently attracted growing attention because it can relieve clinicians of exhausting manual annotation by leveraging unlabeled data. However, most existing SSL methods do not take pixel-level information (e.g., pixel-level features) of labeled data into account, i.e., the labeled data are underutilized. Hence, in this work, an innovative Coarse-Refined Network with pixel-wise Intra-patch ranked loss and patch-wise Inter-patch ranked loss (CRII-Net) is proposed. It provides three advantages: i) it can produce stable targets for unlabeled data, as a simple yet effective coarse-refined consistency constraint is designed; ii) it is very effective in the extreme case where very scarce labeled data are available, as pixel-level and patch-level features are extracted by our CRII-Net; and iii) it can output fine-grained segmentation results for hard regions (e.g., blurred object boundaries and low-contrast lesions), as the proposed Intra-Patch Ranked Loss (Intra-PRL) focuses on object boundaries and the Inter-Patch Ranked Loss (Inter-PRL) mitigates the adverse impact of low-contrast lesions. Experimental results on two common SSL tasks for medical image segmentation demonstrate the superiority of our CRII-Net. Specifically, with only 4% labeled data, our CRII-Net improves the Dice similarity coefficient (DSC) score by at least 7.49% compared with five classical or state-of-the-art (SOTA) SSL methods. For hard samples/regions, our CRII-Net also significantly outperforms the compared methods in both quantitative and qualitative results.


Subject(s)
Image Processing, Computer-Assisted , Supervised Machine Learning , Humans
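The coarse-refined consistency constraint and the Dice metric mentioned above can be sketched generically. This is an illustrative numpy toy, not CRII-Net itself: the ranked losses and network architecture are not reproduced, and the function names and the weight `w` are assumptions.

```python
import numpy as np

def dice_score(pred, gt, eps=1e-6):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = np.sum(pred * gt)
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def semi_supervised_loss(pred_labeled, gt, coarse_u, refined_u, w=0.5):
    """Generic semi-supervised objective (illustrative, not CRII-Net):
    a supervised (1 - Dice) term on labeled data, plus an MSE
    consistency term that treats the coarse branch's prediction as a
    stable target for the refined branch on unlabeled data (a real
    framework would stop gradients through the target)."""
    sup = 1.0 - dice_score(pred_labeled, gt)
    cons = float(np.mean((refined_u - coarse_u) ** 2))
    return sup + w * cons
```

When the refined prediction agrees with the coarse target and the labeled prediction matches the ground truth, both terms vanish, which is the stable-target behavior advantage i) describes.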