Results 1 - 6 of 6
1.
Med Image Anal; 97: 103261, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39018722

ABSTRACT

State-of-the-art deep learning models often fail to generalize in the presence of distribution shifts between training (source) data and test (target) data. Domain adaptation methods are designed to address this issue using labeled samples (supervised domain adaptation) or unlabeled samples (unsupervised domain adaptation). Active learning selects informative samples so that maximum performance is obtained from minimal annotation. Selecting informative target-domain samples can improve model performance and robustness and reduce data demands. This paper proposes a novel pipeline called ALFREDO (Active Learning with FeatuRe disEntanglement and DOmain adaptation) that performs active learning under domain shift. We propose a novel feature disentanglement approach to decompose image features into domain-specific and task-specific components. Domain-specific components are features that carry source-specific information, e.g., about scanners, vendors, or hospitals. Task-specific components are discriminative features for classification, segmentation, or other tasks. Thereafter, we define multiple novel cost functions that identify informative samples under domain shift. We test our proposed method for medical image classification using one histopathology dataset and two chest X-ray datasets. Experiments show that our method achieves state-of-the-art results compared with other domain adaptation methods as well as state-of-the-art active domain adaptation methods.
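
To make the feature disentanglement idea concrete, the following is a minimal, hypothetical PyTorch sketch of splitting a shared feature into task-specific and domain-specific components with separate classification heads. It is not the ALFREDO pipeline; the module names, dimensions, loss weighting, and toy data are all assumptions.

# Hypothetical sketch only; not the authors' implementation.
import torch
import torch.nn as nn

class DisentangledClassifier(nn.Module):
    def __init__(self, feat_dim=512, n_classes=2, n_domains=2):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        # Two projection heads split the shared feature into components.
        self.task_head = nn.Linear(feat_dim, feat_dim // 2)    # task-specific part
        self.domain_head = nn.Linear(feat_dim, feat_dim // 2)  # domain-specific part
        self.task_clf = nn.Linear(feat_dim // 2, n_classes)    # e.g. disease labels
        self.domain_clf = nn.Linear(feat_dim // 2, n_domains)  # e.g. scanner / site

    def forward(self, x):
        z = self.backbone(x)
        return self.task_clf(self.task_head(z)), self.domain_clf(self.domain_head(z))

model = DisentangledClassifier()
images = torch.randn(4, 1, 64, 64)                      # toy grayscale batch
task_logits, domain_logits = model(images)
loss = nn.functional.cross_entropy(task_logits, torch.randint(0, 2, (4,))) \
     + nn.functional.cross_entropy(domain_logits, torch.randint(0, 2, (4,)))
loss.backward()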

2.
Sci Total Environ; 924: 171556, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38458450

ABSTRACT

The significant increase in hazardous waste generation in Australia has prompted discussion of incorporating artificial intelligence into the hazardous waste management system. Recent studies have explored potential applications of artificial intelligence in various waste management processes. However, no study has examined the use of text mining in the hazardous waste management sector for the purpose of informing policymakers. This study developed a living-review framework that applies supervised text classification and text mining techniques to extract knowledge from domain literature published between 2022 and 2023. The framework employed statistical classification models refined through iterative training; the best model, XGBoost, achieved an F1 score of 0.87. Trained on a small set of 126 manually labelled global articles, XGBoost automatically predicted the labels of 678 Australian articles with high confidence. Keyword extraction and unsupervised topic modelling with Latent Dirichlet Allocation (LDA) were then performed. Results indicated two main research themes in the Australian literature: (1) the key waste streams and (2) the resource recovery and recycling of waste. The framework can benefit policymakers, researchers, and hazardous waste management organisations by serving as a real-time guide to the current key waste streams and research themes in the literature, allowing robust knowledge to be applied to waste management and highlighting where gaps in research remain.
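
As an illustration of the two-stage pipeline described above (supervised text classification followed by unsupervised topic modelling), here is a minimal sketch using scikit-learn and XGBoost on invented toy documents. It assumes the xgboost package is installed and does not reproduce the study's data, labels, or hyperparameters.

# Minimal sketch with invented documents and labels; not the study's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from xgboost import XGBClassifier

docs = ["lithium battery recycling in australia",
        "asbestos waste stream regulation",
        "solvent recovery from industrial waste",
        "e-waste resource recovery policy"]
labels = [1, 0, 1, 1]   # toy relevance labels standing in for manual annotation

# Supervised stage: TF-IDF features feeding a gradient-boosted classifier.
tfidf = TfidfVectorizer()
clf = XGBClassifier(n_estimators=50)
clf.fit(tfidf.fit_transform(docs), labels)

# Unsupervised stage: LDA topics over the document collection.
counts = CountVectorizer(stop_words="english")
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts.fit_transform(docs))
terms = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    print(f"topic {k}:", [terms[i] for i in comp.argsort()[-3:]])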

3.
IEEE J Biomed Health Inform; 24(12): 3421-3430, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32750930

ABSTRACT

The direct analysis of 3D Optical Coherence Tomography (OCT) volumes enables deep learning (DL) models to learn spatial structural information and discover new biomarkers that are relevant to glaucoma. Downsampling 3D input volumes is the state-of-the-art solution to accommodate the limited number of training volumes and the available computing resources. However, this limits the network's ability to learn from small retinal structures in OCT volumes. In this paper, our goal is to improve performance by guiding the DL model during training to learn from finer ocular structures in 3D OCT volumes. We therefore propose an end-to-end attention-guided 3D DL model for glaucoma detection and for estimating visual function from retinal structures. The model consists of three pathways with the same network architecture but different inputs. One input is the original 3D OCT cube; the other two are computed during training, guided by the 3D gradient class activation heatmaps. Each pathway outputs a class label, and the whole model is trained concurrently to minimize the sum of the losses from the three pathways. The final output is obtained by fusing the predictions of the three pathways. To explore the robustness and generalizability of the proposed model, we apply it both to a classification task for glaucoma detection and to a regression task that estimates the visual field index (VFI), a value between 0 and 100. A 5-fold cross-validation with a total of 3782 and 10,370 OCT scans is used to train and evaluate the classification and regression models, respectively. The glaucoma detection model achieved an area under the curve (AUC) of 93.8%, compared with 86.8% for a baseline model without the attention-guided component. The model also outperformed six feature-based machine learning approaches that use scanner-computed measurements for training. Further, we assessed the contribution of the different retinal layers that are relevant to glaucoma. The VFI estimation model achieved a Pearson correlation of 0.75 and a median absolute error of 3.6% on a test set of 3100 cubes.
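
The multi-pathway training scheme can be sketched as follows; this is a hypothetical, heavily simplified PyTorch example showing three identical pathways, a summed loss, and fused predictions. The attention-guided construction of the second and third inputs from 3D gradient class activation heatmaps is not implemented here and is stubbed with simple crops; the network, data, and dimensions are toy assumptions.

# Hypothetical sketch; heatmap-guided inputs are replaced by plain crops.
import torch
import torch.nn as nn

def make_pathway(n_classes=2):
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
        nn.Flatten(), nn.Linear(8, n_classes))

pathways = nn.ModuleList([make_pathway() for _ in range(3)])  # same architecture
opt = torch.optim.Adam(pathways.parameters(), lr=1e-3)

cube = torch.randn(2, 1, 32, 64, 64)          # toy downsampled OCT volumes
labels = torch.randint(0, 2, (2,))
# Stand-ins for the two attention-guided inputs computed during training.
inputs = [cube, cube[:, :, 8:24], cube[:, :, :, 16:48, 16:48]]

logits = [p(x) for p, x in zip(pathways, inputs)]
loss = sum(nn.functional.cross_entropy(l, labels) for l in logits)  # summed losses
loss.backward()
opt.step()
fused = torch.stack(logits).mean(0).argmax(1)  # fuse the three predictions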


Subject(s)
Glaucoma/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Neural Networks, Computer; Tomography, Optical Coherence/methods; Databases, Factual; Deep Learning; Humans
4.
IEEE J Biomed Health Inform; 24(2): 577-585, 2020 Feb.
Article in English | MEDLINE | ID: mdl-30990451

ABSTRACT

Psoriasis is a chronic skin condition. Its clinical assessment involves four measures: erythema, scales, induration, and area. In this paper, we introduce a scale severity scoring framework for two-dimensional psoriasis skin images. Specifically, we leverage the bag-of-visual-words (BoVWs) model for lesion feature extraction, using superpixels as key points. The BoVWs model builds a vocabulary with a specific number of words (i.e., the codebook size) by applying a clustering algorithm to local features extracted from a constructed set of key points. This is followed by three-class machine learning classifiers, namely support vector machine (SVM) and random forest, for scale scoring. In addition, we examine eight local color and texture descriptors: color histogram, local binary patterns, edge histogram descriptor, color layout descriptor, scalable color descriptor, color and edge directivity descriptor (CEDD), fuzzy color and texture histogram, and brightness and texture directionality histogram. Further, the selection of codebook and superpixel sizes is studied intensively. A psoriasis image set consisting of 96 images is used in this study. Experiments show that color descriptors yield the highest performance for scale severity scoring, followed by the combined color and texture descriptors, with texture-based descriptors last. Moreover, the K-means algorithm outperforms the Gaussian mixture model for vocabulary building in terms of accuracy and computation time. Finally, the proposed method yields a scale severity scoring accuracy of 80.81% using the following setup: a superpixel of size [Formula: see text], a combined color and texture descriptor (i.e., CEDD), a codebook of size 128 constructed using K-means, and SVM for scale scoring.
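
The bag-of-visual-words pipeline with superpixels as key points can be sketched roughly as below, using scikit-image and scikit-learn on toy data. The descriptor (mean colour per superpixel), the codebook size, and the images are assumptions and do not reflect the paper's descriptors or settings.

# Rough BoVW sketch on toy data; not the paper's descriptors or parameters.
import numpy as np
from skimage.segmentation import slic
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def superpixel_descriptors(img, n_segments=100):
    """Mean RGB of each superpixel, used as a simple local descriptor."""
    seg = slic(img, n_segments=n_segments, start_label=0)
    return np.array([img[seg == s].mean(axis=0) for s in np.unique(seg)])

rng = np.random.default_rng(0)
images = [rng.random((64, 64, 3)) for _ in range(6)]   # toy stand-in images
labels = [0, 1, 2, 0, 1, 2]                            # toy 3-class scale scores

descs = [superpixel_descriptors(im) for im in images]
codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(np.vstack(descs))

def encode(d):
    # Histogram of codeword assignments = the BoVW vector for one image.
    words = codebook.predict(d)
    return np.bincount(words, minlength=16) / len(words)

X = np.array([encode(d) for d in descs])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:2]))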


Subject(s)
Psoriasis/physiopathology; Severity of Illness Index; Skin/pathology; Algorithms; Cluster Analysis; Humans; Machine Learning
5.
Comput Med Imaging Graph; 66: 44-55, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29524784

ABSTRACT

Psoriasis is a chronic skin disease that can be life-threatening. Accurate severity scoring helps dermatologists decide on the treatment. In this paper, we present a semi-supervised computer-aided system for automatic erythema severity scoring in psoriasis images. First, the unsupervised stage introduces a novel image representation method: we construct a dictionary, which is then used for the sparse representation of local features, and an aggregation step over the local features produces the final image representation vector. Second, in the supervised phase, various multi-class machine learning (ML) classifiers are trained for erythema severity scoring. Finally, we compare the proposed system with two popular unsupervised feature extraction methods, namely the bag-of-visual-words (BoVWs) model and a pretrained AlexNet model. Root mean square error (RMSE) and F1 score are used as performance measures for the learned dictionaries and the trained ML models, respectively. A psoriasis image set consisting of 676 images is used in this study. Experimental results demonstrate that the proposed procedure provides accurate and consistent erythema scoring. They also reveal that dictionaries with a large number of atoms and small patch sizes yield the most representative erythema severity features. Further, random forest (RF) outperforms the other classifiers with an F1 score of 0.71, followed by support vector machine (SVM) and boosting with scores of 0.66 and 0.64, respectively. Furthermore, the comparative studies confirm the effectiveness of the proposed approach, with improvements of 9% and 12% over BoVWs- and AlexNet-based features, respectively.
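
A rough sketch of the unsupervised-then-supervised design (dictionary learning, sparse coding of local patches, aggregation, then a random forest) is given below using scikit-learn on invented data. The patch size, dictionary size, and aggregation rule are assumptions, not the paper's choices.

# Rough sketch on invented images; not the paper's dictionary or settings.
import numpy as np
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
images = rng.random((8, 32, 32))          # toy grayscale stand-ins
labels = rng.integers(0, 3, size=8)       # toy severity scores

def patches(img, size=(6, 6), n=50):
    return extract_patches_2d(img, size, max_patches=n, random_state=0).reshape(n, -1)

all_patches = np.vstack([patches(im) for im in images])
dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0, random_state=0)
dico.fit(all_patches)                      # unsupervised dictionary learning

# Aggregation: max-pool the absolute sparse codes over each image's patches.
X = np.array([np.abs(dico.transform(patches(im))).max(axis=0) for im in images])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:2]))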


Subject(s)
Erythema/diagnostic imaging; Erythema/physiopathology; Image Interpretation, Computer-Assisted/methods; Psoriasis/diagnostic imaging; Algorithms; Humans; Machine Learning; Severity of Illness Index; Support Vector Machine
6.
J Med Imaging (Bellingham); 4(4): 044004, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29152533

ABSTRACT

Psoriasis is a chronic skin disease that is assessed visually by dermatologists. The Psoriasis Area and Severity Index (PASI) is the current gold standard used to measure lesion severity by evaluating four parameters, namely, area, erythema, scaliness, and thickness. In this context, psoriasis skin lesion segmentation is required as the basis for PASI scoring. We outline an automatic lesion segmentation method that leverages multiscale superpixels and [Formula: see text]-means clustering. Specifically, we apply a superpixel segmentation strategy in the CIE-[Formula: see text] color space at different scales, and we suppress superpixels that belong to non-skin areas. Once similar regions at the different scales are obtained, the [Formula: see text]-means algorithm is used to cluster each superpixel scale separately into normal and lesion skin areas. Features from both the [Formula: see text] and [Formula: see text] color bands are used in the clustering process. Majority voting is then performed to fuse the segmentation results from the different scales into the final output. The proposed method is extensively evaluated on a set of 457 psoriasis digital images acquired from the Royal Melbourne Hospital, Melbourne, Australia. Experimental results show that the method is effective and efficient, even when applied to images containing hairy skin and lesions of diverse size, shape, and severity. They also show that CIE-[Formula: see text] outperforms other color spaces for psoriasis lesion analysis and segmentation. In addition, we use three evaluation metrics, namely, the Dice coefficient, the Jaccard index, and pixel accuracy, for which the proposed method achieves 0.783, 0.698, and 86.99%, respectively. Finally, compared with existing methods that employ either skin decomposition with a support vector machine classifier or the Euclidean distance in the hue-chroma plane, our multiscale superpixel-based method achieves markedly better performance, with at least a 20% accuracy improvement.
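
A simplified sketch of multiscale superpixel clustering with majority voting is shown below, using scikit-image and scikit-learn on a toy image. It is not the published method: k-means is assumed for the clustering step, the Lab conversion and the lesion-cluster heuristic are assumptions, and the non-skin suppression step is omitted.

# Simplified multiscale superpixel + clustering + voting sketch on toy data.
import numpy as np
from skimage.color import rgb2lab
from skimage.segmentation import slic
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))            # toy stand-in for a skin photograph
lab = rgb2lab(rgb)

masks = []
for n_segments in (50, 100, 200):        # three superpixel scales
    seg = slic(rgb, n_segments=n_segments, start_label=0)
    means = np.array([lab[seg == s].mean(axis=0) for s in np.unique(seg)])
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(means)
    # Heuristic: call the cluster with the redder mean (higher a*) the lesion cluster.
    lesion = int(np.argmax(km.cluster_centers_[:, 1]))
    masks.append((km.labels_[seg] == lesion).astype(int))

# Majority voting across the three scales gives the final lesion mask.
final_mask = (np.sum(masks, axis=0) >= 2).astype(int)
print(final_mask.shape, final_mask.mean())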
