1.
Nutrients ; 16(8)2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38674823

ABSTRACT

Changes in an individual's digestive system, hormones, senses of smell and taste, and energy requirements that accompany aging can impair appetite, yet older adults may not notice their risk of nutrient deficiency. When the dietary intake of older adults is assessed, they have more difficulty with short-term and open-ended recall and experience greater fatigue and frustration than younger individuals when completing a lengthy questionnaire. There is therefore a need for a brief dietary assessment tool to examine the nutritional needs of older adults. In this study, we aimed to assess the diet of Hong Kong older adults using a short food frequency questionnaire (FFQ) and to examine its reproducibility and relative validity as a dietary assessment tool. Dietary data of 198 older adults were collected via FFQs and three-day dietary records. Correlation analyses, cross-tabulation, one-sample t-tests, and linear regression analyses were used to evaluate the relative validity of the short FFQ. In general, the short FFQ was accurate in assessing the intake of phosphorus, water, grains, and wine, as shown by a significant correlation (>0.7) between values reported in the FFQs and the dietary records; for these four variables, there was good agreement (more than 50% of observations fell in the same quartile) and no significant differences were detected by the one-sample t-tests and linear regression analyses. Additionally, the intakes of protein, carbohydrate, total fat, magnesium, and eggs reported in the FFQs and the dietary records showed good agreement.
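For illustration only, a minimal Python sketch of this kind of relative-validity analysis (Pearson correlation, quartile cross-classification, and a one-sample t-test on the paired differences); the intake values below are simulated, not the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ffq = rng.normal(1000, 200, size=198)          # hypothetical FFQ phosphorus intake (mg/day)
record = ffq + rng.normal(0, 80, size=198)     # hypothetical 3-day dietary record intake

# Pearson correlation between the two instruments
r, p = stats.pearsonr(ffq, record)

# Quartile cross-classification: share of subjects placed in the same quartile by both methods
q_ffq = np.digitize(ffq, np.quantile(ffq, [0.25, 0.5, 0.75]))
q_rec = np.digitize(record, np.quantile(record, [0.25, 0.5, 0.75]))
same_quartile = np.mean(q_ffq == q_rec)

# One-sample t-test on the paired differences (mean difference vs. zero)
t, p_diff = stats.ttest_1samp(ffq - record, 0.0)
print(f"r={r:.2f}, same-quartile agreement={same_quartile:.0%}, diff p={p_diff:.3f}")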


Subject(s)
Diet , Humans , Hong Kong , Reproducibility of Results , Female , Aged , Male , Surveys and Questionnaires/standards , Diet/statistics & numerical data , Diet Records , Diet Surveys/standards , Nutrition Assessment , Aged, 80 and over , Asian People , Middle Aged , Feeding Behavior , East Asian People
2.
Artif Intell Med ; 146: 102694, 2023 12.
Article in English | MEDLINE | ID: mdl-38042612

ABSTRACT

Unsupervised domain adaptation (UDA) plays a crucial role in transferring knowledge gained from a labeled source domain so that it can be applied effectively in an unlabeled and diverse target domain. While UDA commonly involves training on data from both domains, access to labeled source-domain data is frequently constrained by concerns about patient data privacy or intellectual property. Source-free UDA (SFUDA) is a promising way to sidestep this difficulty. However, without source-domain supervision, SFUDA methods can easily fall into a "winner takes all" dilemma, in which the majority category dominates the deep segmentor and the minority categories are largely ignored. In addition, over-confident pseudo-label noise in self-training-based UDA is a long-standing problem. To sidestep these difficulties, we propose a novel class-balanced complementary self-training (CBCOST) framework for SFUDA segmentation. Specifically, we jointly optimize pseudo-label-based self-training with two mutually reinforcing components. First, class-wise balanced pseudo-label training (CBT) explicitly exploits fine-grained class-wise confidence to select class-balanced pseudo-labeled pixels with adaptive within-class thresholds. Second, to alleviate pseudo-label noise, we propose complementary self-training (COST), which excludes the classes a pixel does not belong to via a heuristic complementary label selection scheme. We evaluated our CBCOST framework on 2D and 3D cross-modality cardiac anatomical segmentation tasks and on brain tumor segmentation tasks. Our experimental results showed that CBCOST outperforms existing SFUDA methods and yields performance comparable to UDA methods that have access to the source data.
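To make the class-wise balanced selection concrete, here is a minimal sketch in which pseudo-labeled pixels are kept per class using an adaptive within-class confidence threshold; the per-class quantile rule is an illustrative assumption, not necessarily the paper's exact criterion.

import numpy as np

def select_pseudo_labels(probs, quantile=0.5):
    """probs: (N, C) softmax outputs for N pixels and C classes.
    Returns pseudo-labels with -1 for pixels that are not selected."""
    hard = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    pseudo = np.full(len(probs), -1, dtype=int)
    for c in range(probs.shape[1]):
        mask = hard == c
        if not mask.any():
            continue
        # adaptive within-class threshold: keep the most confident fraction of each class
        thr = np.quantile(conf[mask], quantile)
        pseudo[mask & (conf >= thr)] = c
    return pseudo

probs = np.random.dirichlet(np.ones(4), size=1000)   # toy predictions, 4 classes
print(np.bincount(select_pseudo_labels(probs) + 1))  # index 0 counts unselected pixels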

3.
Cornea ; 2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38016014

ABSTRACT

PURPOSE: ChatGPT is a commonly used source of information for patients and clinicians. However, it can be prone to error and requires validation. We sought to assess the quality and accuracy of information regarding corneal transplantation and Fuchs dystrophy from 2 iterations of ChatGPT, and whether its answers improve over time. METHODS: A total of 10 corneal specialists collaborated to assess responses of the algorithm to 10 commonly asked questions related to endothelial keratoplasty and Fuchs dystrophy. These questions were posed to both ChatGPT-3.5 and its newer generation, GPT-4. Assessments tested the quality, safety, accuracy, and bias of the information. Chi-squared tests, Fisher exact tests, and regression analyses were conducted. RESULTS: We analyzed 180 valid responses. On a 1 (A+) to 5 (F) scale, the average score given by all specialists across questions was 2.5 for ChatGPT-3.5 and 1.4 for GPT-4, a significant improvement (P < 0.0001). Most responses by both ChatGPT-3.5 (61%) and GPT-4 (89%) used correct facts, a proportion that improved significantly across iterations (P < 0.00001). Approximately a third (35%) of responses from ChatGPT-3.5 were considered against the scientific consensus, a notable rate of error that decreased to only 5% of answers from GPT-4 (P < 0.00001). CONCLUSIONS: The quality of responses in ChatGPT improved significantly between versions 3.5 and 4, and the odds of providing information against the scientific consensus decreased. However, the technology is still capable of producing inaccurate statements. Corneal specialists are uniquely positioned to assist users in discerning the veracity and applicability of such information.
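As a rough illustration of the reported proportion comparison (61% vs. 89% of responses using correct facts), the sketch below runs chi-squared and Fisher exact tests on hypothetical counts; the per-version split of the 180 responses is assumed for illustration.

from scipy.stats import chi2_contingency, fisher_exact

# hypothetical counts: rows = model versions, cols = [correct facts, not correct]
table = [[55, 35],   # assumed ChatGPT-3.5 (~61% of 90 responses)
         [80, 10]]   # assumed GPT-4 (~89% of 90 responses)

chi2, p, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi2={chi2:.1f}, p={p:.2g}; Fisher OR={odds_ratio:.2f}, p={p_fisher:.2g}")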

4.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 2835-2848, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35635808

ABSTRACT

Label noise is ubiquitous in many real-world scenarios; it often misleads the training algorithm and degrades classification performance. Therefore, many approaches have been proposed to correct the loss function given corrupted labels in order to combat such label noise. Among them, a line of works achieves this goal by unbiasedly estimating the data centroid, which plays an important role in constructing an unbiased risk estimator for minimization. However, they usually handle the noisy labels in different classes all at once, so the local information inherent in each class is ignored, which often leads to unsatisfactory performance. To address this defect, this paper presents a novel robust learning algorithm dubbed "Class-Wise Denoising" (CWD), which tackles the noisy labels in a class-wise way to ease the entire noise correction task. Specifically, two virtual auxiliary sets are constructed by presuming that the positive and the negative labels in the training set, respectively, are clean, so the original false-negative labels and false-positive labels are tackled separately. As a result, an improved centroid estimator can be designed, which helps to yield a more accurate risk estimator. Theoretically, we prove that: 1) the variance in centroid estimation can often be reduced by our CWD compared with existing methods that use an unbiased centroid estimator; and 2) the performance of CWD trained on the noisy set converges to that of the optimal classifier trained on the clean set at a convergence rate [Formula: see text], where n is the number of training examples. These sound theoretical properties critically enable our CWD to produce improved classification performance under label noise, which is also demonstrated by comparisons with ten representative state-of-the-art methods on a variety of benchmark datasets.

5.
Article in English | MEDLINE | ID: mdl-35895653

ABSTRACT

Unsupervised domain adaptation (UDA) has been successfully applied to transfer knowledge from a labeled source domain to target domains without labels. The recently introduced transferable prototypical networks (TPN) further address class-wise conditional alignment. In TPN, while the closeness of class centers between the source and target domains is explicitly enforced in a latent space, the underlying fine-grained subtype structure and the cross-domain within-class compactness have not been fully investigated. To counter this, we propose a new approach that adaptively performs fine-grained subtype-aware alignment to improve performance in the target domain without subtype labels in either domain. The insight behind our approach is that the unlabeled subtypes within a class exhibit local proximity within each subtype while showing disparate characteristics across subtypes, because of different conditional and label shifts. Specifically, we propose to simultaneously enforce subtype-wise compactness and class-wise separation by utilizing intermediate pseudo-labels. In addition, we systematically investigate various scenarios with and without prior knowledge of the number of subtypes and propose to exploit the underlying subtype structure. Furthermore, a dynamic queue framework is developed to steadily evolve the subtype cluster centroids using an alternating processing scheme. Experimental results on multiview congenital heart disease data and on VisDA and DomainNet show the effectiveness and validity of our subtype-aware UDA compared with state-of-the-art UDA methods.
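A small sketch of how subtype cluster centroids could be evolved steadily with a momentum-style update, assuming features and class pseudo-labels are given; nearest-centroid assignment within each class is an illustrative simplification, not the paper's full dynamic queue scheme.

import numpy as np

def update_subtype_centroids(feats, class_pseudo, centroids, momentum=0.9):
    """feats: (N, D); class_pseudo: (N,) pseudo class ids;
    centroids: dict class_id -> (K, D) subtype centroids."""
    for c, cents in centroids.items():
        f = feats[class_pseudo == c]
        if len(f) == 0:
            continue
        # assign each feature to its nearest subtype centroid within the class
        d = ((f[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for k in range(len(cents)):
            fk = f[assign == k]
            if len(fk):
                cents[k] = momentum * cents[k] + (1 - momentum) * fk.mean(axis=0)
    return centroids

feats = np.random.randn(500, 16)
labels = np.random.randint(0, 3, 500)
cents = {c: np.random.randn(2, 16) for c in range(3)}   # 2 assumed subtypes per class
cents = update_subtype_centroids(feats, labels, cents)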

6.
Article in English | MEDLINE | ID: mdl-35704544

ABSTRACT

There has been growing interest in unsupervised domain adaptation (UDA) to alleviate the data scalability issue, but existing works usually focus on classifying independent, discrete labels. However, in many tasks (e.g., medical diagnosis), the labels are discrete and ordered. UDA for ordinal classification requires inducing a non-trivial ordinal distribution prior on the latent space. To this end, a partially ordered set (poset) is defined to constrain the latent vector. Instead of the typical i.i.d. Gaussian latent prior, in this work a recursively conditional Gaussian (RCG) set is proposed for ordered constraint modeling, which admits a tractable joint distribution prior. Furthermore, we are able to control the density of content vectors that violate the poset constraint by a simple "three-sigma rule". We explicitly disentangle the cross-domain images into a shared ordinal content space induced by the ordinal prior and two separate source/target ordinal-unrelated spaces, and self-training is performed exclusively on the shared space for ordinal-aware domain alignment. Extensive experiments on UDA medical diagnosis and facial age estimation demonstrate its effectiveness.
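One simple way to picture an ordered (poset-respecting) latent prior is to draw each ordinal anchor conditioned on the previous one with a positive shift; the sketch below is illustrative and is not the paper's exact recursively conditional Gaussian parameterization.

import numpy as np

def sample_ordered_anchors(num_levels, dim, shift=1.0, sigma=0.1, seed=0):
    """Sample latent anchors for ordinal levels by conditioning each level
    on the previous one, so the order is respected by construction."""
    rng = np.random.default_rng(seed)
    anchors = [rng.normal(0.0, sigma, size=dim)]
    for _ in range(num_levels - 1):
        # z_k | z_{k-1} ~ N(z_{k-1} + shift, sigma^2 I)
        anchors.append(anchors[-1] + shift + rng.normal(0.0, sigma, size=dim))
    return np.stack(anchors)

z = sample_ordered_anchors(num_levels=5, dim=3)
print(np.all(np.diff(z, axis=0) > 0))   # anchors increase along every dimension (w.h.p.)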

7.
IEEE Trans Neural Netw Learn Syst ; 33(1): 75-88, 2022 Jan.
Article in English | MEDLINE | ID: mdl-33048763

ABSTRACT

Graph-based methods have achieved impressive performance on semisupervised classification (SSC). Traditional graph-based methods have two main drawbacks. First, the graph is predefined before training the classifier, which does not leverage the interaction between classifier training and similarity matrix learning. Second, when handling high-dimensional data with noisy or redundant features, the graph constructed in the original input space is often unsuitable and may lead to poor performance. In this article, we propose an SSC method with novel graph construction (SSC-NGC), in which the similarity matrix is optimized in both the label space and an additional subspace to obtain a better and more robust result than in the original data space. Furthermore, to obtain a high-quality subspace, we learn the projection matrix of the additional subspace by preserving the local and global structure of the data. Finally, we integrate the classifier training, the graph construction, and the subspace learning into a unified framework. Within this framework, the classifier parameters, the similarity matrix, and the projection matrix of the subspace are adaptively learned in an iterative scheme to obtain an optimal joint result. We conduct extensive comparative experiments against state-of-the-art methods on multiple real-world datasets. Experimental results demonstrate the superiority of the proposed method over other state-of-the-art algorithms.
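For contrast with the joint optimization described above, a conventional fixed-graph baseline (label propagation over a predefined kNN graph) looks like the sketch below; it illustrates the first drawback, where the graph is built once before any classifier training. This is a generic baseline, not SSC-NGC.

import numpy as np

def label_propagation(X, y, n_neighbors=5, alpha=0.9, iters=50):
    """y: (N,) with -1 for unlabeled samples; returns propagated hard labels."""
    n, classes = len(X), np.unique(y[y >= 0])
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    idx = np.argsort(d, axis=1)[:, 1:n_neighbors + 1]    # kNN graph, fixed before "training"
    for i in range(n):
        W[i, idx[i]] = np.exp(-d[i, idx[i]])
    W = (W + W.T) / 2
    P = W / W.sum(axis=1, keepdims=True).clip(1e-12)
    Y = np.zeros((n, len(classes)))
    Y[y >= 0, :] = np.eye(len(classes))[y[y >= 0]]
    F = Y.copy()
    for _ in range(iters):
        F = alpha * P @ F + (1 - alpha) * Y               # propagate, then re-inject known labels
    return classes[F.argmax(axis=1)]

X = np.random.randn(60, 2)
y = np.full(60, -1); y[:3] = 0; y[3:6] = 1
print(label_propagation(X, y)[:10])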

8.
IEEE Trans Pattern Anal Mach Intell ; 44(6): 2841-2855, 2022 06.
Article in English | MEDLINE | ID: mdl-33320809

ABSTRACT

In this paper, we propose a general framework termed centroid estimation with guaranteed efficiency (CEGE) for weakly supervised learning (WSL) with incomplete, inexact, and inaccurate supervision. The core of our framework is an unbiased and statistically efficient risk estimator that is applicable to various forms of weak supervision. Specifically, by decomposing the loss function (e.g., the squared loss and the hinge loss) into a label-independent term and a label-dependent term, we discover that only the latter is influenced by the weak supervision, and that it is related to the centroid of the entire dataset. Therefore, by constructing two auxiliary pseudo-labeled datasets with synthesized labels, we derive an unbiased estimate of the centroid from each of the two auxiliary datasets. These two estimates are then linearly combined with a properly chosen coefficient, which makes the final combined estimate not only unbiased but also statistically efficient. This improves on existing methods that care only about the unbiasedness of the estimate but ignore its statistical efficiency. The good statistical efficiency of the derived estimator is guaranteed, as we theoretically prove that it attains the minimum variance when estimating the centroid. Extensive experimental results on a large number of benchmark datasets demonstrate that CEGE generally obtains better performance than existing approaches to typical WSL problems, including semi-supervised learning, positive-unlabeled learning, multiple instance learning, and label noise learning.
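The variance-minimizing linear combination of two unbiased, uncorrelated estimates is a textbook rule (weights inversely proportional to the variances). The sketch below shows that rule with hypothetical centroid estimates; it is not CEGE's exact coefficient derivation.

import numpy as np

def combine_unbiased(est1, est2, var1, var2):
    """Linear combination w*est1 + (1-w)*est2 that stays unbiased and has
    minimum variance when the two estimates are uncorrelated."""
    w = var2 / (var1 + var2)
    return w * est1 + (1 - w) * est2, w

rng = np.random.default_rng(0)
true_centroid = np.array([1.0, -2.0])
# two hypothetical unbiased centroid estimates with different noise levels
e1 = true_centroid + rng.normal(0, 0.5, size=2)   # per-coordinate variance ~ 0.25
e2 = true_centroid + rng.normal(0, 0.1, size=2)   # per-coordinate variance ~ 0.01
combined, w = combine_unbiased(e1, e2, 0.25, 0.01)
print(w, combined)   # most weight goes to the lower-variance estimate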


Subject(s)
Algorithms , Supervised Machine Learning , Benchmarking
9.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 5243-5260, 2022 09.
Article in English | MEDLINE | ID: mdl-33945470

ABSTRACT

Deep learning recognition approaches can potentially perform better if we can extract a discriminative representation that controllably separates nuisance factors. In this paper, we propose a novel approach that explicitly enforces the extracted discriminative representation d, the extracted latent variation l (e.g., background, unlabeled nuisance attributes), and the semantic variation label vector s (e.g., labeled expression/pose) to be independent and complementary to each other. We cast this problem as an adversarial game in the latent space of an auto-encoder. Specifically, with the to-be-disentangled s, we propose to equip an end-to-end conditional adversarial network with the ability to decompose an input sample into d and l. However, we argue that maximizing the cross-entropy loss of semantic variation prediction from d is not sufficient to remove the impact of s from d, and that uniform-target and entropy regularization are necessary. A collaborative mutual information regularization framework is further proposed to avoid unstable adversarial training. It minimizes the differentiable mutual information between the variables to enforce independence. The proposed discriminative representation inherits the desired tolerance property guided by prior knowledge of the task. Our framework achieves top performance on diverse recognition tasks, including digit classification, large-scale face recognition on the LFW and IJB-A datasets, and face recognition tolerant to changes in lighting, makeup, disguise, etc.
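A hedged sketch of the uniform-target regularization idea: push the semantic classifier's prediction from d toward the uniform distribution so that d carries no information about s. The PyTorch snippet below is illustrative; the shapes and toy classifier outputs are assumptions.

import torch
import torch.nn.functional as F

def uniform_target_loss(logits):
    """Encourage predictions made from the discriminative code d to be
    uninformative about s: cross-entropy to a uniform target distribution."""
    log_probs = F.log_softmax(logits, dim=1)
    num_classes = logits.size(1)
    uniform = torch.full_like(log_probs, 1.0 / num_classes)
    return -(uniform * log_probs).sum(dim=1).mean()

logits = torch.randn(8, 5, requires_grad=True)   # toy semantic-classifier outputs from d
loss = uniform_target_loss(logits)
loss.backward()
print(float(loss))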


Subject(s)
Facial Recognition , Pattern Recognition, Automated , Algorithms , Lighting
10.
Article in English | MEDLINE | ID: mdl-33621169

ABSTRACT

This paper studies instance-dependent positive and unlabeled (PU) classification, where whether a positive example will be labeled (indicated by s) is related not only to the class label y but also to the observation x. Therefore, the labeling probability on positive examples is not uniform, as previous works assumed, but is biased toward some simple or critical data points. To depict this dependency relationship, a graphical model is built in this paper, which leads to a maximization problem on the induced likelihood function regarding P(s,y|x). By utilizing the well-known EM and Adam optimization techniques, the labeling probability of any positive example P(s=1|y=1,x), as well as the classifier induced by P(y|x), can be acquired. Theoretically, we prove that the critical solution always exists and is locally unique for the linear model if some sufficient conditions are met. Moreover, we upper bound the generalization error for both the linear logistic and the non-linear network instantiations of our algorithm. Empirically, we compare our method with state-of-the-art instance-independent and instance-dependent PU algorithms on a wide range of synthetic, benchmark, and real-world datasets, and the experimental results firmly demonstrate the advantage of the proposed method over existing PU approaches.

11.
IEEE Trans Pattern Anal Mach Intell ; 43(10): 3446-3461, 2021 Oct.
Article in English | MEDLINE | ID: mdl-32248094

ABSTRACT

Learning-based lossy image compression usually involves the joint optimization of rate-distortion performance and requires coping with the spatial variation of image content and the contextual dependence among learned codes. Traditional entropy models can spatially adapt the local bit rate based on the image content, but are usually limited in exploiting context in the code space. On the other hand, most deep context models are computationally very expensive and cannot efficiently decode symbols in parallel. In this paper, we present a content-weighted encoder-decoder model in which channel-wise multi-valued quantization is deployed for the discretization of the encoder features, and an importance map subnet is introduced to generate importance masks for spatially varying code pruning. Consequently, the summation of the importance masks can serve as an upper bound on the length of the bitstream. Furthermore, the quantized representations of the learned code and the importance map are still spatially dependent, and can be losslessly compressed using arithmetic coding. To compress the codes effectively and efficiently, we propose an upper-triangular masked convolutional network (triuMCN) for large-context modeling. Experiments show that the proposed method produces visually much better results and performs favorably against deep and traditional lossy image compression approaches.
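A toy sketch of spatially varying code pruning with an importance map: each location's importance decides how many code channels survive, so the mask sum bounds the code length. The shapes and rounding rule are assumptions for illustration, not the paper's implementation.

import numpy as np

def importance_masks(imp_map, num_channels):
    """imp_map: (H, W) values in [0, 1]. Returns a (C, H, W) binary mask that
    keeps roughly imp_map*C channels at each spatial location."""
    keep = np.ceil(imp_map * num_channels).astype(int)   # channels to keep per location
    c_idx = np.arange(num_channels)[:, None, None]
    return (c_idx < keep[None, :, :]).astype(np.float32)

imp = np.random.rand(4, 4)
mask = importance_masks(imp, num_channels=8)
code = np.random.randint(0, 4, size=(8, 4, 4))           # toy multi-valued quantized code
pruned = code * mask
print(mask.sum(), "kept symbols bound the bitstream length")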

12.
IEEE Trans Cybern ; 51(2): 534-547, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31170087

ABSTRACT

Multiview learning has been widely studied in various fields and has achieved outstanding performance in comparison with many single-view-based approaches. In this paper, a novel multiview learning method based on the Gaussian process latent variable model (GPLVM) is proposed. In contrast to existing GPLVM methods, which only assume transformations from the latent variable to the multiple observed inputs, our method simultaneously takes a back constraint into account, encoding the multiple observations to the latent variable under a Gaussian process (GP) prior. In particular, to overcome the difficulty of the covariance matrix calculation in the encoder, a linear projection is designed to first map the different observations to a consistent subspace. The variable obtained in this subspace is then projected to the latent variable in the manifold space with the GP prior. Furthermore, unlike most GPLVM methods, which strongly assume that the covariance matrices follow a certain kernel function, for example, the radial basis function (RBF), we introduce a multikernel strategy to design the covariance matrix, which is more reasonable and adaptive for data representation. To apply the presented approach to classification, a discriminative prior is also imposed on the learned latent variables to encourage samples belonging to the same category to be close and those belonging to different categories to be far apart. Experimental results on three real-world databases substantiate the effectiveness and superiority of the proposed method compared with state-of-the-art approaches.

13.
IEEE Trans Cybern ; 51(4): 2019-2031, 2021 Apr.
Article in English | MEDLINE | ID: mdl-31180903

ABSTRACT

Healthcare question answering (HQA) systems play a vital role in encouraging patients to seek professional consultation. However, there are several challenging factors in learning and representing the question corpus of HQA datasets, such as high dimensionality, sparseness, noise, and nonprofessional expression. To address these issues, we propose an inception convolutional autoencoder model for Chinese healthcare question clustering (ICAHC). First, we select a set of kernels with different sizes using convolutional autoencoder networks to explore both diversity and quality in the clustering ensemble; these kernels are thus encouraged to capture diverse representations. Second, we design four ensemble operators to merge representations based on whether they are independent, and feed them into the encoder through different skip connections. Third, the model maps features from the encoder into a lower-dimensional space, followed by clustering. We conduct comparative experiments against other clustering algorithms on a Chinese healthcare dataset. Experimental results show the effectiveness of ICAHC in discovering better clustering solutions. The results can be used in the prediction of patients' conditions and in the development of an automatic HQA system.


Subject(s)
Cluster Analysis , Delivery of Health Care/methods , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Algorithms , China , Humans
14.
Article in English | MEDLINE | ID: mdl-32305914

ABSTRACT

Precise estimation of the probabilistic structure of natural images plays an essential role in image compression. Despite the recent remarkable success of end-to-end optimized image compression, the latent codes are usually assumed to be fully statistically factorized in order to simplify entropy modeling. However, this assumption generally does not hold and may hinder compression performance. Here we present context-based convolutional networks (CCNs) for efficient and effective entropy modeling. In particular, a 3D zigzag scanning order and a 3D code dividing technique are introduced to define proper coding contexts for parallel entropy decoding, both of which boil down to placing translation-invariant binary masks on the convolution filters of CCNs. We demonstrate the promise of CCNs for entropy modeling in both lossless and lossy image compression. For the former, we directly apply a CCN to the binarized representation of an image to compute the Bernoulli distribution of each code for entropy estimation. For the latter, the categorical distribution of each code is represented by a discretized mixture of Gaussian distributions whose parameters are estimated by three CCNs. We then jointly optimize the CCN-based entropy model along with the analysis and synthesis transforms for rate-distortion performance. Experiments on the Kodak and Tecnick datasets show that our methods, powered by the proposed CCNs, generally achieve compression performance comparable to the state of the art while being much faster.
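The translation-invariant binary masks can be pictured with a standard masked convolution, where each position only sees already-decoded neighbors in the scan order; the raster-scan mask below is a generic illustration, not the paper's 3D zigzag variant.

import torch
import torch.nn as nn

class MaskedConv2d(nn.Conv2d):
    """Conv2d whose kernel is multiplied by a fixed binary mask so the center
    position depends only on already-decoded (raster-scan) neighbors."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        kh, kw = self.kernel_size
        mask = torch.ones(kh, kw)
        mask[kh // 2, kw // 2:] = 0      # exclude the current and future pixels in the row
        mask[kh // 2 + 1:, :] = 0        # exclude all later rows
        self.register_buffer("mask", mask[None, None])
    def forward(self, x):
        return nn.functional.conv2d(x, self.weight * self.mask, self.bias,
                                    self.stride, self.padding, self.dilation, self.groups)

conv = MaskedConv2d(1, 8, kernel_size=5, padding=2)
print(conv(torch.randn(1, 1, 16, 16)).shape)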

15.
IEEE Trans Neural Netw Learn Syst ; 31(11): 4791-4805, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31902779

ABSTRACT

Due to the powerful capability of data representation, deep learning has achieved remarkable performance in supervised hash function learning. However, most existing hashing methods focus on point-to-point matching, which is too strict and unnecessary. In this article, we propose a novel deep supervised hashing method that relaxes the matching between each pair of instances to a point-to-angle form. Specifically, an inner product is introduced to asymmetrically measure the similarity and dissimilarity between the real-valued output and the binary code. Different from existing methods that strictly enforce each element of the real-valued output to be either +1 or -1, we only encourage the output to be close to its corresponding semantically related binary code under the cross-angle. This asymmetric product not only projects both the real-valued output and the binary code into the same Hamming space but also relaxes the output with wider choices. To further exploit the semantic affinity, we propose a novel Hamming-distance-based triplet loss that efficiently ranks the positive and negative pairs. An algorithm is then designed to alternately obtain optimal deep features and binary codes. Experiments on four real-world datasets demonstrate the effectiveness and superiority of our approach over the state of the art.

16.
IEEE Trans Neural Netw Learn Syst ; 31(4): 1387-1400, 2020 04.
Article in English | MEDLINE | ID: mdl-31265410

ABSTRACT

The class imbalance problem has become a leading challenge. Although conventional imbalance learning methods have been proposed to tackle this problem, they have limitations: 1) undersampling methods risk losing important information, and 2) cost-sensitive methods are sensitive to outliers and noise. To address these issues, we propose a hybrid optimal ensemble classifier framework that combines density-based undersampling and cost-sensitive methods by exploring state-of-the-art solutions with a multi-objective optimization algorithm. Specifically, we first develop a density-based undersampling method that selects informative samples from the original training data via a probability-based data transformation, which yields multiple subsets following a balanced distribution across classes. Second, we exploit cost-sensitive classification to address the incompleteness of information by modifying the weights of misclassified minority samples rather than those of the majority. Finally, we introduce a multi-objective optimization procedure and utilize connections between samples to self-modify the classification result using an ensemble classifier framework. Extensive comparative experiments on real-world datasets demonstrate that our method outperforms the majority of imbalance and ensemble classification approaches.
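A rough illustration of density-based undersampling: estimate each majority-class sample's local density from its k-nearest-neighbor distances and subsample according to a probability derived from it; the specific probability transformation below is assumed for illustration, not the paper's exact rule.

import numpy as np

def density_undersample(X_major, n_keep, k=5, seed=0):
    """Keep n_keep majority-class samples, favoring denser regions."""
    rng = np.random.default_rng(seed)
    d = ((X_major[:, None, :] - X_major[None, :, :]) ** 2).sum(-1) ** 0.5
    knn_dist = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)   # mean distance to k neighbors
    density = 1.0 / (knn_dist + 1e-12)
    prob = density / density.sum()                            # assumed probability transformation
    idx = rng.choice(len(X_major), size=n_keep, replace=False, p=prob)
    return X_major[idx]

X_major = np.random.randn(200, 2)
print(density_undersample(X_major, n_keep=50).shape)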

17.
IEEE Trans Cybern ; 50(6): 2872-2885, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30596592

ABSTRACT

Clustering ensemble (CE) takes multiple clustering solutions into consideration in order to effectively improve the accuracy and robustness of the final result. To reduce redundancy as well as noise, a CE selection (CES) step is added to further enhance performance. Quality and diversity are two important metrics of CES. However, most CES strategies adopt heuristic selection methods or a threshold parameter setting to achieve a tradeoff between quality and diversity. In this paper, we propose a transfer CES (TCES) algorithm which makes use of the relationship between quality and diversity in a source dataset and transfers it to a target dataset based on three objective functions. Furthermore, a multiobjective self-evolutionary process is designed to optimize these three objective functions. Finally, we construct a transfer CE framework (TCE-TCES) based on TCES to obtain better clustering results. The experimental results on 12 transfer clustering tasks obtained from the 20newsgroups dataset show that TCE-TCES can find a better tradeoff between quality and diversity and obtain more desirable clustering results.

18.
IEEE Trans Cybern ; 49(2): 366-379, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29989979

ABSTRACT

High-dimensional data classification with very limited labeled training data is a challenging task in the area of data mining. To tackle this task, we first propose a feature selection-based semi-supervised classifier ensemble framework (FSCE) to perform high-dimensional data classification. We then design an adaptive semi-supervised classifier ensemble framework (ASCE) to improve the performance of FSCE. Compared with FSCE, ASCE is characterized by an adaptive feature selection process, an adaptive weighting process (AWP), and an auxiliary training set generation process (ATSGP). The adaptive feature selection process generates a set of compact subspaces based on the attributes selected by the feature selection algorithms, while the AWP associates each basic semi-supervised classifier in the ensemble with a weight value. The ATSGP enlarges the training set with unlabeled samples. In addition, a set of nonparametric tests is adopted to compare multiple semi-supervised classifier ensemble (SSCE) approaches over different datasets. The experiments on 20 high-dimensional real-world datasets show that: 1) the two adaptive processes in ASCE are useful for improving the performance of the SSCE approach and 2) ASCE works well on high-dimensional datasets with very limited labeled training data, and outperforms most state-of-the-art SSCE approaches.

19.
IEEE Trans Cybern ; 49(2): 403-416, 2019 Feb.
Article in English | MEDLINE | ID: mdl-29990215

ABSTRACT

Traditional ensemble learning approaches explore the feature space or the sample space separately, which prevents them from constructing more powerful learning models for classifying noisy real-world datasets. The random subspace method only searches over the selection of features, while bagging only searches over the selection of samples. To overcome these limitations, we propose the hybrid incremental ensemble learning (HIEL) approach, which takes the feature space and the sample space into consideration simultaneously to handle noisy datasets. Specifically, HIEL first adopts the bagging technique and linear discriminant analysis to remove noisy attributes, and generates a set of bootstraps and the corresponding ensemble members in the subspaces. Then, the classifiers are selected incrementally based on a classifier-specific criterion function and an ensemble criterion function, and the corresponding classifier weights are assigned during the same process. Finally, the final label is determined by a weighted voting scheme, which serves as the classification result. We also explore various classifier-specific criterion functions based on different newly proposed similarity measures, which alleviate the effect of noisy samples on the distance functions. In addition, the computational cost of HIEL is analyzed theoretically. A set of nonparametric tests is adopted to compare HIEL with other algorithms over several datasets. The experimental results show that HIEL performs well on noisy datasets, outperforming most of the compared classifier ensemble methods on 14 out of 24 noisy real-world UCI and KEEL datasets.
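For reference, the final weighted-voting step can be sketched generically as below, with assumed classifier weights; this shows only the voting, not HIEL's incremental classifier selection or criterion functions.

import numpy as np

def weighted_vote(predictions, weights, num_classes):
    """predictions: (M, N) hard labels from M classifiers for N samples."""
    scores = np.zeros((num_classes, predictions.shape[1]))
    for pred, w in zip(predictions, weights):
        scores[pred, np.arange(predictions.shape[1])] += w   # add each classifier's weighted vote
    return scores.argmax(axis=0)

preds = np.array([[0, 1, 1, 2], [0, 1, 2, 2], [1, 1, 2, 0]])        # 3 classifiers, 4 samples
print(weighted_vote(preds, weights=[0.5, 0.3, 0.4], num_classes=3))  # -> [0 1 2 2]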

20.
IEEE Trans Cybern ; 49(6): 2280-2293, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29993923

ABSTRACT

Classification of high-dimensional data with very limited labels is a challenging task in the field of data mining and machine learning. In this paper, we propose the multiobjective semisupervised classifier ensemble (MOSSCE) approach to address this challenge. Specifically, a multiobjective subspace selection process (MOSSP) in MOSSCE is first designed to generate the optimal combination of feature subspaces. Three objective functions are then proposed for MOSSP, which include the relevance of features, the redundancy between features, and the data reconstruction error. Then, MOSSCE generates an auxiliary training set based on the sample confidence to improve the performance of the classifier ensemble. Finally, the training set, combined with the auxiliary training set, is used to select the optimal combination of basic classifiers in the ensemble, train the classifier ensemble, and generate the final result. In addition, diversity analysis of the ensemble learning process is applied, and a set of nonparametric statistical tests is adopted for the comparison of semisupervised classification approaches on multiple datasets. The experiments on 12 gene expression datasets and two large image datasets show that MOSSCE has a better performance than other state-of-the-art semisupervised classifiers on high-dimensional data.
