1.
IEEE Trans Pattern Anal Mach Intell ; 37(1): 107-20, 2015 Jan.
Article in English | MEDLINE | ID: mdl-26353212

ABSTRACT

Multi-label learning deals with the problem where each example is represented by a single instance (feature vector) while associated with a set of class labels. Existing approaches learn from multi-label data by manipulating an identical feature set, i.e., the same instance representation of each example is employed in the discrimination processes of all class labels. However, this popular strategy might be suboptimal, as each label is supposed to possess specific characteristics of its own. In this paper, another strategy to learn from multi-label data is studied, where label-specific features are exploited to benefit the discrimination of different class labels. Accordingly, an intuitive yet effective algorithm named LIFT (multi-label learning with label-specific features) is proposed. LIFT first constructs features specific to each label by conducting clustering analysis on its positive and negative instances, and then performs training and testing by querying the clustering results. Comprehensive experiments on a total of 17 benchmark data sets clearly validate the superiority of LIFT over other well-established multi-label learning algorithms, as well as the effectiveness of label-specific features.
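As a concrete illustration of the procedure the abstract describes, the Python sketch below clusters each label's positive and negative instances with k-means and uses distances to the resulting cluster centers as that label's specific features. The function names, the cluster-ratio parameter r, and the choice of a linear SVM as the per-label binary learner are illustrative assumptions, not the authors' reference implementation.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.svm import LinearSVC

def lift_fit(X, Y, r=0.1):
    """X: (n, d) instance matrix; Y: (n, q) binary label matrix."""
    models = []
    for k in range(Y.shape[1]):
        pos, neg = X[Y[:, k] == 1], X[Y[:, k] == 0]
        # Cluster count proportional to the smaller class; the sketch
        # assumes every label has both positive and negative instances.
        m = max(1, int(np.ceil(r * min(len(pos), len(neg)))))
        centers = np.vstack([
            KMeans(n_clusters=m, n_init=10).fit(pos).cluster_centers_,
            KMeans(n_clusters=m, n_init=10).fit(neg).cluster_centers_,
        ])
        # Label-specific features: distances from every instance
        # to the 2m cluster centers for this label.
        Z = euclidean_distances(X, centers)
        models.append((centers, LinearSVC().fit(Z, Y[:, k])))
    return models

def lift_predict(X, models):
    # Re-map instances into each label's specific feature space,
    # then query that label's binary classifier.
    return np.column_stack(
        [clf.predict(euclidean_distances(X, c)) for c, clf in models])

Training and prediction both pass through the same per-label mapping, so each binary classifier only ever sees the feature space constructed for its own label.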

2.
IEEE Trans Syst Man Cybern B Cybern ; 41(6): 1612-26, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21708503

ABSTRACT

Co-training is one of the major semi-supervised learning paradigms. It iteratively trains two classifiers on two different views and uses each classifier's predictions on the unlabeled examples to augment the training set of the other. During the co-training process, especially in the initial rounds when the classifiers have only mediocre accuracy, it is quite possible that one classifier will receive labels on unlabeled examples that were erroneously predicted by the other classifier. The performance of co-training-style algorithms is therefore usually unstable. In this paper, the problem of how to reliably communicate labeling information between different views is addressed by a novel co-training algorithm named COTRADE. In each labeling round, COTRADE carries out the label communication process in two steps. First, the confidence of each classifier's predictions on unlabeled examples is explicitly estimated based on specific data editing techniques. Second, a number of each classifier's higher-confidence predicted labels are passed to the other classifier, where certain constraints are imposed to avoid introducing undesirable classification noise. Experiments on several real-world datasets across three domains show that COTRADE can effectively exploit unlabeled data to achieve better generalization performance.
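The sketch below renders one confidence-ranked labeling round in the spirit of the abstract, assuming two precomputed views X1/X2 and naive Bayes base learners. Classifier posterior probabilities stand in for the paper's data-editing-based confidence estimates, and the parameters rounds, n_pass, and tau are illustrative, not COTRADE's actual constraints.

import numpy as np
from sklearn.naive_bayes import GaussianNB

def cotrade_sketch(X1, X2, y, labeled, unlabeled,
                   rounds=10, n_pass=5, tau=0.9):
    """X1, X2: the two views; y: labels, valid only on `labeled` indices."""
    labeled, unlabeled = list(labeled), set(unlabeled)
    y = np.array(y).copy()   # communicated pseudo-labels are written here
    c1, c2 = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        if not unlabeled:
            break
        c1.fit(X1[labeled], y[labeled])
        c2.fit(X2[labeled], y[labeled])
        passed = set()
        # Each classifier passes its n_pass most confident predictions to
        # the other side, subject to a confidence floor tau -- a crude
        # stand-in for COTRADE's noise-control constraints.
        for clf, X in ((c1, X1), (c2, X2)):
            U = sorted(unlabeled)
            proba = clf.predict_proba(X[U])
            for j in np.argsort(-proba.max(axis=1))[:n_pass]:
                if proba[j].max() >= tau:
                    y[U[j]] = clf.classes_[proba[j].argmax()]
                    passed.add(U[j])
        labeled.extend(passed)
        unlabeled -= passed
    return c1, c2

If both classifiers select the same unlabeled example with conflicting labels, this sketch simply keeps the later write; a faithful implementation would resolve such conflicts as part of its noise-control constraints.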
