Multiple Classifier Systems: 9th International Workshop, MCS 2010, Cairo, Egypt, April 7-9, 2010. Proceedings

By Santi Seguí, Laura Igual, Jordi Vitrià (auth.), Neamat El Gayar, Josef Kittler, Fabio Roli (eds.)

This book constitutes the proceedings of the 9th International Workshop on Multiple Classifier Systems, MCS 2010, held in Cairo, Egypt, in April 2010. The 31 papers presented were carefully reviewed and selected from 50 submissions. The contributions are organized in topical sections on classifier combination and classifier selection, diversity, bagging and boosting, combination of multiple kernels, and applications.


Best computers books

Application and Theory of Petri Nets 1993: 14th International Conference Chicago, Illinois, USA, June 21–25, 1993 Proceedings

This volume contains the proceedings of the 14th International Conference on Application and Theory of Petri Nets. The aim of the Petri net conferences is to create a forum for discussing progress in the application and theory of Petri nets. Typically, the conferences have 150-200 participants, one third of whom come from industry, while the rest are from universities and research institutes.

Digital Image Processing, 6th ed.

The sixth edition has been revised and extended. The complete textbook is now clearly partitioned into basic and advanced material in order to cope with the ever-increasing field of digital image processing. In this way, you can first work your way through the basic principles of digital image processing without getting overwhelmed by the wealth of material, and then extend your studies to selected topics of interest.

Additional resources for Multiple Classifier Systems: 9th International Workshop, MCS 2010, Cairo, Egypt, April 7-9, 2010. Proceedings

Example text

Moreover, we introduce a new criterion for splitting the features, based on maximizing the strength of the views and their diversity, in order to take advantage of the co-training paradigm. In the following we describe our proposed measures in more detail.

1 Confidence of the Views

The first requirement for successful co-training is that the features are redundant enough, that is, each view is strong enough to perform classification on its own. Based on that hypothesis, we propose a genetic algorithm to select the split that maximizes the strength of the views.
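To make the idea concrete, here is a minimal sketch of such a genetic search over feature splits, assuming scikit-learn-style classifiers. The binary encoding, the GaussianNB base learner, and the strength-only fitness below are illustrative assumptions, not the authors' implementation (their full criterion also adds a diversity term):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def view_strength(X, y, mask):
    # Fitness of a split: the cross-validated accuracy of the *weaker* view,
    # so a split only scores well when each view can classify on its own.
    accs = []
    for view in (mask == 0, mask == 1):
        if not view.any():              # degenerate split with an empty view
            return 0.0
        accs.append(cross_val_score(GaussianNB(), X[:, view], y, cv=3).mean())
    return min(accs)

def ga_split(X, y, pop_size=20, generations=30, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    # Each individual assigns every feature to view 1 (bit 0) or view 2 (bit 1).
    population = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        fitness = np.array([view_strength(X, y, ind) for ind in population])
        parents = population[np.argsort(fitness)[::-1][: pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n) < 0.5, a, b)   # uniform crossover
            child = child ^ (rng.random(n) < p_mut)       # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    return max(population, key=lambda ind: view_strength(X, y, ind))

Taking the weaker view's accuracy as the fitness matches the redundancy requirement above: a split is only as strong as its weakest view.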

…(i.e. so that it is not necessarily the case that feature sets within the individual classifiers are fully coincident), then it can be shown that linear classifier combination (e.g. Sum Rule, Product Rule) is either equivalent to, or bounded by, back-projection, the inverse operation to Radon projection: $p_b(X^n) = \frac{1}{M} \sum_{i=1}^{M} f_i(x_i, y)$. However, this introduces an axially aligned artefact, $A(X^n) = \sum_i \int f_i \, dx_i$ over all $dX_i^n$, that is a consequence of the fact that the Radon projections induced by feature selection represent only …
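As a small numeric illustration of the combination rule (not code from the paper): with hypothetical scores f_i(x_i, y) from M = 3 classifiers over four candidate classes, the Sum Rule combination is exactly the mean used in the back-projection p_b above.

import numpy as np

# Hypothetical scores: M = 3 classifiers, each trained on its own feature
# subset, scoring the same sample against 4 candidate classes y.
f = np.array([
    [0.7, 0.1, 0.1, 0.1],   # classifier 1: f_1(x_1, y)
    [0.5, 0.3, 0.1, 0.1],   # classifier 2: f_2(x_2, y)
    [0.6, 0.2, 0.1, 0.1],   # classifier 3: f_3(x_3, y)
])

p_b = f.mean(axis=0)        # back-projection = Sum Rule combination
print(p_b, p_b.argmax())    # combined scores and the winning class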

In this case, each feature set is sufficient to perform classification and the views are truly independent. For example, in an email spam classification problem, one view may contain the features describing the subject of the email and the other may contain the features describing the body. Natural splits satisfy the co-training requirements proposed by Blum and Mitchell, who showed that using unlabeled data for co-training improves performance when a natural split exists [2].

Input:
– L: a small set of labeled examples
– U: a large set of unlabeled examples
– V1, V2: two sets of features describing the examples

Algorithm:
– Create a pool U' by randomly choosing u examples from U
– Loop for k iterations:
  – Train classifier C1 from L based on V1
  – Train classifier C2 from L based on V2
  – C1 predicts the class of examples from U' based on V1 and chooses the most confidently predicted p positive and n negative examples E1
  – C2 predicts the class of examples from U' based on V2 and chooses the most confidently predicted p positive and n negative examples E2
  – E1 and E2 are removed from U' and added with their labels to L
  – Randomly choose 2p+2n examples from U to replenish U'
– End

Fig.
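For concreteness, a runnable sketch of the loop in the figure above, assuming binary {0, 1} labels with both classes present in L and feature views given as column-index lists; the GaussianNB base learner and the default parameter values are illustrative choices, not the paper's:

import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_l, y_l, X_u, view1, view2, u=75, k=30, p=1, n=3, seed=0):
    rng = np.random.default_rng(seed)
    X_l, y_l, X_u = X_l.copy(), y_l.copy(), list(X_u)
    rng.shuffle(X_u)
    pool, X_u = X_u[:u], X_u[u:]        # U': a small random pool drawn from U
    c1, c2 = GaussianNB(), GaussianNB()
    for _ in range(k):
        if not pool:
            break
        P = np.asarray(pool)
        c1.fit(X_l[:, view1], y_l)      # train each classifier on its view
        c2.fit(X_l[:, view2], y_l)
        picked = set()
        for clf, view in ((c1, view1), (c2, view2)):
            proba = clf.predict_proba(P[:, view])
            # most confidently predicted p positive and n negative examples
            for cls, count in ((1, p), (0, n)):
                col = list(clf.classes_).index(cls)
                for i in np.argsort(proba[:, col])[::-1][:count]:
                    if i not in picked:
                        picked.add(i)
                        X_l = np.vstack([X_l, P[i]])     # add to L with its
                        y_l = np.append(y_l, cls)        # predicted label
        # remove the newly labeled picks from U' and replenish from U
        pool = [ex for i, ex in enumerate(pool) if i not in picked]
        refill, X_u = X_u[: 2 * p + 2 * n], X_u[2 * p + 2 * n :]
        pool.extend(refill)
    return c1, c2, X_l, y_l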
