Pattern Classification Using Ensemble Methods

1. Introduction to pattern classification
   1.1. Pattern classification
   1.2. Induction algorithms
   1.3. Rule induction
   1.4. Decision trees
   1.5. Bayesian methods
   1.6. Other induction methods
2. Introduction to ensemble learning
   2.1. Back to the roots
   2.2. The wisdom of crowds
   2.3. The bagging algorithm
   2.4. The boosting algorithm
   2.5. The AdaBoost algorithm
   2.6. No free lunch theorem and ensemble learning
   2.7. Bias-variance decomposition and ensemble learning
   2.8. Occam's razor and ensemble learning
   2.9. Classifier dependency
   2.10. Ensemble methods for advanced classification tasks
3. Ensemble classification
   3.1. Fusion methods
   3.2. Selecting classifiers
   3.3. Mixture of experts and meta-learning
4. Ensemble diversity
   4.1. Overview
   4.2. Manipulating the inducer
   4.3. Manipulating the training samples
   4.4. Manipulating the target attribute representation
   4.5. Partitioning the search space
   4.6. Multi-inducers
   4.7. Measuring the diversity
5. Ensemble selection
   5.1. Ensemble selection
   5.2. Pre-selection of the ensemble size
   5.3. Selection of the ensemble size while training
   5.4. Pruning: post-selection of the ensemble size
6. Error correcting output codes
   6.1. Code-matrix decomposition of multiclass problems
   6.2. Type I: training an ensemble given a code-matrix
   6.3. Type II: adapting code-matrices to the multiclass problems
7. Evaluating ensembles of classifiers
   7.1. Generalization error
   7.2. Computational complexity
   7.3. Interpretability of the resulting ensemble
   7.4. Scalability to large datasets
   7.5. Robustness
   7.6. Stability
   7.7. Flexibility
   7.8. Usability
   7.9. Software availability
   7.10. Which ensemble method should be used?
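The outline above covers the bagging algorithm (Section 2.3), which trains each ensemble member on a bootstrap replicate of the training set and combines their predictions by majority vote. A minimal illustrative sketch in Python (not code from the book; the one-dimensional decision stump and the toy data are assumptions made for this example):

```python
import random
from collections import Counter

# Toy one-dimensional training set: class 0 clusters near 1.5,
# class 1 clusters near 4.5 (assumed data, for illustration only).
DATA = [(1.0, 0), (1.5, 0), (2.0, 0), (4.0, 1), (4.5, 1), (5.0, 1)]

def bootstrap(data, rng):
    # Draw len(data) examples with replacement: one bootstrap replicate.
    return [rng.choice(data) for _ in data]

def train_stump(data):
    # A crude "decision stump": threshold halfway between the class
    # means (falling back to all points if a class is missing from
    # the replicate). Values above the threshold predict class 1.
    pos = [x for x, y in data if y == 1] or [x for x, _ in data]
    neg = [x for x, y in data if y == 0] or [x for x, _ in data]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def bagging_predict(stumps, x):
    # Combine the ensemble members by unweighted majority vote.
    votes = Counter(1 if x > t else 0 for t in stumps)
    return votes.most_common(1)[0][0]

rng = random.Random(0)
stumps = [train_stump(bootstrap(DATA, rng)) for _ in range(11)]
print(bagging_predict(stumps, 6.0), bagging_predict(stumps, 0.0))  # prints "1 0"
```

AdaBoost (Section 2.5) follows the same member-plus-combiner pattern, but reweights the training examples after each round to emphasize those that were misclassified, and weights each member's vote by its training accuracy.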