#StackBounty: #boosting #adaboost Adaboost — how does reweighting affect the learning process for the subsequent learner?

Bounty: 100

In AdaBoost, when the samples are reweighted, how does the training process for the next classifier in the boosting algorithm take the weights into account? Are they reflected in the loss function of the weak learner? The ESL book doesn't really discuss this.
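For concreteness, here is a rough sketch of my current understanding (not taken from ESL, so it may be off): the weights seem to enter simply as a weighted average in whatever loss the weak learner minimizes, e.g. a weighted 0-1 loss for a classifier.

```python
import numpy as np

def weighted_error(y_true, y_pred, w):
    """Weighted misclassification error:
    sum_i w_i * 1[y_i != h(x_i)] / sum_i w_i.
    This is (as I understand it) the quantity the weak learner is asked to
    minimize once the samples carry AdaBoost weights w_i."""
    w = np.asarray(w, dtype=float)
    miss = (np.asarray(y_true) != np.asarray(y_pred)).astype(float)
    return np.dot(w, miss) / w.sum()
```

Is this the right picture, or do the weights enter the learner in some other way (e.g. by resampling the training set according to the weights)?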

In addition, if, say, we are using trees as the weak learners, how is each subsequent tree determined? In other words, beyond the reweighted training samples, how do we choose which variables to split on at each node of the next tree, how many terminal nodes to use, and so on? See the sketch below for what I have in mind.
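Here is a minimal sketch of how I imagine the loop with trees would look, assuming the weak learner accepts per-sample weights (as, for instance, sklearn's `DecisionTreeClassifier` does via `sample_weight`). In this picture every tree is grown from scratch with the same fixed hyperparameters (here stumps with `max_depth=1`), and only the sample weights change between rounds, which is what changes the splits the next tree picks. Is this correct, or is there more to how the next tree is chosen?

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_m1(X, y, n_rounds=50):
    """AdaBoost.M1 with decision stumps; y is assumed to be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                        # start with uniform weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)           # weights enter the split criterion
        pred = stump.predict(X)
        miss = (pred != y).astype(float)
        err = np.dot(w, miss) / w.sum()            # weighted training error
        alpha = np.log((1.0 - err) / max(err, 1e-12))
        w = w * np.exp(alpha * miss)               # upweight the misclassified points
        w = w / w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    """Weighted majority vote over the fitted stumps."""
    agg = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(agg)
```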

