In AdaBoost, when you reweight the samples, how does the training process for the next classifier in the boosting algorithm take the weights into account? Is it reflected in the loss function of the learner? The ESL book doesn't really address this.
In addition, if, say, we are using trees as the weak learners, how is each subsequent tree determined? On top of the reweighted training samples, how do we choose which variables to split on at each node, how many terminal nodes to use, etc.?
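To make the question concrete, here is a minimal sketch of my current understanding of one round of discrete AdaBoost with decision stumps, using scikit-learn. My assumption (the thing I'm asking about) is that the weights enter only through the learner's `sample_weight` argument, so the tree minimizes a weighted impurity; the choice of `max_depth=1` and the dataset are just for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy data with labels recoded to {-1, +1} as in the AdaBoost derivation
X, y = make_classification(n_samples=200, random_state=0)
y = 2 * y - 1

n = len(y)
w = np.full(n, 1.0 / n)  # start with uniform weights

for m in range(3):
    stump = DecisionTreeClassifier(max_depth=1, random_state=0)
    # Assumption: the reweighting is passed straight to the weak learner,
    # which then uses a *weighted* impurity when choosing splits
    stump.fit(X, y, sample_weight=w)
    pred = stump.predict(X)

    # Weighted training error of this round's stump
    err = np.sum(w * (pred != y)) / np.sum(w)
    alpha = 0.5 * np.log((1 - err) / err)

    # Upweight misclassified points, then renormalize
    w *= np.exp(-alpha * y * pred)
    w /= w.sum()
```

If this is roughly right, then my question reduces to: beyond `sample_weight`, does anything else about how each tree is grown (split variables, tree size) change from round to round?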