Is a Greedy Covering Strategy an Extreme Boosting?
ESPOSITO, Roberto
2002-01-01
Abstract
This paper presents a new view of majority voting as a Monte Carlo stochastic algorithm. The relation between the two approaches allows AdaBoost's example weighting strategy to be compared with the greedy covering strategy long used in Machine Learning. The greedy covering strategy does not clearly show overfitting, runs in at least one order of magnitude less time, reaches zero error on the training set in a few trials, and its error on the test set is usually comparable to that exhibited by AdaBoost.
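For intuition, the following is a minimal sketch, not the paper's implementation, of a greedy covering strategy for building a majority-voting ensemble: at each round it keeps the weak hypothesis (here a decision stump, an assumed weak learner) that correctly classifies the most training examples not yet covered, instead of reweighting examples as AdaBoost does. All dataset and helper names are hypothetical.

```python
# Illustrative sketch (assumption, not the paper's method): greedy covering
# over a pool of decision stumps, combined by unweighted majority vote.
import numpy as np

def stump_predict(X, feature, threshold, sign):
    """Decision stump: predict sign where X[:, feature] > threshold, else -sign."""
    return sign * np.where(X[:, feature] > threshold, 1, -1)

def greedy_covering_ensemble(X, y, max_rounds=50):
    """Greedily add stumps until every training example is 'covered',
    i.e. correctly classified by at least one selected stump."""
    n = len(y)
    covered = np.zeros(n, dtype=bool)
    ensemble = []
    # Candidate pool: one stump per (feature, threshold, sign) triple.
    candidates = [(f, t, s)
                  for f in range(X.shape[1])
                  for t in np.unique(X[:, f])
                  for s in (+1, -1)]
    for _ in range(max_rounds):
        if covered.all():
            break
        # Pick the stump correctly classifying the most uncovered examples.
        best = max(candidates,
                   key=lambda c: np.sum(~covered & (stump_predict(X, *c) == y)))
        ensemble.append(best)
        covered |= (stump_predict(X, *best) == y)
    return ensemble

def majority_vote(ensemble, X):
    """Unweighted majority vote over the selected stumps."""
    votes = sum(stump_predict(X, *c) for c in ensemble)
    return np.where(votes >= 0, 1, -1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)  # toy labels
    ens = greedy_covering_ensemble(X, y)
    err = np.mean(majority_vote(ens, X) != y)
    print(f"{len(ens)} stumps selected, training error = {err:.3f}")
```

Unlike AdaBoost, which re-weights every example after each round and selects hypotheses by weighted error, this covering loop removes already-covered examples from consideration entirely, which is what makes it reach full coverage of the training set in few rounds.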