
Boosting as a Monte Carlo Algorithm

ESPOSITO, Roberto
2001-01-01

Abstract

This paper presents a new view of majority voting as a Monte Carlo stochastic algorithm. The relation between the two approaches allows AdaBoost's example weighting strategy to be compared with the greedy covering strategy long used in Machine Learning. Although one might expect the greedy strategy to be highly prone to overfitting, extensive experimental results do not support this expectation. The greedy strategy shows no clear sign of overfitting, runs at least one order of magnitude faster, reaches zero error on the training set in a few trials, and its error on the test set is usually comparable to, if not lower than, that exhibited by AdaBoost.
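As a rough illustration of the two strategies named in the abstract, the sketch below contrasts a textbook AdaBoost-style weight update with a simple greedy-covering update. This is a generic illustration under stated assumptions, not the formulation used by the authors: the function names, the use of NumPy, and the {-1, +1} label encoding are hypothetical choices made for the example.

    # Illustrative sketch only; not the paper's implementation.
    # Assumes labels y and weak-hypothesis predictions preds take values in {-1, +1}.
    import numpy as np

    def adaboost_reweight(w, y, preds):
        """Textbook AdaBoost update: up-weight misclassified examples."""
        eps = np.sum(w[preds != y]) / np.sum(w)      # weighted training error
        eps = np.clip(eps, 1e-12, 1 - 1e-12)         # guard against division by zero
        alpha = 0.5 * np.log((1 - eps) / eps)        # weight of the weak hypothesis
        w = w * np.exp(-alpha * y * preds)           # mistakes grow, correct ones shrink
        return w / w.sum(), alpha

    def greedy_cover_reweight(w, y, preds):
        """Greedy covering: zero out examples already classified correctly,
        so the next weak hypothesis concentrates on the uncovered ones."""
        w = w.copy()
        w[preds == y] = 0.0
        fully_covered = w.sum() == 0                 # training set covered: stop
        if fully_covered:
            return w, fully_covered
        return w / w.sum(), fully_covered

    # Example usage with random toy data
    rng = np.random.default_rng(0)
    n = 8
    y = rng.choice([-1, 1], size=n)
    preds = rng.choice([-1, 1], size=n)
    w = np.full(n, 1.0 / n)
    print(adaboost_reweight(w, y, preds)[0])
    print(greedy_cover_reweight(w, y, preds)[0])

The contrast reflected in the abstract is visible in the update rules: AdaBoost keeps every example with a positive, smoothly adjusted weight, whereas greedy covering discards covered examples outright, which is why it can reach zero training error in a few trials and run much faster.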
Year: 2001
Conference proceedings: AI*IA 2001: Advances in Artificial Intelligence
Venue: Bari
Conference dates: September 25-28, 2001
Volume: 2175
Pages: 11-19
Authors: Roberto Esposito; Lorenza Saitta
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/2318/48861