Abstract
Evolutionary computing provides powerful methods for designing pattern recognition systems. This design process is typically based on finite sample data and therefore bears the risk of overfitting. This paper aims at raising awareness of the various types of overfitting and at providing guidelines for dealing with them. We restrict our considerations to the predominant scenario in which fitness computations are based on point estimates. Three different sources of lost generalization performance when evolving learning machines, namely overfitting to training, test, and final selection data, are identified, discussed, and experimentally demonstrated. The importance of a pristine hold-out data set for selecting the final result from the evolved candidates is highlighted. It is shown that it may be beneficial to restrict this last selection process to a subset of the evolved candidates.
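The following is a minimal sketch of the selection strategy the abstract recommends: candidates evolved against training data are ranked by test error, selection is restricted to a top-k subset, and the final choice is made once on a pristine hold-out set. The toy threshold classifier, the random stand-in for an evolved population, and all names are illustrative assumptions, not the paper's actual experimental setup.

```python
# Hedged sketch (not the paper's code): final selection on pristine
# hold-out data, restricted to a top-k subset of candidates.
import random

random.seed(0)

def make_data(n):
    """Toy 1-D binary classification data: label = noisy sign of x."""
    xs = [random.uniform(-1, 1) for _ in range(n)]
    return [(x, int(x + random.gauss(0, 0.3) > 0)) for x in xs]

def error(threshold, data):
    """Point-estimate fitness: misclassification rate of a threshold rule."""
    return sum((x > threshold) != bool(y) for x, y in data) / len(data)

train, test, holdout = make_data(100), make_data(100), make_data(100)

# Stand-in for an evolved population: candidate thresholds that would
# normally be optimized against the training data.
candidates = [random.uniform(-1, 1) for _ in range(50)]

# Rank by test error. Picking the overall winner on the test set alone
# would overfit to it, so the test error only narrows the field ...
ranked = sorted(candidates, key=lambda c: error(c, test))
top_k = ranked[:5]  # restrict selection to a subset of the candidates

# ... and the final choice is made once, on the pristine hold-out set.
final = min(top_k, key=lambda c: error(c, holdout))
print(f"final threshold {final:.3f}, hold-out error {error(final, holdout):.3f}")
```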
Original language | English |
---|---|
Journal | IEEE Transactions on Evolutionary Computation |
Volume | 17 |
Issue number | 3 |
Pages (from-to) | 345-352 |
Number of pages | 8 |
ISSN | 1089-778X |
DOI | |
Status | Published - 2013 |