Abstract
Evolutionary computing provides powerful methods for designing pattern recognition systems. This design process is typically based on finite sample data and therefore bears the risk of overfitting. This paper aims to raise awareness of the various types of overfitting and to provide guidelines on how to deal with them. We restrict our considerations to the predominant scenario in which fitness computations are based on point estimates. Three different sources of lost generalization performance when evolving learning machines, namely overfitting to training, test, and final selection data, are identified, discussed, and experimentally demonstrated. The importance of a pristine hold-out data set for selecting the final result from the evolved candidates is highlighted. It is shown that it may be beneficial to restrict this last selection process to a subset of the evolved candidates.
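The following is a minimal sketch, not the authors' experimental setup, of the data-set roles the abstract distinguishes: training data drives the fitness evaluations during evolution, test data is used to rank the evolved candidates, and a pristine hold-out set is consulted only once for the final selection, here restricted to a shortlist of the best-ranked candidates. All names (`make_data`, `accuracy`, `k_best`) and the toy (1, lambda)-style loop are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)          # fixed ground-truth separating hyperplane

def make_data(n):
    """Toy binary classification data sharing the same ground truth."""
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(int)
    return X, y

def accuracy(w, X, y):
    """Point-estimate fitness: accuracy of a linear threshold classifier."""
    return np.mean((X @ w > 0).astype(int) == y)

# Disjoint data sets for the roles named in the abstract.
X_train, y_train = make_data(100)    # fitness evaluation during evolution
X_test,  y_test  = make_data(100)    # ranking the evolved candidates
X_hold,  y_hold  = make_data(100)    # pristine hold-out: final selection only
X_eval,  y_eval  = make_data(2000)   # fresh data to estimate generalization

# Simple (1, lambda)-style evolution of linear classifiers on training fitness.
parent = rng.normal(size=d)
candidates = []
for _ in range(50):
    offspring = parent + 0.3 * rng.normal(size=(20, d))
    fitness = [accuracy(w, X_train, y_train) for w in offspring]
    parent = offspring[int(np.argmax(fitness))]
    candidates.append(parent)        # keep each generation's champion

# Restrict the last selection step to the k candidates that rank best on the
# test data, then pick the final model on the untouched hold-out set.
k_best = 5
shortlist = sorted(candidates, key=lambda w: accuracy(w, X_test, y_test),
                   reverse=True)[:k_best]
final = max(shortlist, key=lambda w: accuracy(w, X_hold, y_hold))

# Reporting the hold-out accuracy of `final` would itself be optimistically
# biased (overfitting to the selection data), so generalization is estimated
# on fresh data here.
print("estimated generalization accuracy:", accuracy(final, X_eval, y_eval))
```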
Original language | English |
---|---|
Journal | IEEE Transactions on Evolutionary Computation |
Volume | 17 |
Issue number | 3 |
Pages (from-to) | 345-352 |
Number of pages | 8 |
ISSN | 1089-778X |
DOIs | |
Publication status | Published - 2013 |