TY - GEN
T1 - Preferences-based choice prediction in evolutionary multi-objective optimization
AU - Aggarwal, Manish
AU - Heinermann, Justin
AU - Oehmcke, Stefan
AU - Kramer, Oliver
PY - 2017/1/1
Y1 - 2017/1/1
N2 - Evolutionary multi-objective algorithms (EMOAs) of the type of NSGA-2 approximate the Pareto-front, after which a decision-maker (DM) is confronted with the primary task of selecting the best solution among all the equally good solutions on the Pareto-front. In this paper, we complement the popular NSGA-2 EMOA by a posteriori identifying a DM’s best solution among the candidate solutions on the Pareto-front generated through NSGA-2. To this end, we employ a preference-based learning approach to learn an abstract ideal reference point of the DM in the multi-objective space, which reflects the compromises the DM makes among a set of conflicting objectives. The solution closest to this reference point is then predicted as the DM’s best solution. Pairwise comparisons of the candidate solutions provide the training information for our learning model. The experimental results on the ZDT1 benchmark show that the proposed approach is not only intuitive, but also easy to apply and robust to inconsistencies in the DM’s preference statements.
AB - Evolutionary multi-objective algorithms (EMOAs) of the type of NSGA-2 approximate the Pareto-front, after which a decision-maker (DM) is confronted with the primary task of selecting the best solution among all the equally good solutions on the Pareto-front. In this paper, we complement the popular NSGA-2 EMOA by a posteriori identifying a DM’s best solution among the candidate solutions on the Pareto-front generated through NSGA-2. To this end, we employ a preference-based learning approach to learn an abstract ideal reference point of the DM in the multi-objective space, which reflects the compromises the DM makes among a set of conflicting objectives. The solution closest to this reference point is then predicted as the DM’s best solution. Pairwise comparisons of the candidate solutions provide the training information for our learning model. The experimental results on the ZDT1 benchmark show that the proposed approach is not only intuitive, but also easy to apply and robust to inconsistencies in the DM’s preference statements.
KW - Multi-objective optimization
KW - NSGA-2
KW - Preference-based learning
KW - Solution selection
UR - http://www.scopus.com/inward/record.url?scp=85017558458&partnerID=8YFLogxK
U2 - 10.1007/978-3-319-55849-3_46
DO - 10.1007/978-3-319-55849-3_46
M3 - Article in proceedings
AN - SCOPUS:85017558458
SN - 9783319558486
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 715
EP - 724
BT - Applications of Evolutionary Computation - 20th European Conference, EvoApplications 2017, Proceedings
A2 - Hidalgo, J.Ignacio
A2 - Cotta, Carlos
A2 - Hu, Ting
A2 - Tonda, Alberto
A2 - Burrelli, Paolo
A2 - Coler, Matt
A2 - Iacca, Giovanni
A2 - Kampouridis, Michael
A2 - Mora Garcia, Antonio M.
A2 - Squillero, Giovanni
A2 - Brabazon, Anthony
A2 - Haasdijk, Evert
A2 - Heinerman, Jacqueline
A2 - D'Andreagiovanni, Fabio
A2 - Bacardit, Jaume
A2 - Nguyen, Trung Thanh
A2 - Silva, Sara
A2 - Tarantino, Ernesto
A2 - Esparcia-Alcazar, Anna I.
A2 - Ascheid, Gerd
A2 - Glette, Kyrre
A2 - Cagnoni, Stefano
A2 - Kaufmann, Paul
A2 - de Vega, Francisco Fernandez
A2 - Mavrovouniotis, Michalis
A2 - Zhang, Mengjie
A2 - Divina, Federico
A2 - Sim, Kevin
A2 - Urquhart, Neil
A2 - Schaefer, Robert
PB - Springer Verlag
T2 - 20th European Conference on the Applications of Evolutionary Computation, EvoApplications 2017
Y2 - 19 April 2017 through 21 April 2017
ER -