Abstract

Evolutionary algorithms (EAs) are often highly dependent on the correct setting of their parameters, and benchmarking different parametrisations allows a user to identify which parameters offer the best performance on their given problem. Visualisation offers a way of presenting the results of such benchmarking so that a non-expert user can understand how their algorithm is performing. By examining the characteristics of their algorithm, such as convergence and diversity, the user can learn how effective their chosen algorithm parametrisation is. This paper presents a technique intended to offer this insight by showing the relative performance of a set of EAs optimising the same multi-objective problem in a simple visualisation. The visualisation characterises the behaviour of the algorithm in terms of known performance indicators drawn from the literature, and can also visualise the optimisation of many-objective problems. The method is demonstrated with benchmark test problems from the popular DTLZ and CEC 2009 problem suites, optimising them with different parametrisations of both NSGA-II and NSGA-III, and it is shown that known characteristics of optimisers solving these problems can be observed in the resulting visualisations.

DOI

10.1016/j.asoc.2019.105902

Publication Date

2019-11-28

Publication Title

Applied Soft Computing

ISSN

1568-4946

Embargo Period

2020-11-27

Organisational Unit

School of Engineering, Computing and Mathematics

Keywords

Benchmarking, Parametrisation, Visualisation, Many-Objective, Multi-Objective, Optimisation
