J2-4460

Automated selection and configuration of single-objective continuous optimization algorithms
Avtomatizirana izbira in konfiguracija eno-kriterijskih zveznih optimizacijskih algoritmov

No. of contract: J2-4460
Duration: from 01.10.2022 to 30.09.2025
Contact:

In recent years, optimization has become an important tool in many research fields where computer simulations are used to evaluate candidate solutions to a problem. Unfortunately, there is no single optimization algorithm that is best at solving all possible problems, a result known as the “no free lunch” theorem. Choosing the best optimization algorithm for a particular instance of an optimization problem is therefore a challenging task. There is a large number of optimization algorithms to choose from, and finding the best one by optimizing the problem with many of them is very time-consuming.

Moreover, most optimization algorithms are parameterized and require their parameters to be set (i.e., the algorithm to be configured) separately for each problem instance. So, in addition to choosing the optimization algorithm itself, it is often necessary to determine the best values of its hyper-parameters for the instance being solved. Finally, the computer simulation of real problems is often time-consuming. To determine the best optimization algorithm for a given problem instance, one must therefore choose among many algorithms, each of which may have many hyper-parameters that need to be tuned using lengthy computer simulations. Since this is such a challenging task, the user usually chooses one of the following options:

  • Use an optimization algorithm they are familiar with.
  • Use an optimization algorithm that has already been used for a similar problem.
  • Use the currently “most popular” optimization algorithm.
  • Identify the characteristics of the problem instance and choose the best optimization algorithm based on them.
  • Compare a small subset of optimization algorithms on a small number of test problem instances and select the most efficient one (a sketch of this strategy follows the list).
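As a minimal sketch of the last strategy, the snippet below benchmarks a small portfolio of algorithms on two test functions and keeps the best performer for each. The portfolio (two local SciPy methods and differential evolution) and the test functions are illustrative choices, not part of the project itself.

```python
# Minimal sketch: compare a few algorithms on test instances,
# select the most efficient one per instance.
import numpy as np
from scipy.optimize import minimize, differential_evolution

def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def run_local(method, fun, dim=5, restarts=10, seed=0):
    """Best value found by a local method from several random starts."""
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(restarts):
        x0 = rng.uniform(-5, 5, dim)
        best = min(best, minimize(fun, x0, method=method).fun)
    return best

def run_de(fun, dim=5, seed=0):
    """Best value found by differential evolution (a global method)."""
    return differential_evolution(fun, [(-5, 5)] * dim, seed=seed).fun

for fun in (sphere, rastrigin):
    scores = {
        "Nelder-Mead": run_local("Nelder-Mead", fun),
        "BFGS": run_local("BFGS", fun),
        "DE": run_de(fun),
    }
    winner = min(scores, key=scores.get)
    print(f"{fun.__name__}: best algorithm = {winner}, scores = {scores}")
```

In a real comparison the algorithms would be given equal evaluation budgets and several independent runs each; a single fixed-seed run is used here only to keep the example short.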

The choice of an optimization algorithm is facilitated by the use of test functions. These can be evaluated cheaply, are well described by common characteristics (e.g., unimodality, multimodality, separability), and their optimal solutions are known. If the characteristics of a problem instance are known, the most promising optimization algorithm can be selected based on the results obtained on test functions with similar characteristics.
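The following toy sketch illustrates this idea. The feature (a crude multimodality estimate obtained by clustering the end points of several local searches), the threshold, and the decision rule are hypothetical assumptions made for illustration, not the project's actual method.

```python
# Toy sketch of feature-based algorithm selection (hypothetical rule).
import numpy as np
from scipy.optimize import minimize

def estimate_num_optima(fun, dim, bounds, starts=20, tol=1e-2, seed=0):
    """Cluster the end points of several local searches; the number of
    distinct clusters is a rough proxy for multimodality."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    optima = []
    for _ in range(starts):
        x0 = rng.uniform(lo, hi, dim)
        x_star = minimize(fun, x0, method="Nelder-Mead").x
        if not any(np.linalg.norm(x_star - o) < tol for o in optima):
            optima.append(x_star)
    return len(optima)

def select_algorithm(fun, dim=2, bounds=(-5, 5)):
    k = estimate_num_optima(fun, dim, bounds)
    # Hypothetical rule: many distinct optima -> prefer a global method.
    return "differential_evolution" if k > 3 else "BFGS"

def rastrigin(x):
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

print(select_algorithm(rastrigin))  # multimodal -> global method
```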

Once the optimization algorithm is chosen, indirect tuning methods are usually used (e.g., running the algorithm multiple times with different hyper-parameter values in order to find the best settings). If the computer simulation of candidate solutions is not time-consuming, this tuning can be performed efficiently; otherwise, it is computationally expensive. Optimization algorithms that use direct (on-the-fly) tuning, i.e., that change the values of their hyper-parameters during the optimization itself (according to the quality of the solutions found so far), do not require this step. However, direct parameter control is a demanding task in itself and may increase the execution time of the optimization algorithm without any guarantee of finding a better solution.
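A minimal sketch of such indirect tuning follows, assuming a simple random-search tuner over the hyper-parameters of SciPy's differential evolution; the parameter ranges and the tuning budget are illustrative assumptions.

```python
# Minimal sketch of indirect (offline) tuning: re-run the algorithm with
# randomly sampled hyper-parameter values and keep the best configuration.
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    return float(10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

bounds = [(-5.12, 5.12)] * 5
rng = np.random.default_rng(42)

best_cfg, best_val = None, np.inf
for _ in range(10):  # tuning budget: 10 full runs of the algorithm
    cfg = {
        "mutation": rng.uniform(0.3, 1.5),       # illustrative range
        "recombination": rng.uniform(0.1, 1.0),  # illustrative range
        "popsize": int(rng.integers(10, 40)),
    }
    res = differential_evolution(rastrigin, bounds, seed=0, **cfg)
    if res.fun < best_val:
        best_cfg, best_val = cfg, res.fun

print("best configuration:", best_cfg, "value:", best_val)
```

As an aside, SciPy's differential evolution also offers a simple built-in form of on-the-fly variation: passing the mutation parameter as a (min, max) tuple enables dithering, which re-samples the mutation constant in that range at every generation.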