
Optimizing a 7-parameter black-box function #282

Closed
revanth-s opened this issue Nov 19, 2024 · 4 comments


@revanth-s

Hi,

I am working on optimizing a 7-parameter black-box function and have some questions that I would really appreciate if you could help me with. I have listed them below:

  1. Parameter Scaling

    • My parameters have different upper and lower bounds, and initially, some of them were exponential. I converted the exponential parameters to linear by taking the logarithm and passed them to CMA-ES.
      log_parameter_bounds = [
          (-15, -10),
          (-17, -10),
          (2, 3.5),
          (0, 5),
          (1.5, 3),
          (-12, -6),
          (-13, -7),
      ]
    • Is it generally recommended to scale these parameters before optimization, for example, using StandardScaler()?
  2. Population Size

    • After browsing through the documentation and reading papers, I decided to use a population size (popsize) greater than the number of parameters, so I chose popsize = 8.
    • What should the ideal value of popsize be?
  3. Parallel Population Search

    • From my understanding, each population member searches over a different area (independently of the others). Can each of these searches be performed in a separate thread, in parallel, using Python's multiprocessing module?
  4. Relationship between popsize and sigma0

    • I have noticed that increasing popsize also requires me to lower sigma0 for my 7-parameter function to avoid crashing.
    • Is there a relationship between popsize and sigma0?
  5. Finding Multiple Local Minima

    • My 7-parameter function has multiple local minima, and I want to find as many of these minima as possible.
    • Therefore, I want to restart CMA every time the algorithm is stuck on a minimum, but I have been finding it hard to find the right termination conditions to use to stop and restart the algorithm.
    • I tried using tolupsigma, but it never terminated the optimization except when set to 1, in which case the algorithm terminated within 2-3 iterations.
    • Currently, I am using tolstagnation = 2 and still facing the same problem.
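For what it's worth, the log-transform described in point 1 can be sketched as a thin wrapper around the objective. Here `black_box` is a stand-in for the real function, and the assumption that exactly the negative-bounded parameters are the log10-transformed ones is mine, for illustration only:

```python
import numpy as np

# Stand-in for the real 7-parameter black-box function.
def black_box(params):
    return float(np.sum(np.asarray(params) ** 2))

log_parameter_bounds = [(-15, -10), (-17, -10), (2, 3.5), (0, 5),
                        (1.5, 3), (-12, -6), (-13, -7)]
LOG_SCALE = {0, 1, 5, 6}  # assumed indices of the originally exponential parameters

def objective(x):
    """Undo the log10 transform before evaluating the black box."""
    params = [10.0 ** v if i in LOG_SCALE else v for i, v in enumerate(x)]
    return black_box(params)
```

CMA-ES then searches directly in the log-scale coordinates, which keeps parameters that span many orders of magnitude well-conditioned.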

Thank you in advance for your assistance.

@LazyLysistrata

I'll answer a couple of your questions from my own experience; if I am mistaken, the BBOBies can hopefully correct my mistakes and false assumptions! :)

Scaling the parameter limits means that all parameters will, in a sense, be treated more equally by CMA-ES.
For example, you could rewrite your objective function so that all parameters lie between 0 and 1 inclusive.
(I prefer -5 to +5 for all parameters in my programs because I am more comfortable when the initial sigma is 1.0 or 2.0, instead of smaller values.)
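As a sketch of that idea (names illustrative), CMA-ES can search in the unit box [0, 1]^7 while a wrapper maps each point back to the bounds from the question before evaluating:

```python
import numpy as np

# Bounds from the question, one (lower, upper) pair per parameter.
bounds = np.array([(-15, -10), (-17, -10), (2, 3.5), (0, 5),
                   (1.5, 3), (-12, -6), (-13, -7)], dtype=float)
lower, upper = bounds[:, 0], bounds[:, 1]

def to_true_scale(z):
    """Map a point from the unit box [0, 1]^n back to the true bounds."""
    return lower + np.asarray(z, dtype=float) * (upper - lower)
```

With this mapping, a single scalar sigma0 (e.g. 0.3 in unit coordinates) is meaningful for all parameters at once.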

One termination condition you should consider is a small minimum value for sigma, e.g. 10^-12.
If sigma is less than that, it is an indication that you have found a local minimum and should restart at some other (probably random) location.

If sigma exceeds some large maximum value that you have chosen, that could be an indication that CMA-ES is producing a sequence of covariance matrices that are not converging, or are unstable in some sense. You should probably also restart in that case.
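That restart heuristic can be illustrated with a self-contained toy (1+1)-ES using the 1/5th-success rule; this is not pycma itself, just a minimal sketch of restarting whenever sigma leaves a chosen interval (the objective and all thresholds are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Placeholder objective; swap in the real 7-parameter black box.
    return float(np.sum(x ** 2))

def run_with_restarts(f, dim=7, sigma0=2.0, sigma_min=1e-12, sigma_max=1e6,
                      max_evals=20000):
    """Toy (1+1)-ES with 1/5th-success-rule step-size adaptation.

    Restarts from a fresh random point whenever sigma leaves
    [sigma_min, sigma_max]; the best point of each run is kept
    as a candidate local minimum.
    """
    minima, evals = [], 0
    while evals < max_evals:
        x = rng.uniform(-5.0, 5.0, dim)          # fresh random start
        fx, sigma = f(x), sigma0
        while sigma_min < sigma < sigma_max and evals < max_evals:
            y = x + sigma * rng.standard_normal(dim)
            fy = f(y)
            evals += 1
            if fy <= fx:                          # success: accept, grow sigma
                x, fx, sigma = y, fy, sigma * 1.5
            else:                                 # failure: shrink sigma
                sigma *= 1.5 ** -0.25
        minima.append((fx, x))
    return minima

candidates = run_with_restarts(sphere)
best = min(fx for fx, _ in candidates)
```

On a multimodal function, the list of per-run best points is what you would then cluster or inspect to enumerate distinct local minima.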

(I don't actually use pycma, and I use a type of matrix adaptation that doesn't need the covariance matrix, so I apologize if my advice and terminology are incorrect.)

@nikohansen
Contributor

nikohansen commented Nov 19, 2024

re. 1: maybe have a look at these practical hints
re. 2: if you don't know, use the default and increase if there is time left (IPOP-CMA-ES, so to speak).
re. 3: not sure about the correctness of the statement, but this may be related: #276
re. 4: maybe have a look again at the practical hints, and Fig.1 in this reference suggests that this can also very much depend on the function to be optimized
re. 5: what about the defaults? Otherwise, the 'tol*' options are the go-to options to consider, see e.g. here
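On re. 3, one clarification may help: the population members are not independent searches; each generation they are all sampled from one shared distribution. What can be parallelized is the evaluation of the candidates within a generation. A sketch of that evaluation step (the objective is a placeholder; `ask`/`tell` refer to pycma's interface but are not called here):

```python
from multiprocessing import Pool

import numpy as np

def expensive_objective(x):
    # Placeholder for the real black-box function. It must be defined at
    # module top level so that multiprocessing can pickle it.
    return float(np.sum(np.asarray(x) ** 2))

def evaluate_generation(candidates, workers=4):
    """Evaluate one generation's candidates in parallel.

    With pycma this sits between es.ask(), which samples the candidates
    from the single shared distribution, and es.tell().
    """
    with Pool(workers) as pool:
        return pool.map(expensive_objective, candidates)

rng = np.random.default_rng(1)
generation = [rng.uniform(-5.0, 5.0, 7) for _ in range(8)]
fitnesses = evaluate_generation(generation)
```

This is worthwhile when a single function evaluation is much more expensive than the process-communication overhead.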

@revanth-s
Author

Thank you for your reply.

Questions 1 to 4 are answered satisfactorily for me. However, I believe I can provide more information regarding question 5.

When I utilized the default values for termination, the algorithm terminated with the condition noeffectaxis=None after approximately 1000 evaluations (125 iterations for a population size of 8). I am seeking a way for the algorithm to terminate much earlier. My objective is to run a maximum of 2400 evaluations.

@nikohansen
Contributor

You may want to use a larger 'tolx' or 'tolfun' (tolx is in essence the effective sigma, as also suggested above). Less flexibly, you can set a maximum number of evaluations or iterations, or use 'ftarget' to terminate when an f-value below ftarget is reached.

import cma

cma.CMAOptions('max')
{'maxfevals': 'inf  #v maximum number of function evaluations',
 'maxiter': '100 + 150 * (N+3)**2 // popsize**0.5  #v maximum number of iterations',
 'maxstd': 'None  #v maximal std (scalar or vector) in any coordinate direction',
 'maxstd_boundrange': '1/3  # maximal std relative to bound_range per coordinate, overruled by maxstd',
 'tolupsigma': '1e20  #v sigma/sigma0 > tolupsigma * max(eivenvals(C)**0.5) indicates "creeping behavior" with usually minor improvements',
 'verbose': '3  #v verbosity e.g. of initial/final message, -1 is very quiet, -9 maximally quiet, may not be fully implemented'}
