How to efficiently select neighbour in 1-dimensional and n-dimensional space for Simulated Annealing
I would like to use Simulated Annealing to find the local minimum of a single-variable polynomial function within some predefined interval. I would also like to try to find the global minimum of a quadratic function.

A derivative-free algorithm such as this is not the best way to tackle the problem, so this is only for study purposes.

While the algorithm itself is pretty straightforward, I am not sure how to efficiently select a neighbor in single- or n-dimensional space.

Let's say that I am looking for the local minimum of the function 2*x^3 + x + 1 over the interval [-0.5, 30], and assume that the interval is reduced to tenths of each number, e.g. {1.1, 1.2, 1.3, ..., 29.9, 30}.

What I would like to achieve is a balance between random walk and speed of convergence from the starting point towards points with lower energy.

If I simply select a random number from the given interval every time, then there is no random walk and the algorithm might circle around. If, on the contrary, the next point is selected by simply adding or subtracting 0.1 with equal probability, then the algorithm might turn into an exhaustive search, depending on the starting point.

How should I efficiently balance Simulated Annealing neighbor search in single-dimensional and n-dimensional space?

Varga answered 11/6, 2015 at 16:31 Comment(0)

So you are trying to find an n-dimensional point P' that is "randomly" near another n-dimensional point P; for example, at distance T. (Since this is simulated annealing, I assume that you will be decrementing T once in a while).

This could work:

double[] displacement(double t, int dimension, Random r) {
    double[] d = new double[dimension];
    // each coordinate is an independent draw from N(0, t^2)
    for (int i = 0; i < dimension; i++) d[i] = r.nextGaussian() * t;
    return d;
}

The output is randomly distributed in all directions and centred on the origin (notice that r.nextDouble() would favour 45º angles and be centred at 0.5). You can vary the displacement by increasing t as needed; per coordinate, about 95% of draws will land within 2*t of the origin.
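As a quick sanity check of that "95% within 2*t" claim (per coordinate), a small Monte Carlo estimate can be run; the sample size and seed here are arbitrary:

```java
import java.util.Random;

public class GaussianCheck {
    public static void main(String[] args) {
        Random r = new Random(7);      // fixed seed for reproducibility
        double t = 0.5;                // any positive scale works the same way
        int n = 100_000, within = 0;
        for (int i = 0; i < n; i++) {
            // one coordinate of the displacement: N(0, t^2)
            if (Math.abs(r.nextGaussian() * t) <= 2 * t) within++;
        }
        System.out.printf("fraction within 2t: %.3f%n", (double) within / n);
    }
}
```

The printed fraction should be close to 0.954, the mass of a normal distribution within two standard deviations.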

EDIT:

To generate a displaced point near a given one, you could modify it as

double[] displaced(double t, double[] p, Random r) {
    double[] d = new double[p.length];
    // same Gaussian displacement as before, but centred on the point p
    for (int i = 0; i < p.length; i++) d[i] = p[i] + r.nextGaussian() * t;
    return d;
}

You should use the same r for all calls: a fresh new Random() on each call is seeded from the clock, so rapid successive calls can be seeded identically and produce the same displacements over and over.
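For the asker's 1-D example, displaced() might be wired into a complete annealing loop roughly like this; the starting point, cooling schedule, clamping to the interval, and Metropolis acceptance rule are my own illustrative choices, not part of the answer above:

```java
import java.util.Random;

public class Anneal {
    // Gaussian neighbour centred on p, scale t (as in the answer above)
    static double[] displaced(double t, double[] p, Random r) {
        double[] d = new double[p.length];
        for (int i = 0; i < p.length; i++) d[i] = p[i] + r.nextGaussian() * t;
        return d;
    }

    // the asker's objective: 2*x^3 + x + 1
    static double f(double[] x) { return 2 * x[0] * x[0] * x[0] + x[0] + 1; }

    public static void main(String[] args) {
        Random r = new Random(42);           // one shared Random for every call
        double[] x = {15.0};                 // arbitrary start inside [-0.5, 30]
        for (double t = 10.0; t > 1e-4; t *= 0.99) {      // geometric cooling
            double[] y = displaced(t, x, r);
            y[0] = Math.max(-0.5, Math.min(30.0, y[0]));  // clamp to interval
            double delta = f(y) - f(x);
            // Metropolis rule: always accept downhill, sometimes accept uphill
            if (delta < 0 || r.nextDouble() < Math.exp(-delta / t)) x = y;
        }
        System.out.printf("x = %.4f, f(x) = %.4f%n", x[0], f(x));
    }
}
```

Because the temperature t doubles as the step scale, the search naturally narrows as it cools, which is exactly the behaviour discussed in the comments below. (On this particular function the derivative 6*x^2 + 1 is always positive, so the minimum over the interval sits at the boundary x = -0.5.)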

Doorstep answered 11/6, 2015 at 18:12 Comment(6)
Please, a bit of clarification. Is t = T? Should the output not be centered around the current point? I don't see how the algorithm takes the current point into account. Or should I simply apply this displacement to the current point outside this method, using its return value? Thank youVarga
See edit regarding the "starting point" (p). Yes, t would be the temperature; variables are usually lower-case in Java.Doorstep
Thanks for the extra tip about the Random instance.Varga
Temperature and point distance should be the same variable then? That would naturally progress to a finer search as the temperature gets lower.Varga
Yes, that is the idea.Doorstep
Thank you for your help. Using a normal distribution with the temperature as the distance is the best suggestion I have found so far.Varga

In "Numerical Recipes in C++" there is a chapter titled "Continuous Minimization by Simulated Annealing". In it, the authors write:

A generator of random changes is inefficient if, when local downhill moves exist, it nevertheless almost always proposes an uphill move. A good generator, we think, should not become inefficient in narrow valleys; nor should it become more and more inefficient as convergence to a minimum is approached.

They then proceed to discuss a "downhill simplex method".

Parson answered 5/8, 2018 at 21:35 Comment(0)