Can anyone provide some pseudo code for a roulette selection function? How would I implement this:
I don't really understand how to read this math notation. I never took any probability or statistics.
It's been a few years since I've done this myself; however, the following pseudocode was easy enough to find on Google.
for all members of population
    sum += fitness of this individual
end for

for all members of population
    probability = sum of probabilities + (fitness / sum)
    sum of probabilities += probability
end for

loop until new population is full
    do this twice
        number = Random between 0 and 1
        for all members of population
            if number > probability but less than next probability
                then you have been selected
        end for
    end
    create offspring
end loop
The site where this came from can be found here if you need further details.
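If it helps, here is a rough Python translation of the selection step of that pseudocode (a sketch only; population and fitness are placeholder names, not anything from the linked site):

import random

def roulette_select_one(population, fitness):
    """Pick one individual with probability proportional to its fitness."""
    # Sum the fitnesses of all members of the population.
    total = sum(fitness(ind) for ind in population)
    # Spin the wheel: pick a random point in [0, total).
    pick = random.uniform(0, total)
    # Walk through the population until the running sum passes that point.
    running = 0.0
    for ind in population:
        running += fitness(ind)
        if pick < running:
            return ind
    return population[-1]  # guard against floating-point round-off

Calling it twice gives you the two parents that the pseudocode's "do this twice" step refers to.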
fitness = 1 / (x + y). – Cule

probability could well be greater than 1 and will therefore never be selected. number should be number = (Random between 0 and 1) * sum of probabilities, shouldn't it? – Feltie

probability serves as a prefix sum, such that the last member of the population has a probability of 1. – Burr

Lots of correct solutions already, but I think this code is clearer.
import random

def select(fs):
    p = random.uniform(0, sum(fs))
    for i, f in enumerate(fs):
        if p <= f:
            break
        p -= f
    return i
In addition, if you accumulate the fs, you can produce a more efficient solution:

import bisect

cfs = [sum(fs[:i + 1]) for i in range(len(fs))]

def select(cfs):
    return bisect.bisect_left(cfs, random.uniform(0, cfs[-1]))

This is both faster and more concise. The C++ standard library has a similar bisection algorithm (std::lower_bound) if that's the language you're using.
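For example, here is a quick check (with made-up fitness values) that the cumulative/bisect version draws indices with roughly the expected frequencies:

import bisect
import random
from collections import Counter

fs = [1.0, 2.0, 3.0, 4.0]                        # hypothetical fitness values
cfs = [sum(fs[:i + 1]) for i in range(len(fs))]  # cumulative sums

def select(cfs):
    return bisect.bisect_left(cfs, random.uniform(0, cfs[-1]))

counts = Counter(select(cfs) for _ in range(100000))
print({i: counts[i] / 100000 for i in sorted(counts)})
# roughly {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}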
The pseudocode posted contained some unclear elements, and it adds the complexity of generating offspring instead of performing pure selection. Here is a simple Python implementation of that pseudocode:
import random

def roulette_select(population, fitnesses, num):
    """ Roulette selection, implemented according to:
        <https://mcmap.net/q/341567/-roulette-selection-in-genetic-algorithms-selection-in-genetic-algorithms/177278#177278>
    """
    total_fitness = float(sum(fitnesses))
    rel_fitness = [f / total_fitness for f in fitnesses]
    # Generate probability intervals for each individual
    probs = [sum(rel_fitness[:i + 1]) for i in range(len(rel_fitness))]
    # Draw new population
    new_population = []
    for n in range(num):
        r = random.random()
        for (i, individual) in enumerate(population):
            if r <= probs[i]:
                new_population.append(individual)
                break
    return new_population
This is called roulette-wheel selection via stochastic acceptance:
/// \param[in] f_max maximum fitness of the population
///
/// \return index of the selected individual
///
/// \note Assuming positive fitness. Greater is better.
unsigned rw_selection(double f_max)
{
for (;;)
{
// Select randomly one of the individuals
unsigned i(random_individual());
// The selection is accepted with probability fitness(i) / f_max
if (uniform_random_01() < fitness(i) / f_max)
return i;
}
}
The average number of attempts needed for a single selection is:
τ = f_max / f_avg

τ doesn't depend explicitly on the number of individuals in the population (N), but the ratio f_max / f_avg can change with N.

However, in many applications (where the fitness remains bounded and the average fitness doesn't diminish to 0 as N increases), τ doesn't grow unboundedly with N, and so the typical complexity of this algorithm is O(1) (roulette wheel selection implemented with search algorithms has O(N) or O(log N) complexity).
The probability distribution of this procedure is indeed the same as in the classical roulette-wheel selection.
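For reference, here is a short Python sketch of the same stochastic-acceptance idea (the function and list names are my own, not from the C++ above):

import random

def rw_selection_stochastic(fitnesses):
    """Roulette-wheel selection via stochastic acceptance.
    Assumes all fitnesses are positive and greater is better."""
    f_max = max(fitnesses)
    while True:
        # Select one individual uniformly at random...
        i = random.randrange(len(fitnesses))
        # ...and accept it with probability fitness(i) / f_max.
        if random.random() < fitnesses[i] / f_max:
            return i

On average this takes about f_max / f_avg attempts per selection, which is the τ given above.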
For further details see:
Here is some code in C:

// Find the sum of fitnesses. The function fitness(i) should
// return the fitness value of member i.
float sumFitness = 0.0f;
for (int i = 0; i < nmembers; i++)
    sumFitness += fitness(i);

// Get a floating point number in the interval 0.0 ... sumFitness
float randomNumber = ((float)(rand() % 10000) / 9999.0f) * sumFitness;

// Translate this number to the corresponding member
int memberID = -1;
float partialSum = 0.0f;
do
{
    memberID++;
    partialSum += fitness(memberID);
} while (randomNumber > partialSum);

// We have just found a member of the population using the roulette algorithm.
// Its index is stored in the "memberID" variable.
// Repeat this procedure as many times as needed to draw more random members.
From the above answer, I got the following, which was clearer to me than the answer itself.
To give an example:
Random(sum) :: Random(12). Iterating through the population, we check the following: random < sum.

Let us choose 7 as the random number.
Index | Fitness | Sum | 7 < Sum
0 | 2 | 2 | false
1 | 3 | 5 | false
2 | 1 | 6 | false
3 | 4 | 10 | true
4 | 2 | 12 | ...
Through this example, the most fit member (index 3) has the highest chance of being chosen (4/12 ≈ 33%), as the random number only has to land anywhere within 6..10 for it to be selected.
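A small Python sketch of that walk-through, using the fitness values from the table (my own code, just to illustrate the idea):

import random

fitness = [2, 3, 1, 4, 2]                 # the table above; total = 12

def pick(fitness):
    r = random.uniform(0, sum(fitness))   # e.g. 7 in the example
    running = 0
    for index, f in enumerate(fitness):
        running += f
        if r < running:                   # r landed inside this member's slice
            return index
    return len(fitness) - 1

# Index 3 should come up roughly 4/12 of the time (about 33%).
counts = [0] * len(fitness)
for _ in range(120000):
    counts[pick(fitness)] += 1
print([round(c / 120000, 3) for c in counts])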
double sum = 0;
for (unsigned int i = 0; i < sets.size(); i++) {
    sum += sets[i].eval();
}

// Random point on the wheel, in the range [0, sum]
double r = ((double)rand() / (double)RAND_MAX) * sum;

sum = 0;
for (unsigned int i = 0; i < sets.size(); i++) {
    sum += sets[i].eval();
    if (r < sum) {
        // breed i
        break;
    }
}
Prof. Thrun of the Stanford AI lab also presented a fast(er?) resampling algorithm in Python during his CS373 course on Udacity. A Google search led to the following link:

http://www.udacity-forums.com/cs373/questions/20194/fast-resampling-algorithm

Hope this helps.
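For reference, the "resampling wheel" presented in that class is usually written roughly as follows (reconstructed from memory, so treat it as a sketch rather than the course's exact code):

import random

def resampling_wheel(particles, weights):
    """Draw len(particles) samples with probability proportional to weight."""
    n = len(particles)
    resampled = []
    index = random.randrange(n)
    beta = 0.0
    max_w = max(weights)
    for _ in range(n):
        # Jump forward by a random amount no larger than twice the maximum weight...
        beta += random.uniform(0, 2.0 * max_w)
        # ...then walk around the wheel until beta falls inside a slice.
        while beta > weights[index]:
            beta -= weights[index]
            index = (index + 1) % n
        resampled.append(particles[index])
    return resampled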
Here's a compact Java implementation of roulette selection that I wrote recently; hopefully it's of use.
public static gene rouletteSelection()
{
float totalScore = 0;
float runningScore = 0;
for (gene g : genes)
{
totalScore += g.score;
}
float rnd = (float) (Math.random() * totalScore);
for (gene g : genes)
{
if ( rnd>=runningScore &&
rnd<=runningScore+g.score)
{
return g;
}
runningScore+=g.score;
}
return null;
}
Roulette Wheel Selection in MATLAB:
TotalFitness=sum(Fitness);
ProbSelection=zeros(PopLength,1);
CumProb=zeros(PopLength,1);
for i=1:PopLength
ProbSelection(i)=Fitness(i)/TotalFitness;
if i==1
CumProb(i)=ProbSelection(i);
else
CumProb(i)=CumProb(i-1)+ProbSelection(i);
end
end
SelectInd=rand(PopLength,1);
for i=1:PopLength
flag=0;
for j=1:PopLength
if(CumProb(j)<SelectInd(i) && CumProb(j+1)>=SelectInd(i))
SelectedPop(i,1:IndLength)=CurrentPop(j+1,1:IndLength);
flag=1;
break;
end
end
if(flag==0)
SelectedPop(i,1:IndLength)=CurrentPop(1,1:IndLength);
end
end
Okay, so there are two ways to implement roulette wheel selection: the usual algorithm and the stochastic acceptance one.
Usual algorithm:
# there will be some amount of repeating organisms here.
mating_pool = []
all_organisms_in_population.each do |organism|
organism.fitness.times { mating_pool.push(organism) }
end
# [very_fit_organism, very_fit_organism, very_fit_organism, not_so_fit_organism]
return mating_pool.sample #=> random, likely fit, parent!
Stochastic Acceptance algorithm:
max_fitness_in_population = all_organisms_in_population.max_by(&:fitness).fitness
loop do
random_parent = all_organisms_in_population.sample
probability = random_parent.fitness.to_f / max_fitness_in_population * 100
# if random_parent's fitness is 90%,
# it's very likely that rand(100) is smaller than it.
if rand(100) < probability
return random_parent #=> random, likely fit, parent!
else
next #=> or let's keep on searching for one.
end
end
You can choose either; they return identical results (the same selection probabilities). A small comparison sketch follows the links below.

http://natureofcode.com/book/chapter-9-the-evolution-of-code - a beginner-friendly and clear chapter on genetic algorithms. It explains roulette wheel selection as a bucket of wooden letters (the more A's you put in, the greater the chance of picking an A; this is the usual algorithm).

https://en.wikipedia.org/wiki/Fitness_proportionate_selection - describes the stochastic acceptance algorithm.
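To sanity-check that the two approaches agree, here is a small Python comparison (not the Ruby above; the fitness values are made up and assumed to be integers so the "usual" mating pool can be built):

import random
from collections import Counter

fitness = [1, 2, 3, 4]
n = 100000

# Usual algorithm: a mating pool with fitness-many copies of each index.
pool = [i for i, f in enumerate(fitness) for _ in range(f)]
usual = Counter(random.choice(pool) for _ in range(n))

# Stochastic acceptance: pick uniformly, accept with probability f / f_max.
def accept(fitness):
    f_max = max(fitness)
    while True:
        i = random.randrange(len(fitness))
        if random.random() < fitness[i] / f_max:
            return i

stochastic = Counter(accept(fitness) for _ in range(n))
print({i: round(usual[i] / n, 3) for i in sorted(usual)})
print({i: round(stochastic[i] / n, 3) for i in sorted(stochastic)})
# Both should be close to {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}.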
Based on my research, here is another implementation in C# if there is a need for it:

// Individuals with higher fitness are selected with a larger probability.
// Returns the index of the selected individual.
private int RouletteSelection()
{
double randomFitness = m_random.NextDouble() * m_totalFitness;
int idx = -1;
int mid;
int first = 0;
int last = m_populationSize -1;
mid = (last - first)/2;
// ArrayList's BinarySearch is for exact values only
// so do this by hand.
while (idx == -1 && first <= last)
{
if (randomFitness < (double)m_fitnessTable[mid])
{
last = mid;
}
else if (randomFitness > (double)m_fitnessTable[mid])
{
first = mid;
}
mid = (first + last)/2;
// lies between i and i+1
if ((last - first) == 1)
idx = last;
}
return idx;
}
This Swift 4 array extension implements weighted random selection, a.k.a. roulette selection, from its elements:
public extension Array where Element == Double {
/// Consider the elements as weight values and return a weighted random selection by index.
/// a.k.a Roulette wheel selection.
func weightedRandomIndex() -> Int {
var selected: Int = 0
var total: Double = self[0]
for i in 1..<self.count { // start at 1
total += self[i]
if( Double.random(in: 0...1) <= (self[i] / total)) { selected = i }
}
return selected
}
}
For example, given the two-element array:
[0.9, 0.1]
weightedRandomIndex()
will return zero 90% of the time and one 10% of the time.
Here is a more complete test:
let weights = [0.1, 0.7, 0.1, 0.1]
var results = [Int:Int]()
let n = 100000
for _ in 0..<n {
let index = weights.weightedRandomIndex()
results[index] = results[index, default:0] + 1
}
for (key,val) in results.sorted(by: { a,b in weights[a.key] < weights[b.key] }) {
print(weights[key], Double(val)/Double(n))
}
output:
0.1 0.09906
0.1 0.10126
0.1 0.09876
0.7 0.70092
This answer is basically the same as Andrew Mao's answer here: https://mcmap.net/q/341388/-roulette-wheel-selection-algorithm-duplicate
Here is the code in Python. It can also handle negative fitness values. Note that, as written, smaller fitness values receive a larger selection weight.
from numpy import min, sum, ptp, array
from numpy.random import uniform
list_fitness1 = array([-12, -45, 0, 72.1, -32.3])
list_fitness2 = array([0.5, 6.32, 988.2, 1.23])
def get_index_roulette_wheel_selection(list_fitness=None):
    """ It can handle negative fitness also. Make sure list_fitness is a 1D numpy array """
    # Scale fitness into [0, 1], then invert it so that smaller fitness values get larger weights.
    scaled_fitness = (list_fitness - min(list_fitness)) / ptp(list_fitness)
    minimized_fitness = 1.0 - scaled_fitness
    total_sum = sum(minimized_fitness)
    # Walk the cumulative weights until the random threshold is exceeded.
    r = uniform(low=0, high=total_sum)
    for idx, f in enumerate(minimized_fitness):
        r = r + f
        if r > total_sum:
            return idx
get_index_roulette_wheel_selection(list_fitness1)
get_index_roulette_wheel_selection(list_fitness2)
I wrote a version in C# and am really looking for confirmation that it is indeed correct:
(roulette_selector is a random number generator; NextDouble() returns a value in the range 0.0 to 1.0)
private Individual Select_Roulette(double sum_fitness)
{
Individual ret = new Individual();
bool loop = true;
while (loop)
{
//this will give us a double within the range 0.0 to total fitness
double slice = roulette_selector.NextDouble() * sum_fitness;
double curFitness = 0.0;
foreach (Individual ind in _generation)
{
curFitness += ind.Fitness;
if (curFitness >= slice)
{
loop = false;
ret = ind;
break;
}
}
}
return ret;
}