In situations like these, you don't need to invoke the random number generator more than once. All you need is a table of cumulative probabilities:
double c[k]   // the probability that X <= k  (k = 0, 1, ...)
Then generate a random number 0 <= r < 1, and take the first integer X such that c[X] > r. You can find this X with a binary search.
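A minimal sketch of that lookup, assuming a non-decreasing cumulative table (the helper name `first_greater` is mine, not anything standard):

```c
#include <assert.h>
#include <stddef.h>

/* Return the smallest index X in [0, n) with c[X] > r.
   c[] is the cumulative table described above, assumed non-decreasing. */
static size_t first_greater(const double *c, size_t n, double r) {
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (c[mid] > r)
            hi = mid;        /* c[mid] > r: answer is mid or earlier */
        else
            lo = mid + 1;    /* c[mid] <= r: answer lies after mid */
    }
    return lo;
}
```

This runs in O(log n) rather than the O(n) of a linear scan, which matters when lambda (and hence the table) is large.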
To generate this table, we need the individual probabilities
p[k] = lambda^k / (k! e^lambda)   // the probability that X = k
If lambda is large, evaluating this directly becomes wildly inaccurate (lambda^k and k! overflow long before the quotient does), as you have found. But we can use a trick here: start at (or near) the largest value, with k = floor(lambda), and pretend for the moment that p[k] is equal to 1. Then calculate p[i] for i > k using the recurrence relation

p[i+1] = (p[i]*lambda) / (i+1)

and for i < k using

p[i-1] = (p[i]*i) / lambda

This ensures that the largest probabilities have the greatest possible precision.
Now just calculate c[i] using c[i+1] = c[i] + p[i+1], up to the point where c[i+1] is the same as c[i]. Then you can normalise the array by dividing by this limiting value c[i]; or you can leave the array as it is, and use a random number 0 <= r < c[i].
See: http://en.wikipedia.org/wiki/Inverse_transform_sampling