Algorithm to print out a shuffled list, in-place and with O(1) memory

After reading this question I started to wonder: is it possible to have a shuffling algorithm which does not modify or copy the original list?

To make it clear:

Imagine you are given a list of objects. The list size can be arbitrary, but assume it's pretty large (say, 10,000,000 items). You need to print out the items of the list in random order, and you need to do it as fast as possible. However, you should not:

  • Copy the original list, because it's very large and copying would waste a LOT of memory (probably hitting the limits of available RAM);
  • Modify the original list, because it's sorted in some way and some other part later on depends on it being sorted.
  • Create an index list, because, again, the list is very large and copying takes far too much time and memory. (Clarification: this means any other list with the same number of elements as the original list.)

Is this possible?

Added: More clarifications.

  1. I want the list to be shuffled in true random way with all permutations equally likely (of course, assuming we have a proper Rand() function to start with).
  2. Suggestions that I make a list of pointers, a list of indices, or any other list with the same number of elements as the original are explicitly deemed inefficient by the original question. You can create additional lists if you want, but they should be orders of magnitude smaller than the original list.
  3. The original list is like an array, and you can retrieve any item from it by its index in O(1). (So no doubly-linked list stuff, where you have to iterate through the list to get to your desired item.)

Added 2: OK, let's put it this way: You have a 1TB HDD filled with data items, each 512 bytes large (a single sector). You want to copy all this data to another 1TB HDD while shuffling all the items. You want to do this as fast as possible (single pass over data, etc). You have 512MB of RAM available, and don't count on swap. (This is a theoretical scenario; I don't have anything like this in practice. I just want to find the perfect algorithm.)

Comanchean answered 8/12, 2009 at 12:37 Comment(9)
I think that the answer to this may depend on the internal implementation of the list. Just to be clear, this would most likely be a doubly-linked list right?Earpiece
paxdiablo discusses Knuth's shuffle in this post #1859110 I think that's what you are after.Might
on a second thought, though it runs at O(n), you will need to create an int array of that length - But I think that's the best you can achieve - you somehow should keep track of selected items.Might
The answer is "no", but there's not enough room in this margin to explain why :-)Epidermis
Feel free to add a lengthy answer. :) I also think that the answer is "no", but I can't prove it, which is why I asked it on SO. :)Comanchean
@Vilx-, now you're just causing trouble :-) Go and buy a third 1TB drive and use that as a shuffle list or bit mask for the sectors :-)Epidermis
Oh, come on! I just want to find the Perfect Algorithm! :)Comanchean
If you are broke, you've got more important things to worry about than shuffling records. Go look for a job man :-)Thedathedric
Hahaha! :D +1 for that comment! :DComanchean

Here is a very simple proof that no PRNG scheme can work:

The PRNG idea has two phases: first, select a PRNG and its initial state; second, use the PRNG to shuffle the output. Well, there are N! possible permutations, so you need at least N! different possible states entering phase 2. This means that at the start of phase 2 you must have at least log2(N!) bits of state, which isn't allowed.

However this does not rule out schemes where the algorithm receives new random bits from the environment as it goes. There might be, say, a PRNG that reads its initial state lazily and yet is guaranteed not to repeat. Can we prove there isn't?

Suppose we do have a perfect shuffling algorithm. Imagine we start running it, and when it's halfway done, we put the computer to sleep. Now the full state of the program has been saved somewhere. Let S be the set of all possible states the program could be in at this halfway mark.

Since the algorithm is correct and guaranteed to terminate, there is a function f which, given the saved program state plus any long enough string of bits, produces a valid sequence of disk reads and writes completing the shuffle. The computer itself implements this function. But consider it as a mathematical function:

f : (S × bits) → sequence of reads and writes

Then, trivially, there exists a function g which, given only the saved program state, produces the set of disk locations yet to be read and written. (Simply pass some arbitrary string of bits to f, then look at the results.)

g : S → set of locations to read and write

The remaining bit of the proof is to show that the range of g contains at least C(N, N/2) different sets regardless of the choice of algorithm. If that's true, there must be at least that many elements of S, and so the state of the program must contain at least log2 C(N, N/2) bits at the halfway mark, in violation of the requirements.

I'm not sure how to prove that last bit, though, since either the set-of-locations-to-read or the set-of-locations-to-write can be low-entropy, depending on the algorithm. I suspect there's some obvious principle of information theory that can cut the knot. Marking this community wiki in the hopes someone will supply it.

Puck answered 8/12, 2009 at 12:37 Comment(9)
Wow, cool. Actually, I don't follow you through the second half, but it seems serious enough that I don't doubt you.Comanchean
It is true that if all permutations need to be as probable you can't use any standard, off-the-shelf PRNG because when N grows large, N! >> 2**K where K = size in bits of the PRNG's internal state. However, I think it's misguided to interpret "true random" from the original post in this strict fashion because it would make even a solution that could use an arbitrary amount of space very difficult. I thought the original question was more about space usage and "true random" meant "true pseudorandom" as would be standard in computer science.Petcock
Well, the asker said several times what he or she meant by "true random". And he or she check-marked this answer. So.Puck
@antti.huima: Also note that the second part of this answer tries to argue that the task as specified is impossible even if we take for granted some perfect source of randomness (i.e., "the algorithm receives new random bits from the environment as it goes").Puck
From a theoretical standpoint, I think a reasonable question might be: Suppose one is given a function which is assumed to take constant time, and which for any arbitrary pair of integers X and Y will yield a truly random integer equally distributed between 0 and (X-1) inclusive. What is the optimal time/space tradeoff curve to produce a perfect shuffle of N cards, if the maximum integer magnitude is a constant power of the number of cards? It could be done in O(1) space and O(N^2) time, by picking numbers from 0 to N-1, then 0 to N-2, etc. but increasing each number by the number of...Carney
...previous numbers that are less than or equal to it. It may be done in O(N) space and O(N) time by using existing shuffling methods. I would conjecture that O(N^0.5) space and O(N^(1.5)) time would probably not be too hard, but I wonder if there are any approaches that would be O(lg(N)) in space and O(NlgN) in time?Carney
I'm pretty sure this answer is missing the point of the question. Yes, no reasonable PRNG can make all possible outputs equally likely, but that also applies to the case where you're allowed to modify the input. It applies to most applications of a PRNG. We just don't care. While the question does say "with all permutations equally likely", it shouldn't. The real issues are making sure the output doesn't contain repetitions and making sure the output is sufficiently indistinguishable from random.Bonbon
@user2357112: Antti Huima posted a very similar comment back in 2009. I responded then. The question, after multiple updates, is excruciatingly clear about what is meant by “true random”. I will grant that maybe it’s a wrong question, but at the time, I was opposed to “maybe you meant this totally other question” answers.Puck
Do you need the complicated "sequences to read/write disk" stuff? Op's question asks to print out the numbers, so halfway through printing, if you break the RNG, you still may have any of n/2 numbers left to print, which is, as you correctly noted, at least C(n, n/2) possible behaviours which must follow from state.Yamada

Well, it depends a bit on what kind of randomness you expect from the shuffling, i.e. should all shufflings be equally probable, or can the distribution be skewed.

There are mathematical ways to produce "random-looking" permutations of N integers, so if P is such a permutation from 0..N-1 to 0..N-1, you can just iterate x from 0 to N-1 and output list item L(P(x)) instead of L(x), and you have obtained a shuffling. Such permutations can be obtained e.g. using modular arithmetic. For example, if N is prime, P(x) = (x * k) mod N is a permutation for any 0 < k < N (but maps 0 to 0). Similarly, for a prime N, P(x) = (x^3) mod N is a permutation provided gcd(3, N-1) = 1 (but maps 0 to 0 and 1 to 1). This solution can easily be expanded to non-prime N by selecting the least prime above N (call it M), permuting up to M, and discarding the permuted indices that fall outside 0..N-1.

It should be noted that modular exponentiation is the basis for many cryptographic algorithms (e.g. RSA, Diffie-Hellman) and is considered a strongly pseudorandom operation by the experts in the field.
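As a concrete illustration of the modular-multiplication idea, here is a minimal Java sketch (class and method names are mine, not from the answer): for prime N and any 0 < k < N, the map x → (x * k) mod N visits every index exactly once.

```java
import java.util.BitSet;

public class ModularShuffle {
    // For prime n and 0 < k < n, x -> (x * k) % n is a bijection on 0..n-1
    // (and it maps 0 to 0, as noted above).
    static int[] permute(int n, int k) {
        int[] out = new int[n];
        for (int x = 0; x < n; x++) {
            out[x] = (int) (((long) x * k) % n);   // long avoids overflow
        }
        return out;
    }

    public static void main(String[] args) {
        int n = 13, k = 5;                          // n prime, 0 < k < n
        BitSet seen = new BitSet(n);
        for (int v : permute(n, k)) seen.set(v);
        System.out.println(seen.cardinality());     // 13: every index hit once
    }
}
```

Printing L(P(x)) in this order needs only O(1) extra state (x, k, n), which is the whole attraction; but as the comments note, multiplication by a constant is far from a uniformly random permutation.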

Another easy way (not requiring prime numbers) is first to expand the domain so that instead of N you consider M where M is the least power of two above N. So e.g. if N=12 you set M=16. Then you use bijective bit operations, e.g.

P(x) = ((x ^ 0xf) ^ (x << 2) + 3) & 0xf

Then when you output your list, you iterate x from 0 to M-1 and output L(P(x)) only if P(x) is actually < N.
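A sketch of this power-of-two variant, using the exact P from above (evaluated with Java's operator precedence, where + binds tighter than ^, i.e. as ((x ^ 0xf) ^ ((x << 2) + 3)) & 0xf); the surrounding scaffolding is my own illustration:

```java
public class BitMixShuffle {
    // The answer's example bijection on 0..15 (M = 16).
    static int P(int x) {
        return ((x ^ 0xf) ^ (x << 2) + 3) & 0xf;
    }

    // Collect the indices 0..n-1 in the order given by P, skipping P(x) >= n;
    // printing L(index) at each step would print the list "shuffled".
    static int[] shuffledOrder(int n) {
        int m = 16;                         // least power of two >= n (n <= 16 here)
        int[] order = new int[n];
        int j = 0;
        for (int x = 0; x < m; x++) {
            int p = P(x);
            if (p < n) order[j++] = p;
        }
        return order;
    }

    public static void main(String[] args) {
        for (int i : shuffledOrder(12))     // N = 12, M = 16 as in the text
            System.out.print(i + " ");
        System.out.println();
    }
}
```

With n = 12 this emits each of 0..11 exactly once, in a fixed scrambled order determined by P.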

A "true, unbiased random" solution can be constructed by fixing a cryptographically strong block cipher (e.g. AES) and a random key (k) and then iterating the sequence

AES(k, 0), AES(k, 1), ...

and outputting the corresponding item from the sequence iff AES(k,i) < N. This can be done in constant space (the internal memory required by the cipher) and is indistinguishable from a random permutation (due to the cryptographic properties of the cipher) but is obviously very slow. In the case of AES, you would need to iterate until i = 2^128.
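The scheme can be sketched with the JDK's own AES (javax.crypto). The demo below only verifies the key property that makes it work, namely that AES under a fixed key never repeats a block, since actually filtering 128-bit outputs down to a realistic N would take essentially forever, as the answer says. The all-zero demo key and class name are my own choices.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.util.HashSet;

public class CipherShuffle {
    // Encrypt the 128-bit counter i. AES with a fixed key is a bijection on
    // 128-bit blocks, so distinct counters always give distinct outputs.
    static BigInteger aesBlock(Cipher aes, long i) throws Exception {
        byte[] block = new byte[16];
        for (int b = 0; b < 8; b++) block[15 - b] = (byte) (i >>> (8 * b));
        return new BigInteger(1, aes.doFinal(block));
    }

    public static void main(String[] args) throws Exception {
        Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(new byte[16], "AES")); // demo key
        HashSet<BigInteger> seen = new HashSet<>();
        for (long i = 0; i < 1000; i++) seen.add(aesBlock(aes, i));
        System.out.println(seen.size());    // 1000: no repeats, by construction
        // In the full scheme one would emit list item aesBlock(aes, i) whenever
        // it is < N, iterating i up to 2^128 - hence "obviously very slow".
    }
}
```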

Petcock answered 8/12, 2009 at 12:42 Comment(7)
Well, I'd like true unbiased random. I'm aware of the prime solution, but that's not really random.Comanchean
The problem with a strong PRNG is that you will get repeats. That requires a bitmap to prevent "picking" some element of the original list more than once.Thedathedric
Well, as I point out above, there are non-repeating sequences which as permutations are considered strongly random, e.g. modular exponentiation or modern block ciphers.Petcock
I think this answer is the closest you can get. Any truly random algorithm is going to need to keep track of which item has already been selected since it will, by definition, not be predictable. This means either modifying the original list or creating a separate data structure.Soma
What do you mean by "excepting randomness"?Mucronate
Love this answer. I wouldn't use these for anything cryptographic, but for games this idea will work a treat.Explicate
Correct me if I'm wrong, but I think the P(x)=(x^3) mod N is not a permutation. For instance, 1^3 mod 7 = 1 mod 7 = 1, but also 2^3 mod 7 = 8 mod 7 = 1. I think the answer was going for something like this, which is more complicated: #10055232Dao

You're not allowed to make a copy, modify it, or keep track of which elements you've visited? I'm gonna say it's not possible. Unless I'm misunderstanding your third criterion.

I take it to mean you're not allowed to, say, make an array of 10,000,000 corresponding booleans, set to true when you've printed the corresponding element. And you're not allowed to make a list of the 10,000,000 indices, shuffle the list, and print out the elements in that order.

Wheedle answered 8/12, 2009 at 12:42 Comment(1)
Yes, you're understanding me correctly. Well, you can keep track of which items you have visited, if you figure out a way to do it without making another list the same size as the input. If you make a list of size something like log(N), then I'd be satisfied.Comanchean

Those 10,000,000 items are only references (or pointers) to the actual items, so your list will not be that large: only ~40MB on a 32-bit architecture for all the references, plus the size of the list's internal variables. And if your items are smaller than a reference, you can simply copy the whole list.

Gresham answered 8/12, 2009 at 12:42 Comment(0)

It's not possible to do this with a truly random number generator since you either have to:

  • remember which numbers have already been chosen and skip them (which requires an O(n) list of booleans and progressively worsening run-times as you skip more and more numbers); or
  • reduce the pool after each selection (which requires either modifications to the original list or a separate O(n) list to modify).

Neither of those is a possibility in your question, so I'm going to have to say "no, you can't do it".

What I would tend to go for in this case is a bit mask of used values but not with skipping since, as mentioned, the run-times get worse as the used values accumulate.

A bit mask will be substantially better than the original list of 39Gb (10 million bits is only about 1.2M), many orders of magnitude less, as you requested, even though it's still O(n).

In order to get around the run-time problem, only generate one random number each time and, if the relevant "used" bit is already set, scan forward through the bit mask until you find one that's not set.

That means you won't be hanging around, desperate for the random number generator to give you a number that hasn't been used yet. The run times will only ever get as bad as the time taken to scan 1.2M of data.

Of course this means that the specific number chosen at any time is skewed based on the numbers that have already been chosen but, since those numbers were random anyway, the skewing is random (and if the numbers weren't truly random to begin with, then the skewing won't matter).

And you could even alternate the search direction (scanning up or down) if you want a bit more variety.
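A sketch of this bit-mask-and-scan approach (class and helper names mine; java.util.BitSet's nextClearBit does the forward scan). As conceded above, the order is skewed but repetition-free, and it needs only n bits of extra state:

```java
import java.util.BitSet;
import java.util.Random;

public class BitmaskShuffle {
    // Emit 0..n-1 in a skewed-random order using one random draw per item,
    // scanning forward (with wrap-around) past already-used indices.
    static int[] order(int n, Random rnd) {
        BitSet used = new BitSet(n);
        int[] out = new int[n];
        for (int k = 0; k < n; k++) {
            int i = rnd.nextInt(n);
            if (used.get(i)) {
                i = used.nextClearBit(i);             // scan up for a free slot...
                if (i >= n) i = used.nextClearBit(0); // ...wrapping to the bottom
            }
            used.set(i);
            out[k] = i;                               // would print list[i] here
        }
        return out;
    }

    public static void main(String[] args) {
        for (int i : order(10, new Random(42)))
            System.out.print(i + " ");
        System.out.println();
    }
}
```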

Bottom line: I don't believe what you're asking for is doable but keep in mind I've been wrong before as my wife will attest to, quickly and frequently :-) But, as with all things, there's usually ways to get around such issues.

Epidermis answered 8/12, 2009 at 13:23 Comment(3)
Well, I do agree about your point about the truly random number generator. I was thinking something about a weird PRNG function which would generate a non-repeating list of integers that can serve as indices into the original list. Well, non-repeating only as large as the size of the array. After that they start to repeat, naturally, perhaps even in the same sequence.Comanchean
Of course, every PRNG needs a seed, some original entropy to base on. In case I have 1,000,000,000 items there are 1,000,000,000! possible permutations which is a LOT. I don't even know how much, but the PRNG seed would need to have at least that many variations, or the distribution would be seriously skewed. Would a binary number of such size be smaller than a list of 32-bit integer indices?Comanchean
If yes, then this seed could be generated by the truly-random number generator, and then the PRNG would take care of the rest.Comanchean

It sounds impossible.

But 10,000,000 64-bit pointers is only about 76MB.

Fovea answered 8/12, 2009 at 12:45 Comment(3)
OK, I upped the stakes a bit. :)Comanchean
@Vilx and you only stated that items weight 1TB, not that there are 1T of pointers. It all depends on number of your items, not size of all those items.Gresham
In my example above there are 1G of pointers, far too much for 512MB of RAM anyway.Comanchean

A linear-feedback shift register can do pretty much what you want -- generate a list of numbers up to some limit, but in a (reasonably) random order. The patterns it produces are statistically similar to what you'd expect from true randomness, but it's not even close to cryptographically secure: the Berlekamp-Massey algorithm allows you to reverse-engineer an equivalent LFSR from an output sequence.

Given your requirement for a list of ~10,000,000 items, you'd want a 24-bit maximal-length LFSR, and simply discard outputs larger than the size of your list.

For what it's worth, an LFSR is generally quite fast compared to a typical linear congruential PRNG of the same period. In hardware, an LFSR is extremely simple, consisting of an N-bit register, and M 2-input XOR's (where M is the number of taps -- sometimes only a couple, and rarely more than a half dozen or so).
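A software sketch of a right-shift Galois LFSR (my own framing). The 8-bit demo uses the commonly published maximal tap mask 0xB8, i.e. taps (8,6,5,4); for the ~10,000,000-item case you'd use a 24-bit maximal LFSR instead (e.g. taps (24,23,22,17), mask 0xE10000) and discard outputs larger than the list, as described above.

```java
public class LfsrShuffle {
    // One step of a right-shift Galois LFSR: with a maximal-length tap mask,
    // the state visits every value in 1..2^bits-1 exactly once per period.
    static int step(int lfsr, int taps) {
        int lsb = lfsr & 1;
        lfsr >>>= 1;
        if (lsb != 0) lfsr ^= taps;
        return lfsr;
    }

    public static void main(String[] args) {
        int taps = 0xB8;                // 8-bit maximal taps (8,6,5,4)
        int seed = 1, n = 200;          // pretend the list has 200 items
        int state = seed, emitted = 0;
        do {
            if (state <= n) emitted++;  // would print list[state - 1] here
            state = step(state, taps);
        } while (state != seed);        // full period: 2^8 - 1 = 255 states
        System.out.println(emitted);    // 200: each index emitted exactly once
    }
}
```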

Yellowstone answered 8/12, 2009 at 18:53 Comment(1)
That's a really good idea too -- some game programmers will thank you for pointing this out.Explicate

If there's enough space, you could store the nodes' pointers in an array, create a bitmap, and draw random ints that point to the next chosen item. If it's already chosen (you store that in your bitmap), then take the closest one (left or right; you can randomize that), until no items are left.

If there's not enough space, then you could do the same without storing the nodes' pointers, but time will suffer (that's the time-space tradeoff ☺).

Zales answered 8/12, 2009 at 12:46 Comment(1)
The OP already said they didn't want to have to create a same-sized array of indices, which an array of pointers is equivalent to. Then if they did allow that, I don't understand why you would need this vague 'bitmap of random ints' when, if you were allowed to have a 2nd container, you could simply use std::shuffle on that and then iterate it, which would be far simpler and (I presume) faster.Shadoof

You can create a pseudorandom, 'secure' permutation using a block cipher - see here. The key insight is that, given a block cipher of n bits length, you can use 'folding' to shorten it to m < n bits, then the trick antti.huima already mentioned to generate a smaller permutation from it without spending huge amounts of time discarding out-of-range values.

Polyphemus answered 8/12, 2009 at 16:52 Comment(0)

Essentially what you need is a random number generator that produces the numbers 0..n-1 exactly once each.

Here's a half-baked idea: You could do pretty well by picking a prime p slightly larger than n, then picking a random x between 1 and p-1 whose order in the multiplicative group mod p is p-1 (pick random xs and test which ones satisfy x^i != 1 for i < p-1; you will only need to test a few before you find one). Since x then generates the group, just compute x^i mod p for 1 <= i <= p-2 and that will give you p-2 distinct random(ish) numbers between 2 and p-1. Subtract 2 and throw out the ones >= n, and that gives you a sequence of indexes to print.

This isn't terribly random, but you can use the same technique multiple times, taking the indexes above (+1) and using them as the exponents of another generator x2 modulo another prime p2 (you'll need n < p2 < p), and so on. A dozen repetitions should make things pretty random.
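A toy-sized sketch of the generator idea (tiny p for illustration; class and helper names mine). For p = 11 the element 2 has order p-1 = 10, so its powers sweep 2..p-1 exactly once:

```java
public class GeneratorShuffle {
    // x generates the multiplicative group mod p iff x^i != 1 for 1 <= i < p-1.
    // (Brute force; fine for a toy p. For large p you'd use the answer's
    // random-candidate search.)
    static boolean isGenerator(int x, int p) {
        long v = 1;
        for (int i = 1; i < p - 1; i++) {
            v = (v * x) % p;
            if (v == 1) return false;
        }
        return true;
    }

    // Indices 0..n-1 in the order induced by powers of a generator x mod p.
    static int[] order(int n, int p, int x) {
        int[] out = new int[n];
        int j = 0;
        long v = 1;
        for (int i = 1; i <= p - 2; i++) {  // x^i covers 2..p-1 exactly once
            v = (v * x) % p;
            long index = v - 2;             // shift into 0..p-3
            if (index < n) out[j++] = (int) index;
        }
        return out;
    }

    public static void main(String[] args) {
        int p = 11, n = 8, x = 2;           // p prime slightly above n
        System.out.println(isGenerator(x, p));
        for (int i : order(n, p, x)) System.out.print(i + " ");
        System.out.println();               // a permutation of 0..7
    }
}
```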

Jaunty answered 10/12, 2009 at 22:36 Comment(0)

My solution depends on the mathematical properties of some cleverly calculated numbers:

range = array size
prime = closestPrimeAfter(range)
root = closestPrimitiveRootTo(range/2)
state = root

With this setup we can calculate the following repeatedly and it will iterate all elements of the array exactly once in a seemingly random order, after which it will loop to traverse the array in the same exact order again.

state = (state * root) % prime

I implemented and tested this in Java, so I decided to paste my code here for future reference.

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Random;

public class PseudoRandomSequence {

    private long            state;
    private final long  range;
    private final long  root;
    private final long  prime;
    //Debugging counter
    private int             dropped = 0;

    public PseudoRandomSequence(int r) {
        range = r;
        prime = closestPrimeAfter(range);
        root = modPow(generator(prime), closestPrimeTo(prime / 2), prime);
        reset();
        System.out.println("-- r:" + range);
        System.out.println("   p:" + prime);
        System.out.println("   k:" + root);
        System.out.println("   s:" + state);
    }

    // https://en.wikipedia.org/wiki/Primitive_root_modulo_n
    private static long modPow(long base, long exp, long mod) {
        return BigInteger.valueOf(base).modPow(BigInteger.valueOf(exp), BigInteger.valueOf(mod)).longValue();
    }

    //http://e-maxx-eng.github.io/algebra/primitive-root.html
    private static long generator(long p) {
        ArrayList<Long> fact = new ArrayList<Long>();
        long phi = p - 1, n = phi;
        for (long i = 2; i * i <= n; ++i) {
            if (n % i == 0) {
                fact.add(i);
                while (n % i == 0) {
                    n /= i;
                }
            }
        }
        if (n > 1) fact.add(n);
        for (long res = 2; res <= p; ++res) {
            boolean ok = true;
            for (long i = 0; i < fact.size() && ok; ++i) {
                ok &= modPow(res, phi / fact.get((int) i), p) != 1;
            }
            if (ok) {
                return res;
            }
        }
        return -1;
    }

    public long get() {
        return state - 1;
    }

    public void advance() {
        //Skip any result that overshoots the range; drops occur whenever prime > range + 1.
        dropped--;
        do {
            state = (state * root) % prime;
            dropped++;
        } while (state > range);
    }

    public void reset() {
        state = root;
        dropped = 0;
    }

    private static boolean isPrime(long num) {
        if (num == 2) return true;
        if (num % 2 == 0) return false;
        for (int i = 3; i * i <= num; i += 2) {
            if (num % i == 0) return false;
        }
        return true;
    }

    private static long closestPrimeAfter(long n) {
        long up;
        for (up = n + 1; !isPrime(up); ++up)
            ;
        return up;
    }

    private static long closestPrimeBefore(long n) {
        long dn;
        for (dn = n - 1; !isPrime(dn); --dn)
            ;
        return dn;
    }

    private static long closestPrimeTo(long n) {
        final long dn = closestPrimeBefore(n);
        final long up = closestPrimeAfter(n);
        return (n - dn) > (up - n) ? up : dn;
    }

    private static boolean test(int r, int loops) {
        final int array[] = new int[r];
        Arrays.fill(array, 0);
        System.out.println("TESTING: array size: " + r + ", loops: " + loops + "\n");
        PseudoRandomSequence prs = new PseudoRandomSequence(r);
        final long ct = loops * r;
        //Iterate the array 'loops' times, incrementing the value for each cell for every visit. 
        for (int i = 0; i < ct; ++i) {
            prs.advance();
            final long index = prs.get();
            array[(int) index]++;
        }
        //Verify that each cell was visited exactly 'loops' times, confirming the validity of the sequence
        for (int i = 0; i < r; ++i) {
            final int c = array[i];
            if (loops != c) {
                System.err.println("ERROR: array element @" + i + " was " + c + " instead of " + loops + " as expected\n");
                return false;
            }
        }
        //TODO: Verify the "randomness" of the sequence
        System.out.println("OK:  Sequence checked out with " + prs.dropped + " drops (" + prs.dropped / loops + " per loop vs. diff " + (prs.prime - r) + ") \n");
        return true;
    }

    //Run lots of random tests
    public static void main(String[] args) {
        Random r = new Random();
        r.setSeed(1337);
        for (int i = 0; i < 100; ++i) {
            PseudoRandomSequence.test(r.nextInt(1000000) + 1, r.nextInt(9) + 1);
        }
    }

}

This was inspired by a small C implementation of a 2D graphics "dissolve" effect as described in Graphics Gems vol. 1, which in turn is an adaptation to 2D (with some optimizations) of a mechanism called an "LFSR" (wikipedia article here, original dissolve.c source code here).

Nephelometer answered 21/6, 2021 at 17:29 Comment(1)
I think this fails on the "I want the list to be shuffled in true random way with all permutations equally likely" criteria. But it might be good enough for many practical purposes.Comanchean
