I have a (potentially large) list data of 3-tuples of small non-negative integers, like
data = [
    (1, 0, 5),
    (2, 4, 2),
    (3, 2, 1),
    (4, 3, 4),
    (3, 3, 1),
    (1, 2, 2),
    (4, 0, 3),
    (0, 3, 5),
    (1, 5, 1),
    (1, 5, 2),
]
I want to order the tuples within data so that neighboring tuples (data[i] and data[i+1]) are "as similar as possible".
Define the dissimilarity of two 3-tuples as the number of elements which are unequal between them. E.g.
(0, 1, 2) vs. (0, 1, 2): Dissimilarity 0.
(0, 1, 2) vs. (0, 1, 3): Dissimilarity 1.
(0, 1, 2) vs. (0, 2, 1): Dissimilarity 2.
(0, 1, 2) vs. (3, 4, 5): Dissimilarity 3.
(0, 1, 2) vs. (2, 0, 1): Dissimilarity 3.
Question: What is a good algorithm for finding the ordering of data which minimizes the sum of dissimilarities between all neighboring 3-tuples?
Some code
Here's a function which computes the dissimilarity between two 3-tuples:
def dissimilar(t1, t2):
    return sum(int(a != b) for a, b in zip(t1, t2))
Here's a function which computes the summed total dissimilarity of data, i.e. the number which I seek to minimize:
def score(data):
    return sum(dissimilar(t1, t2) for t1, t2 in zip(data, data[1:]))
The problem can be solved by simply running score() over every permutation of data:
import itertools

n_min = 3*len(data)  # some large number
for perm in itertools.permutations(data):
    n = score(perm)
    if n < n_min:
        n_min = n
        data_sorted = list(perm)
print(data_sorted, n_min)
Though the above works, it's very slow as we explicitly check each and every permutation (resulting in O(N!) complexity). On my machine the above takes about 20 seconds when data has 10 elements.
For completeness, here's the result of running the above given the example data:
data_sorted = [
    (1, 0, 5),
    (4, 0, 3),
    (4, 3, 4),
    (0, 3, 5),
    (3, 3, 1),
    (3, 2, 1),
    (1, 5, 1),
    (1, 5, 2),
    (1, 2, 2),
    (2, 4, 2),
]
with n_min = 15. Note that several other orderings (10 in total) with a score of 15 exist. For my purposes these are all equivalent and I just want one of them.
Final remarks
In practice the size of data may be as large as, say, 10000.
The sought-after algorithm should beat O(N!), i.e. probably be polynomial in time (and space).
If no such algorithm exists, I would be interested in "near-solutions", i.e. a fast algorithm which gives an ordering of data with a small but not necessarily minimal total score. One such algorithm would be lexicographic sorting, i.e.
sorted(data) # score 18
though I hope to be able to do better than this.
Edit (comments on accepted solution)
I have tried all of the below heuristic solutions given as code (I have not tried e.g. Google OR-tools). For large len(data), I find that the solution of Andrej Kesely is both quick and gives the best results.
The idea behind this method is quite simple. The sorted list of data elements (3-tuples) is built up one by one. Given some data element, the next element is chosen to be the most similar one out of the remaining (not yet part of the sorted list) data.
Essentially this solves a localized version of the problem where we only "look one ahead", rather than optimizing globally over the entire data set. We can imagine a hierarchy of algorithms looking n ahead, each successively delivering better (or at least as good) results, but at the cost of being much more expensive. The solution of Andrej Kesely then sits lowest in this hierarchy. The algorithm at the highest spot, looking len(data) ahead, solves the problem exactly.
Let's settle for "looking 1 ahead", i.e. the answer by Andrej Kesely. This leaves room for a) the choice of initial element, and b) what to do when several elements are equally good candidates (same dissimilarity) for use as the next one. By choosing the first element in data as the initial element and the first occurrence of an element with minimal dissimilarity as the next one, both a) and b) are determined by the original order of elements within data. As Andrej Kesely points out, it then helps to (lex)sort data in advance.
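To make this concrete, here is a minimal sketch of the greedy "look 1 ahead" approach, reusing dissimilar() and score() from above (greedy_order is an illustrative name of my own choosing, not Andrej Kesely's actual code):

def greedy_order(data):
    # Greedy "look one ahead": start from data[0] and repeatedly append the
    # remaining tuple most similar to the last one chosen. min() keeps the
    # first occurrence among equally good candidates, so the result depends
    # on the initial order of data.
    remaining = list(data)
    ordered = [remaining.pop(0)]
    while remaining:
        last = ordered[-1]
        idx = min(range(len(remaining)),
                  key=lambda i: dissimilar(last, remaining[i]))
        ordered.append(remaining.pop(idx))
    return ordered

print(score(greedy_order(sorted(data))))  # lex-sorting in advance tends to help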
In the end I went with this solution, but refined in a few ways:
- I try out the algorithm for 6 initial sortings of data: lex sort for columns (0, 1, 2), (2, 0, 1), (1, 2, 0), all in ascending as well as descending order.
- For large len(data), the algorithm becomes too slow for me. I suspect it scales like O(n²). I thus process chunks of the data of size n_max independently, with the final result being the different sorted chunks concatenated. Transitioning from one chunk to the next we expect a dissimilarity of 3, but this is unimportant if we keep n_max large. I go with n_max = 1000 (a sketch combining both refinements is given after this list).
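Schematically, these two refinements can be combined along the following lines (a sketch only; order_chunk and order_data are illustrative names, and greedy_order / score are the functions from above):

def order_chunk(chunk):
    # Try 6 initial lex sorts (column orders (0, 1, 2), (2, 0, 1), (1, 2, 0),
    # each ascending and descending) and keep the greedy result with the
    # lowest score.
    best, best_score = None, None
    for cols in [(0, 1, 2), (2, 0, 1), (1, 2, 0)]:
        for reverse in (False, True):
            start = sorted(chunk, key=lambda t: tuple(t[c] for c in cols),
                           reverse=reverse)
            candidate = greedy_order(start)
            s = score(candidate)
            if best_score is None or s < best_score:
                best, best_score = candidate, s
    return best

def order_data(data, n_max=1000):
    # Order chunks of size n_max independently and concatenate them; each
    # chunk boundary may add a dissimilarity of up to 3.
    result = []
    for i in range(0, len(data), n_max):
        result.extend(order_chunk(data[i:i + n_max]))
    return result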
As an implementation note, the performance can be improved by not using data.pop(idx), as this itself is O(n). Instead, either leave the original data as is and use another data structure for keeping track of which elements/indices have been used, or replace data[idx] with some marker value upon use.
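For instance, the marker-value variant could look like this (a sketch; greedy_order_marked is an illustrative name):

def greedy_order_marked(data):
    # Same greedy ordering as above, but used entries are overwritten with
    # None instead of being popped from a shrinking list.
    remaining = list(data)
    ordered = [remaining[0]]
    remaining[0] = None  # marker: element already used
    for _ in range(len(data) - 1):
        last = ordered[-1]
        best_idx, best_d = None, 4  # any real dissimilarity is at most 3
        for i, t in enumerate(remaining):
            if t is not None:
                d = dissimilar(last, t)
                if d < best_d:
                    best_idx, best_d = i, d
        ordered.append(remaining[best_idx])
        remaining[best_idx] = None
    return ordered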
Comments
- import random; f = lambda n: random.randint(0, n); n = 1000; data = [(f(n//30), f(n//20), f(n//10)) for _ in range(n)]. – Moe
- sum((a != x) + (b != y) + (c != z) for (a, b, c), (x, y, z) in zip(data, data[1:])) made it ~4 times faster for me. (Just saying since I'd rather wait 9 seconds than 36 for a test to complete.) – Romine
- collections.Counter(data) is O(n), but it doesn't help much, as non-unique tuples are rare (indeed all tuples in the example data are unique). Identical tuples should certainly follow each other in the optimal ordering, so we may count them up and solve the problem for the unique case, then add in the counting information in the end. Still, the core problem remains. Or am I missing some insight? – Moe
- n = len(data), i.e. no fixed upper value. So even in the n → ∞ limit one can expect the majority of the tuples to be unique. – Moe
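The deduplication idea from the Counter comment could be sketched as follows (order_with_duplicates and the order_unique parameter are illustrative names):

from collections import Counter

def order_with_duplicates(data, order_unique):
    # Order the unique tuples with some algorithm passed as order_unique
    # (e.g. greedy_order), then place each duplicate right next to its first
    # occurrence, since identical tuples should follow each other in an
    # optimal ordering.
    counts = Counter(data)
    ordered_unique = order_unique(list(counts))
    return [t for t in ordered_unique for _ in range(counts[t])]

It could be used as, e.g., order_with_duplicates(data, greedy_order).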