Count all unique quadruples that sum to a given value - is an O(N^3) algorithm known?

I am supposed to solve this problem in as low time complexity as possible, but let me be more specific.

You are given a sorted array of integers that contains duplicates.

A unique quadruple is a set of four indexes. The array elements at those indexes have to sum to a given value X. For example:

  1. Given an array [10, 20, 30, 40] and X = 100, there is only one quadruple: (0, 1, 2, 3).

  2. Given an array [0, 0, 0, 0, 0] and X = 0, there are 5 quadruples: (0, 1, 2, 3), (0, 1, 2, 4), (0, 1, 3, 4), (0, 2, 3, 4), (1, 2, 3, 4).

On the Internet there are plenty of N^3 solutions, but those are for quadruples that are unique in terms of values, not indexes. In those solutions, example number 1 would still give only one quadruple, (10, 20, 30, 40), but example number 2 would give only one quadruple, (0, 0, 0, 0), not five of them.
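
To make the difference concrete, here is a small brute-force check (just an illustration, not meant to be efficient):

    from itertools import combinations

    arr, X = [0, 0, 0, 0, 0], 0
    # Count quadruples of indexes (what I need):
    by_index = sum(1 for q in combinations(range(len(arr)), 4)
                   if arr[q[0]] + arr[q[1]] + arr[q[2]] + arr[q[3]] == X)
    # Count quadruples of values (what the usual N^3 solutions produce):
    by_value = len({q for q in combinations(arr, 4) if sum(q) == X})
    print(by_index, by_value)  # prints 5 1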

I couldn't find an O(N^3) solution that solves my problem rather than the other one. I can easily write a program that solves it in O(N^3 log N) time. I have also heard that the lower complexity bound for this problem is allegedly not known. Is an O(N^3) solution known, though?

Solutions known to me:

  1. Obvious naive approach O(N^4):

    int solution(int arr[], int arrSize, int X){
        int counter = 0;
        for(int i=0; i<arrSize-3; ++i)
            for(int j=i+1; j<arrSize-2; ++j)
                for(int k=j+1; k<arrSize-1; ++k)
                    for(int l=k+1; l<arrSize; ++l)
                        if(arr[i] + arr[j] + arr[k] + arr[l] == X)
                            ++counter;
        return counter;
    }
    
  2. Approach using triplets and binary search, O(N^3 log N):

    int solution(int arr[], int arrSize, int X){
        int counter = 0;
        for(int i=0; i<arrSize-3; ++i)
            for(int j=i+1; j<arrSize-2; ++j)
                for(int k=j+1; k<arrSize-1; ++k){
                    int subX = X - arr[i] - arr[j] - arr[k];
                    // Binary search that returns the position of the first
                    // occurrence of subX in arr in range [k+1, arrSize),
                    // or -1 if not found
                    int first = binFirst(subX, arr, k+1, arrSize);
                    // Binary search that returns the position of the last
                    // occurrence of subX in arr in range [k+1, arrSize),
                    // or -1 if not found
                    int last = binLast(subX, arr, k+1, arrSize);
                    if(first != -1)
                        counter += last - first + 1;
                }
        return counter;
    }

Naturally, the above algorithm could be improved by counting all duplicates of arr[i], arr[j], arr[k], but as far as I can tell, it does not lower the actual O(N^3 log N) complexity.
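
For reference, here is approach 2 again as a short Python sketch, using the standard bisect module in place of binFirst/binLast:

    from bisect import bisect_left, bisect_right

    def solution(arr, X):
        # Fix i < j < k, then count how many times the remaining value
        # X - arr[i] - arr[j] - arr[k] occurs in the sorted suffix arr[k+1:].
        n = len(arr)
        counter = 0
        for i in range(n - 3):
            for j in range(i + 1, n - 2):
                for k in range(j + 1, n - 1):
                    subX = X - arr[i] - arr[j] - arr[k]
                    first = bisect_left(arr, subX, k + 1, n)
                    last = bisect_right(arr, subX, k + 1, n)
                    counter += last - first
        return counter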

Quicken answered 30/10, 2022 at 14:26 Comment(24)
Given an array of N zeroes, and a target of X = 0, the number of quadruples is C(N, 4) = N(N-1)(N-2)(N-3)/24, which is O(N^4). I find it impressive that you can produce an algorithm that runs faster than the size of the output.Vocalism
@RaymondChen Since it is not important to print the quadruples, just to count them, you have, as you can see, solved the problem in O(1) time (you actually counted them). The general solution I can do is: get all triplets (~N^3) and use binary search to find the lower and upper bound in the rest of the array, to count the matching elements for each triplet, hence it's N^3 log N. I can post the code if you want. Or maybe I should just post it - would it make the question more interesting?Quicken
@Smoksul Can we use memory to solve this problem? I am thinking of having a HashMap of pairs with their frequencies, which would cost O(n^2) space complexity with O(n^2) time complexity. Then it looks like the fact that the array is sorted is not being used.Satisfactory
@NavpreetDevpuri yes we can, but I would like you to elaborate how would that workQuicken
@Smoksul I have added references to the article with full explanationsSatisfactory
Just calculate all sums of two numbers and store them (the sum as comparison value and the two constituent indices as additional stored info) in a hashtable and a list. Loop over all pairs in the list and calculate X - current pair sum. The result should be the sum of the other two numbers. Test, whether those are in the hashtable. Average complexity is only slightly above O(n²). The hashtable should be able to store multiple entries (possible pairs) for each number (sum). Otherwise create a short linked list for each entry.Arbitrary
@Arbitrary I don't think it's going to be O(n^2). For X=0 and a majority of pairs summing to 0, the list for sum 0 will be long. Also, you cannot just take the number of tuples under that sum in the hashtable as the number of quadruples - you have to check that the indexes are unique, and somehow also that such a combination wasn't counted already.Quicken
Oh you said average. Well I am looking for best worst-case solution, not average one.Quicken
@Smoksul For best worst case, you cannot assume a hashmap to be O(1). So one could assume a tree with O(log n) create and search time (per element). This would increase overall worst time to O(n² log n) for counting. The same elements would be stored with a counter. If you want to list the results, you would be stuck with O(n^4), as there can be as many results (all zero e.g.). But in your examples, you also just return the counter.Arbitrary
If you have enough preinitialized (0) memory, you would get away with O(n²). The amount of memory you would need is O(max²-min²) to have an array with every possible value. You need special handling of multiple same values.Arbitrary
That special handling keeps the complexity at O(n²). You would store multiple elements (and the pairs built with them) only once with counter. In the for loop you would consider only sums of the first pair of <X/2. And you would special handle the case of =X/2.Arbitrary
@Arbitrary let's take the case of X=0 and the array containing numbers: [-5, -1, 2, 4]. Let's give them indexes [0, 1, 2, 3]. How do you make sure that you will not count both pairs of pairs (0, 1), (2, 3) and (0, 2), (1, 3)?Quicken
We create all sums of pairs: -6, -3, -1, 1, 3, 6. We loop over the ones <X/2=0 and combine with the ones >X/2=0 so that the sum is X=0. You say (with extension to the 3rd case) that all three solutions are identical. That is true. We can assume that we always get all the combinations and divide the count by 3.Arbitrary
We can special-case four identical, three identical and a 4th different, 2 identical and a different pair and two pairs of identical values. All those special cases could be calculated (as) fast. Then we also special case the first and second pair having a sum of exactly X/2 and the combination of the exact sum of X/2 combined with identical elements.Arbitrary
E.g. [-4 -2 2 4]. Sums -6, -2, 0, 0, 2, 6. 2 classical combinations of -6+6 and -2+2. Special case two pairs of sum X/2=0. 1 special pairwise combination (without considering order of 0+0 or 0+0) possible. => 3. 3/3=1.Arbitrary
@Arbitrary what about [-4, 0, 2, 4, 8]? You will get a pair (0,1) with sum -4+0 = -4. You will also get a pair (0, 4) with sum -4+8=4. From what I understand your solution would count this case, even though it's not valid (0,1,0,4).Quicken
That is the case, if X=2*A+B+C. Another special case (shame on me), which we can filter out fast. Loop over all elements A, check for existence of a pair B+C with B+C=2*X-2*A. Recreate all pairs with A and check, whether they would originally be counted. Subtract that from the counter.Arbitrary
So basically you will want to remove all the excess quadruples (a, a, b, c) that were counted. You can find them by forming triplets (a, b, c) and just doubling the value of a. Worst case scenario you will have to subtract (almost) all of those triplets which makes it O(N^3) solution, which is still nice. On the other hand, maybe if you applied another hashing method to the triplets, you'd be able to remove them faster.Quicken
It is faster: For each element (factor N), you look for a fitting (precalculated) pair (or more exactly the number of pairs existing). The lookup has a factor of 1 (with large zeroed memory) or log N² (with a tree or binary search in the pairs). So this correction step alone will take between O(N) and O(N * log N²) overall, whereas creating the pairs (as also needed for the main step) takes between O(N²) (large zeroed memory) and O(N² * log N²) (sorted tree or sorting the flat-listed pairs before binary search). We do not need to identify each triplet, we just need to know, how many there are.Arbitrary
Is this online for testing somewhere?Galengalena
@Arbitrary yeah that seems right, but also then you could face a quadruplet of (a,a,a,b) and would remove it from the count even though it wasn't counted in the first place. So you'd need to add all such quadruplets, but then you'd probably add quadruples of (a,a,a,a) and then you need to remove them. Seems still O(N^2) though!Quicken
@KellyBundy what do you mean by online testing?Quicken
@Smoksul I mean is this from something like LeetCode or Codewars, so that we can submit potential solutions there and the site tests them?Galengalena
Well, sadly it isn't. I just took your solution, ran some tests (not extensive ones though) and got the same results as with the brute-force method.Quicken

O(n²) in Python, inspired by גלעד ברקן's answer:

from itertools import combinations
from collections import Counter

def solution(arr, X):
    cd = Counter(map(sum, combinations(arr, 2)))
    count = 0
    for i, b in enumerate(arr):
        for d in arr[i+1:]:
            cd[b+d] -= 1
        for a in arr[:i]:
            count += cd[X - (a+b)]
    return count

Call the quadruples (a,b,c,d). We focus on the second element, b. For each possible b, we pair it with each possible a (elements left of b) and look up how many pairs (c,d) (elements right of b) complete the sum a+b+c+d = X, i.e., sum to X - (a+b). For that lookup, we keep a hash map cd that maps sums of pairs to counts of pairs. Initially it covers all pairs of the whole arr, but before handling each b, we remove b's contributions from the map.

C++ version, where a/b/c/d are indexes instead of elements:

int solution(int arr[], int n, int X){
  std::unordered_map<int, int> cd;
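  // cd[s] = number of index pairs (c, d), c < d, whose values sum to s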
  for (int c=0; c<n; c++)
    for (int d=c+1; d<n; d++)
      cd[arr[c]+arr[d]]++;
  int count = 0;
  for (int b=0; b<n; b++) {
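    // Remove pairs starting at b; after this (and earlier iterations), cd only counts pairs with both indexes > b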
    for (int d=b+1; d<n; d++)
      cd[arr[b]+arr[d]]--;
    for (int a=0; a<b; a++)
      count += cd[X - (arr[a]+arr[b])];
  }
  return count;
}

Python code with testing (Try it online!):

from itertools import combinations
from collections import Counter

def solution(arr, X):
    cd = Counter(map(sum, combinations(arr, 2)))
    count = 0
    for i, b in enumerate(arr):
        for d in arr[i+1:]:
            cd[b+d] -= 1
        for a in arr[:i]:
            count += cd[X - (a+b)]
    return count

import random
from operator import countOf

def naive(arr, X):
    sums = map(sum, combinations(arr, 4))
    return countOf(sums, X)

arr = random.choices(range(100), k=100)
print(naive(arr, 200))
print(solution(arr, 200))

C++ code with testing.

Galengalena answered 31/10, 2022 at 19:56 Comment(0)

Detailed explanation of how to arrive at the best solution, step by step

Let's invent the solution.

Now, suppose we create an array pairs that contains the sums of all pairs, for example,

arr = [10, 20, 30, 40]
pairs = [10+20, 10+30, 10+40, 20+30, 20+40, 30+40]

There is a pattern. We have three pairs for 10+x, two pairs for 20+x, one pair for 30+x, and zero pairs for 40+x.

 [10+20, 10+30, 10+40, 20+30, 20+40, 30+40]
# -------------------  ------------  -----

 [30, 40, 50, 50, 60, 70]
# ----------  ------  --

So, the total number of pairs is

3 + 2 + 1
= sum of first (n-1) natural numbers
= (n - 1) * (n - 1 + 1) / 2
= (n - 1) * n / 2
= (n^2 - n) / 2

It might look like the whole pairs array will be sorted, but that is not true. However, each of those sub-arrays in pairs is sorted, because the initial arr is sorted. For example,

arr = [10, 20, 30, 90]
pairs = [10+20, 10+30, 10+90, 20+30, 20+90, 30+90]

# Those sub-arrays are sorted
 [30, 40, 100, 50, 110, 120]
# -----------  -------  ---

Now, let's write the pairs with their original arr indices

pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

(0, 1) and (0, 2) together are not a valid quadruple, because index 0 appears in both pairs. So, how can we logically find the valid pairs?

We only have one valid pair for (0, 1), namely (2, 3), which contains neither 0 nor 1

 [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
#  x  x    x       x       x       x       ----

One useful fact: we can always rewrite a quadruple so that one pair lies entirely to the left of the other pair. For example,

x = 100
arr = [10, 20, 30, 40]
pairs = [30, 40, 50, 50, 60, 70]

 [10, 20, 30, 40]
# --  ------  --
quadruple = (10 + 40) + (20 + 30)

# Which we can rewrite as
 [10, 20, 30, 40]
# ------  ------
quadruple = (10 + 20) + (30 + 40) = 30 + 70

# Which is as follows
pairs = [30, 40, 50, 50, 60, 70]
#        --                  --

So, we can do as follows to solve the problem

for pair0 in pairs:
    valid_pairs_for_pair0 = # Somehow get the valid pairs
    for pair1 in valid_pairs_for_pair0:
        if pair0 + pair1 == x:
            ans += 1

But the above solution is O(n^4), because pairs is of length (n^2 - n) / 2.

We can do better as we know those sub-arrays in the pairs are sorted.

arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] # n = 10
pairs = [
  (0,1),(0,2),(0,3),(0,4),(0,5),(0,6),(0,7),(0,8),(0,9),# (0,x) -> 9 pairs -> 10 - 0 - 1
  (1,2),(1,3),(1,4),(1,5),(1,6),(1,7),(1,8),(1,9),# (1,x) -> 8 pairs -> 10 - 1 - 1
  (2,3),(2,4),(2,5),(2,6),(2,7),(2,8),(2,9),# (2,x) -> 7 pairs -> 10 - 2 - 1
  (3,4),(3,5),(3,6),(3,7),(3,8),(3,9),# (3,x) -> 6 pairs -> 10 - 3 - 1
  (4,5),(4,6),(4,7),(4,8),(4,9),# (4,x) -> 5 pairs -> 10 - 4 - 1
  (5,6),(5,7),(5,8),(5,9),# (5,x) -> 4 pairs -> 10 - 5 - 1
  (6,7),(6,8),(6,9),# (6,x) -> 3 pairs -> 10 - 6 - 1
  (7,8),(7,9),# (7,x) -> 2 pairs -> 10 - 7 - 1
  (8,9),# (8,x) -> 1 pair -> 10 - 8 - 1
]

# We need to find the first valid pair and all of the pairs after that will be valid.

first valid pair index for (0, 1) => first (2,x) pair => (2,3) => pairs[9 + 8]
first valid pair index for (0, 2) => first (3,x) pair => (3,4) => pairs[9 + 8 + 7]
first valid pair index for (0, 3) => first (4,x) pair => (4,5) => pairs[9 + 8 + 7 + 6]

# There is a pattern
pairs[9 + 8] => pairs[sum(9 to 1) - sum(7 to 1)]
pairs[9 + 8 + 7] => pairs[sum(9 to 1) - sum(6 to 1)]
pairs[9 + 8 + 7 + 6] => pairs[sum(9 to 1) - sum(5 to 1)]

# That's how we get the start and end for the binary search
start = firstNSum(n - 1) - firstNSum(n - i1 - 2)
end = start + n - (i1 + 1) - 1 # n - (i1 + 1) - 1 is the number of pairs for (i1,x) pairs

Now, we can solve the problem as follows.

# For pair0 in pairs:
    # Binary search for all valid sub-arrays of pairs for pair0

Solution 1: Binary search

Time complexity: O(n^3 log n). For each of the O(n^2) pairs we binary-search every later (i, x) group, and log(n) + log(n-1) + ... + log(1) = log(n!) = O(n log n) per pair.

Space complexity: O(n^2)

def firstNSum(n):
    return n * (n + 1) // 2

def binary_search(pairs, x, start, end):
    while start < end:
        mid = (start + end) // 2
        if pairs[mid][1] < x:
            start = mid + 1
        else:
            end = mid
    return start


def count_four_pairs_with_sum(arr, x):
    n = len(arr)

    ans = 0

    pairs = []

    for i0 in range(n - 1):
        for i1 in range(i0 + 1, n):
            curr_sum = arr[i0] + arr[i1]
            pairs.append([(i0, i1), curr_sum])

    for [(i0, i1), curr_sum] in pairs:
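        # start/end delimit the block of (i1+1, x) pairs inside pairs (see the firstNSum derivation above)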

        start = firstNSum(n - 1) - firstNSum(n - i1 - 2)
        end = start + n - (i1 + 1) - 1

        while start < len(pairs):
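            # Count pairs in the current group whose sum equals x - curr_sum, then advance start/end to the next group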
            x_start = binary_search(pairs, x - curr_sum, start, end)
            x_end = binary_search(pairs, x - curr_sum + 1, start, end)

            ans += x_end - x_start

            i1 += 1
            start += n - i1 - 1
            end = start + n - (i1 + 1) - 1

    return ans

arr = [10, 20, 30, 40]
x = 100
print(count_four_pairs_with_sum(arr, x))

We can do better: alongside the number of pairs with each sum, we can also store how many of those pairs come from each (i, x) group of pairs.

# Loop for i0
    # Loop for i1
        # ans += number of valid pairs for (i0, i1), i.e. pairs whose first index lies in (i1+1, n)

Solution 2: Using hashmap

Time complexity: O(n^3)

Space complexity: O(n^3)

from collections import defaultdict

def count_four_pairs_with_sum(arr, x):
    n = len(arr)

    ans = 0

    sum_freq = defaultdict(lambda: defaultdict(int))
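    # sum_freq[s][i0] = number of pairs (i0, i1), i1 > i0, with arr[i0] + arr[i1] == s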

    for i0 in range(n - 1):
        for i1 in range(i0 + 1, n):
            curr_sum = arr[i0] + arr[i1]
            sum_freq[curr_sum][i0] += 1

    for i0 in range(n - 1):
        for i1 in range(i0 + 1, n):
            curr_sum = arr[i0] + arr[i1]
            needed_sum = x - curr_sum
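            # Valid completing pairs are those whose first index is greater than i1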
            valid_needed_sum_count = sum([sum_freq[needed_sum][i] for i in range(i1+1, n)])
            ans += valid_needed_sum_count

    return ans


arr = [0, 0, 0, 0, 0]
x = 0
print(count_four_pairs_with_sum(arr, x))

We can do better (as Kelly Bundy's answer showed) if we have the frequencies of all possible pair sums, and we look up all valid pair1 for each pair0.

Let a + b + c + d = x

a can be any number on the left of b

c and d can be any pair right of b

This is because we can always rewrite any quadruple in such a way that a < b < c < d, for example,

 [0, 1, 2, 3, 4, 5, ...., n-1, n]
#       a     b            c   d

So, for any b we only need to count the valid (c, d) pairs to its right, which means we don't need to consider any pair containing an index to the left of b. For example, (c, d) = (2, 5) is invalid for b = 4, because 2 is to the left of 4.

Now, we can solve that as follows.

# For every b
  # Remove all pairs for b
  # For every valid a, a < b
    # ans += number of valid pairs in remaining pairs

The outer loop over b keeps removing the pairs for the current b, which means that when b = 4 we have already removed all pairs from the previous values b = 1, 2, 3.

Final solution: Using hashmap

Time complexity: O(n^2)

Space complexity: O(n^2)

from collections import defaultdict

def count_four_pairs_with_sum(arr, x):
    n = len(arr)

    sum_freq = defaultdict(int)
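    # sum_freq[s] = number of index pairs (c, d), c < d, with arr[c] + arr[d] == s (initially over the whole array)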

    for i0 in range(n - 1):
        for i1 in range(i0 + 1, n):
            curr_sum = arr[i0] + arr[i1]
            sum_freq[curr_sum] += 1

    ans = 0
    for i, b in enumerate(arr):

        for j in arr[i+1:]:
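            # b is now fixed as the second element, so pairs containing b can no longer serve as (c, d)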
            sum_freq[b+j] -= 1

        for a in arr[:i]:
            c_plus_d = x - (a+b)
            ans += sum_freq[c_plus_d]

    return ans

arr = [0, 0, 0, 0, 0]
x = 0
print(count_four_pairs_with_sum(arr, x))
Satisfactory answered 30/10, 2022 at 16:57 Comment(7)
Those are not the solutions to the problem. 1. Is solving it for only one quadruple, not all of them. One solution there also assumes no duplicates. 2. Is solving it for the values, not indexes. It also assumes no duplicates.Quicken
The code you added is not a correct answer. First of all, the number of all possible quadruples is ~O(N^4), so it is impossible to print them all in less time than that. Secondly, you are printing values, not indexes, and you are solving the problem for values. Even if you changed it to printing indexes, it still would not be correct. Given an array [0,0,0,0,0,0] and X=0, you would first print (0,1,2,5), then (0,1,3,4). The quadruple (0,1,2,4) would never be printed and counted!Quicken
@Smoksul I just figured out the current O(n^3) solution, please check.Satisfactory
"Attribution" in this context means mentioning Kelly Bundy's user name, "Kelly Bundy", explicitly in the body of your answer and saying this code is what they wrote, when you copy and paste it into your answer like you did here.Refectory
A hashmap has O(1) in practice, but O(n) in the worst case. A sorted tree would have O(log n), a large zeroed array O(1).Arbitrary
@Arbitrary in python, the hash function's collisions are only possible for numbers bigger than 2**61 - 1, which is a very big number and the possibility of having a collision is very low. python hash docsSatisfactory
If the hashmap is not initialized with a size, costly resizing occurs often, when adding elements. If you choose an initial size, either you could get collisions (although the hash function gives distinct results), or your hashmap could get larger than N². See also #1299136 and #13515216 for an explanation that the actual hashmap position is hash value mod hashmap size. You probably do not set 2^61 as size.Arbitrary

We can do it in O(n^2) time and space by dynamically updating a hash-map of pair sums.

(See Kelly Bundy's answer for simpler and more efficient space usage.)

Start by creating a hash-map from each pair sum to the set of index-pairs that compose it, traversing from the left and storing, for each index, the pairs it belongs to (O(n) of them); stop before the two rightmost elements, so they are never hashed.

Now traverse towards the left: starting with the third rightmost element, remove all the pairs the current element belongs to (O(n) of them). Then for each sum the element can create by pairing with an element on its right, add the count of pairs in the corresponding hashed sum that would complete the overall sum. Because we removed all instances where the current element was used on the left, we are guaranteed to have partitioned quadruples, where none on the right are represented in the hashed counts from the left.
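
A minimal Python sketch of this idea (my reading of the description above, not verbatim code from the answer; to keep the removal step simple, each pair is stored only under its larger index, which suffices because the sweep moves right to left, and the pair made of the two rightmost elements is checked once before the sweep):

from collections import defaultdict

def count_quadruples(arr, X):
    n = len(arr)
    if n < 4:
        return 0
    # pair_count[s] = number of hashed index-pairs (a, b) with arr[a] + arr[b] == s.
    # Only pairs with b <= n-3 are hashed; the two rightmost elements stay out.
    pair_count = defaultdict(int)
    # sums_of[b] remembers the sums of the pairs stored under b, so that
    # b's pairs can be removed again in O(n) during the leftward sweep.
    sums_of = defaultdict(list)
    for b in range(n - 2):
        for a in range(b):
            s = arr[a] + arr[b]
            pair_count[s] += 1
            sums_of[b].append(s)
    # Right pair made of the two last elements: every hashed pair lies left of it.
    count = pair_count[X - arr[n - 2] - arr[n - 1]]
    # Sweep c, the third index of a quadruple a < b < c < d, towards the left.
    for c in range(n - 3, 1, -1):
        # Remove the pairs stored under c; the map now holds only pairs left of c.
        for s in sums_of[c]:
            pair_count[s] -= 1
        # Pair c with each d to its right and count the completing (a, b) pairs.
        for d in range(c + 1, n):
            count += pair_count[X - arr[c] - arr[d]]
    return count

print(count_quadruples([0, 0, 0, 0, 0], 0))  # 5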

Spiv answered 30/10, 2022 at 20:34 Comment(14)
Could you elaborate a bit more? I don't quite understand what you mean by: "Start by creating the hash-map of sum to tuple counts that compose it, traversing from the left, and store for each index, the tuples it belongs to (O(n) of them), until the right two elements are left and not hashed."Quicken
@Smoksul Say we have input [1, 2 ,3 ,4, 5], target 10. When we reach 3 from the left, we have the map {3: {(1,2)}, 4: {(3,1)}, 5: {(3,2)}}. We check 10 - (4+5) = 1 and find no match. Then we start our traversal back towards the left. We remove the tuples 3 is part of in the map so our new map looks like: {3: {(1,2)}}. We find the match 10 - (3+4) = 3 in the map and count the number of tuples associated with it. We then don't find the match 10 - (3 + 5) = 2. And we're done, in this case.Refectory
While it is indeed clever, I don't think this remains O(N^2) in a case where X=0 and all elements are 0. This will make you have a N^2 long list under 0 in the hash-map of sums. Removal of N elements from that list would be O(N^2) in this case. If my calculations are correct, your solution is O(N^3) in this case, which is still satisfying.Quicken
@Smoksul No, that is incorrect. Any one element can only participate in O(n) tuples. We add O(n) tuples for it once traversing to the right. And we remove O(n) tuples for it once going to the left.Refectory
@Smoksul what's being stored for the element are array indexes. The value they are associated with does not matter for the complexity.Refectory
Yes, I understand that. So in case of only zeros and X=0, your map would be {0: all possible tuples}, then each move to the left with removal would need to remove O(N) tuples from the O(N^2) long list in map(0). Thus, removal would be O(N^2).Quicken
After you build the map, for each element (starting from 3rd rightmost) you have to sum it with i elements on the right, and look for corresponding tuple counts in the map. That's i operations. After that you have to remove all the tuples it is in, n tuples, in worst case scenario all those tuples are in (n^2 - i*n) long list. You have to traverse the whole list to remove them. So every step to the left is n^2 + i(1-n) operations. Sum it from i=1 to i=n (n-2), you get n^3 as leading factor.Quicken
@Smoksul you don't traverse a list. You store the tuples in a hashset and remove each in O(1).Refectory
@Smoksul and you only have at most N tuples to add once and to remove once for any element. That's O(n^2) operations overall, each operation O(1).Refectory
So the key for each tuple in the hashset would be its indexes. But also you want to be able to access an element (might be randomly) containing one of the indexes using a key comprised of only one index in O(1) time, while simultaneously being able to remove this element from the hashset in O(1) time. How would such a structure work?Quicken
@Smoksul when we add any one tuple, we have exact knowledge for it: (1) which sum in the hashmap point to the tuple, and (2) the identity of the tuple in the hashset the sum is pointing to. When we unset the tuples for the element, we remove exactly that tuple as part of that O(n) iteration. What is the problem you see?Refectory
@Smoksul for example [1, 2, 3, 4]. When we reach 3, we have the list of tuples it belongs to: [(4, (0,2)), (5, (1,2))], when we unset them, we call unset (0,2) in the hashset for sum 4 and unset (1,2) in the hashset for sum 5. What's the problem?Refectory
Your "hash-map of sum to set of tuples" was unclear to me until I finally understood it from the comments and the bigger picture. The word "tuple" doesn't imply any particular length, the question is about 4-tuples, and your answer doesn't contain an example to clarify that you mean 2-tuples. I suggest you say 2-tuples or pairs. Wikipedia shows several other names for 2-tuples, but I think they're less clear.Galengalena
@KellyBundy oh, geez, I always assumed "tuple" generically referred to a two-element tuple. Renamed to "index-pairs." Thank you for the lesson!Refectory
