More efficient algorithm to find OR of two sets

Given a matrix of n rows and m columns of 1's and 0's, it is required to find the number of pairs of rows that can be selected so that their bitwise OR is all 1's (i.e. 111...1, m times).

Example:

1 0 1 0 1
0 1 0 0 1
1 1 1 1 0

Answer:

2 ---> the OR of row pairs [1,3] and [2,3] is all 1's

Given that n and m can each be up to 3000, how efficiently can this problem be solved?

PS: I already tried a naive O(n*n*m) method. I am looking for a better solution.

Pedicular asked 3/6, 2017 at 20:47 Comment(14)
Seems like the wrong place to ask this, as it's not a programming question. – Secluded
@DavidMakogon Can you suggest a better place to ask this question? I thought this was the place to ask algorithmic questions. – Pedicular
There's a math stackexchange - maybe there? But, as written, it's off-topic here - you have no code, and it doesn't look like you're searching for a programming solution, just an algorithm. – Secluded
If you make each row a binary string, you can do the same thing in O(n * n) time. – Nola
@Shiva Comparing strings is O(m). – Perished
@ThijsSteel Typo. I meant binary integers. – Nola
@Shiva But the range of m is 3000. You can't store that in an integer. – Pedicular
@Pedicular But Shiva is right. You can't improve the complexity, but it could improve efficiency by a factor of 32. – Perished
You can also make another improvement by checking line by line and stopping at the first fail. I don't feel like doing the math, but it should give you another factor of 2. – Perished
Algorithmic questions are not off-topic here, particularly if they're about finding a more efficient algorithm for a known problem. – Hothouse
If it's a problem from a programming competition, are you sure there isn't any other constraint on the inputs? – Willtrude
@ShashwatKumar Actually it's a programming question and the O(n*n*m) solution passed. I was just curious about a better solution :) ... or searching for a proof that no further optimization is possible. And yes, the number of test cases was <= 10 and the time limit <= 1 sec. – Pedicular
@Pedicular I just added point 5 to my answer below, which suggests that no further optimisation seems likely in the worst case. Not really a proof, but I hope my argument makes sense? – Neckband
@Neckband Yes. After a great deal of analysis and thought I too have come to the conclusion that it is not possible to beat the n^2*m complexity. I was expecting a mathematical proof of that, but I think some intuition and analysis will suffice. That's why I have marked your answer as correct :). It presented sufficient analysis for the maximum optimization possible. Thank you! – Pedicular

1. Trivial solution

The trivial algorithm (which you already discovered but did not post) is to take all (n choose 2) combinations of the n rows, OR each pair, and see if the result is all 1's. This is O(n^2 * m). The code would look like:

for (i = 0; i < n; ++i)
  for (j=i+1; j < n; ++ j) {
    try OR of row[i] with row[j] to see if it works, if so record (i,j)
  }

2. Constant speedup

You can improve the running time by a factor of the word size by packing the bits of each row into machine words. This gives the same asymptotics, but in practice roughly a 64x speedup on a 64-bit machine. This has already been noted in the comments above.
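
For illustration, here is a minimal, self-contained sketch of such a bit-packed check (my own code, not part of the original answer; it assumes the matrix is given as an int[][] of 0's and 1's):

import java.util.Arrays;

public class PackedOrCheck {

    // Pack one row of 0/1 values into 64-bit words.
    static long[] pack(int[] row) {
        long[] words = new long[(row.length + 63) / 64];
        for (int c = 0; c < row.length; c++) {
            if (row[c] == 1) {
                words[c / 64] |= 1L << (c % 64);
            }
        }
        return words;
    }

    // Count pairs whose OR covers all m columns; still O(n^2 * m), but the
    // inner check does m/64 word operations instead of m bit operations.
    static int countPairs(int[][] matrix) {
        int n = matrix.length;
        int m = matrix[0].length;
        long[][] packed = new long[n][];
        for (int i = 0; i < n; i++) {
            packed[i] = pack(matrix[i]);
        }
        // The "all ones" pattern, with the last (possibly partial) word masked.
        long[] full = new long[(m + 63) / 64];
        Arrays.fill(full, -1L);
        if (m % 64 != 0) {
            full[full.length - 1] = (1L << (m % 64)) - 1;
        }
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                boolean ok = true;
                for (int w = 0; w < full.length && ok; w++) {
                    ok = (packed[i][w] | packed[j][w]) == full[w];
                }
                if (ok) {
                    count++;
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] matrix = {
            { 1, 0, 1, 0, 1 },
            { 0, 1, 0, 0, 1 },
            { 1, 1, 1, 1, 0 }
        };
        System.out.println(countPairs(matrix)); // prints 2 for the question's example
    }
}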

3. Heuristic speedup

We can use heuristics to further improve the running time in practice, but with no asymptotic guarantee. Consider sorting the rows by Hamming weight, smallest weight first and largest weight last (computing the weights and sorting takes O(n*m + n*log n)). Then you only need to compare low-weight rows against high-weight rows: specifically, the two weights need to sum to at least m. The search would then look something like this:

for (i = 0; i < n; ++i)
  for (j=n-1; j > i; --j) /* go backwards to take advantage of hmwt */
  {
    if ( (hmwt(row[i]) + hmwt(row[j])) < m)
      break;
    try OR of row[i] with row[j] to see if it works, if so record (i,j)
  }

4. Towards a better approach

Another approach that may offer better returns is to choose a column of low Hamming weight. Then split the rows into two groups: those with a 1 in that column (group A) versus those with a 0 in that column (group B). You then only need to consider combinations of rows where one comes from group A and the other comes from group B, or where both come from group A (thanks @ruakh for catching my oversight). Something along these lines should help a lot. This is still asymptotically the same worst case, but it should be faster in practice (assuming we are not in the case where all combinations are the answer).
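
As a rough illustration of this grouping idea, here is a minimal sketch (again my own code with hypothetical names, not from the original answer; it picks the column with the fewest 1's and only checks the A x A and A x B pairs):

import java.util.ArrayList;
import java.util.List;

public class GroupByLowWeightColumn {

    // Returns true if the OR of the two rows has a 1 in every column.
    static boolean orIsAllOnes(int[] a, int[] b) {
        for (int c = 0; c < a.length; c++) {
            if (a[c] == 0 && b[c] == 0) return false;
        }
        return true;
    }

    static int countPairs(int[][] matrix) {
        int n = matrix.length;
        int m = matrix[0].length;

        // Pick the column with the fewest 1's (i.e. lowest Hamming weight).
        int best = 0, bestOnes = Integer.MAX_VALUE;
        for (int c = 0; c < m; c++) {
            int ones = 0;
            for (int[] row : matrix) ones += row[c];
            if (ones < bestOnes) { bestOnes = ones; best = c; }
        }

        // Group A: rows with a 1 in that column; group B: rows with a 0 there.
        List<int[]> a = new ArrayList<>(), b = new ArrayList<>();
        for (int[] row : matrix) (row[best] == 1 ? a : b).add(row);

        // A valid pair cannot have both rows in B, so only check A x A and A x B.
        int count = 0;
        for (int i = 0; i < a.size(); i++) {
            for (int j = i + 1; j < a.size(); j++) {
                if (orIsAllOnes(a.get(i), a.get(j))) count++;
            }
            for (int[] rowB : b) {
                if (orIsAllOnes(a.get(i), rowB)) count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[][] matrix = { { 1, 0, 1, 0, 1 }, { 0, 1, 0, 0, 1 }, { 1, 1, 1, 1, 0 } };
        System.out.println(countPairs(matrix)); // prints 2
    }
}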

5. The limits of what can be done

It is easy to construct examples where the number of row pairs that work is O(n^2), and therefore it feels very hard to beat the O(m*n^2) worst case. What we should be seeking is a solution whose cost is somehow related to the number of pairs that work; the heuristics above go in that direction. If there is a column with small Hamming weight h, then point 4 above brings the running time down to O(h*n*m + h^2*m). If h is significantly smaller than n, you get a big improvement.

Neckband answered 3/6, 2017 at 21:12 Comment(12)
OP has already said that he expects an algorithm better than what you suggested. – Willtrude
And your packing solution has already been suggested by Shiva. – Perished
@ThijsSteel Yes, I see it in the comments now. – Neckband
But isn't the worst-case scenario still O(n^2*m), since I can make sure every row has a Hamming weight of at least m/2? – Pedicular
@Pedicular Agreed. It is just a heuristic that works in practice, assuming the data is not stacked against us. However, see my last suggestion about grouping by a column with low Hamming weight. It has the same asymptotic worst case, but it is certain to be faster in practice. – Neckband
Re: "Then combine the rows into two groups: those with a 1 in that column (group A) versus those with a 0 in that column (group B). Then you only need to consider combinations of rows where one comes from group A and the other comes from group B": Well, you also need to support the case where both are in A. – Usk
@Neckband I'm not sure you can assume that the solution size implies an O(m*n²) worst case. Take my algorithm on a matrix with no zeros: the solution is still O(n²), but the amount of work should be O(log(n)*m*n). – Perished
@ThijsSteel For a matrix of all 1's, my algorithm works in O(m*n). – Storage
And even a slightly more complicated example, where each row has exactly one zero at a different position, will still be handled more efficiently. – Perished
@ThijsSteel For one zero per row, each in a different position, my algorithm would work in O(n^2 + m*n). – Storage
I totally agree that you can do better than the worst case if the data fits some desired pattern. My expectation is that there is a large portion of cases where the number of solutions is O(n^2), yet the data doesn't fit some simple pattern. – Neckband
The secret is not in "a simple pattern". It's that the question does not ask for the actual combinations, only the number of combinations. Solutions that count the number of combinations without explicitly checking each one can be more efficient. My algorithm would be O(n²) for those special cases too if the actual combinations had to be enumerated. – Perished

Expanding on TheGreatContini's idea:

First try

Let's look at it as finding the pairs that belong to AxB, with A and B sets of rows. These pairs must satisfy the OR condition, but we'll also assume the Hamming weight of a is at least as large as that of b (to avoid counting some pairs twice).

Now split A into A0 (rows that start with 0) and A1 (rows that start with 1). Do the same for B. We have now reduced the problem to three smaller problems: A0xB1, A1xB1 and A1xB0. If A and B are the same, A0xB1 and A1xB0 are the same, so we only need to do one of them. Not only are these three subproblems combined smaller than the original, we've also fully checked the first column and can ignore it from now on.

To solve these subproblems, we can use the same approach, but now with column 2, 3, ... At some point we will either have checked all the columns, or #A and #B will be 1.

Depending on the implementation, stopping sooner might be more efficient. At that point we can do an exhaustive check of the remaining combinations. Remember, though, that if we have already checked k columns, this will only cost m-k per combination.
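
A self-contained sketch of this recursive splitting might look as follows (my own code, not from the original answer; it ignores the Hamming-weight assumption, splits a single set of rows column by column, and falls back to an exhaustive check of the remaining columns once a group is small):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RecursiveSplitCount {

    static int m; // number of columns

    // Exhaustive check of the columns that have not been split on yet.
    static boolean orIsAllOnesFrom(int[] a, int[] b, int col) {
        for (int c = col; c < m; c++) {
            if (a[c] == 0 && b[c] == 0) return false;
        }
        return true;
    }

    // Pairs within a single set whose OR over columns >= col is all ones.
    static long countWithin(List<int[]> rows, int col) {
        int s = rows.size();
        if (s < 2) return 0;
        if (col == m) return (long) s * (s - 1) / 2;
        if (s <= 8) { // small group (arbitrary threshold): check the rest directly
            long count = 0;
            for (int i = 0; i < s; i++)
                for (int j = i + 1; j < s; j++)
                    if (orIsAllOnesFrom(rows.get(i), rows.get(j), col)) count++;
            return count;
        }
        List<int[]> zeros = new ArrayList<>(), ones = new ArrayList<>();
        for (int[] r : rows) (r[col] == 0 ? zeros : ones).add(r);
        // Two rows from "zeros" can never cover this column, so skip that subproblem.
        return countWithin(ones, col + 1) + countCross(zeros, ones, col + 1);
    }

    // Pairs (a, b) with a from A and b from B (A and B disjoint).
    static long countCross(List<int[]> a, List<int[]> b, int col) {
        if (a.isEmpty() || b.isEmpty()) return 0;
        if (col == m) return (long) a.size() * b.size();
        if ((long) a.size() * b.size() <= 16) { // small subproblem: check directly
            long count = 0;
            for (int[] ra : a)
                for (int[] rb : b)
                    if (orIsAllOnesFrom(ra, rb, col)) count++;
            return count;
        }
        List<int[]> a0 = new ArrayList<>(), a1 = new ArrayList<>();
        for (int[] r : a) (r[col] == 0 ? a0 : a1).add(r);
        List<int[]> b0 = new ArrayList<>(), b1 = new ArrayList<>();
        for (int[] r : b) (r[col] == 0 ? b0 : b1).add(r);
        // A0 x B0 is impossible; recurse on the other three subproblems.
        return countCross(a1, b1, col + 1)
             + countCross(a1, b0, col + 1)
             + countCross(a0, b1, col + 1);
    }

    public static void main(String[] args) {
        List<int[]> rows = Arrays.asList(
            new int[] { 1, 0, 1, 0, 1 },
            new int[] { 0, 1, 0, 0, 1 },
            new int[] { 1, 1, 1, 1, 0 });
        m = 5;
        System.out.println(countWithin(rows, 0)); // prints 2
    }
}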

Better column selection

As TheGreatContini suggested, instead of selecting the first column, we can select the column that leads to the smallest subproblems. The cost of finding this column at each step is rather high, but the weights could be calculated once at the beginning and then used as an estimate for the best column. We can then rearrange the columns, run the algorithm as normal, and rearrange them back once it is done.

The exact best column would be the one for which the number of zeros in A times the number of zeros in B is maximal.
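
For concreteness, a small self-contained helper that picks such a column could look like this (a hypothetical sketch of mine, assuming each group is given as a List<int[]> of rows):

import java.util.Arrays;
import java.util.List;

public class BestColumnPicker {

    // Pick the column whose zero counts in A and B have the largest product,
    // i.e. the split that rules out the most candidate pairs at once.
    static int bestColumn(List<int[]> a, List<int[]> b, int m) {
        int best = 0;
        long bestScore = -1;
        for (int c = 0; c < m; c++) {
            long zerosA = 0, zerosB = 0;
            for (int[] r : a) if (r[c] == 0) zerosA++;
            for (int[] r : b) if (r[c] == 0) zerosB++;
            long score = zerosA * zerosB;
            if (score > bestScore) {
                bestScore = score;
                best = c;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<int[]> a = Arrays.asList(new int[] { 1, 0, 1 }, new int[] { 1, 1, 0 });
        List<int[]> b = Arrays.asList(new int[] { 0, 1, 1 }, new int[] { 0, 1, 0 });
        // prints 2: column 2 has one zero in each group, the largest product
        System.out.println(bestColumn(a, b, 3));
    }
}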

Hamming weight pruning

We know that the sum of the Hamming weights of a and b must be at least m. And since we assumed a to have the higher Hamming weight, we can discard every candidate for a whose Hamming weight is less than m/2 (the speedup this gives might be negligible, I'm not sure). Calculating all the Hamming weights costs O(m*n).

Efficient Splitting

If we sort the rows (lexicographically by their bits), splitting into groups can be done much faster using bisection. This also leads to an efficient representation of the sets in memory: we can just store the minimum and maximum row index of each group. Sorting can be done in O(n*m*log(n)), and each split then takes about log(n).

Here's a sketch of the bisection step (using a List<List<Integer>> representation) that should give the right idea:

import java.util.List;

public class RowSplitter {
    // The rows, sorted so that within the range being split, all rows with a 0
    // in the current column come before all rows with a 1 in that column.
    private List<List<Integer>> rows;

    // Returns the index of the first row in [start, end] that has a 1 in the
    // given column, or -1 if there is none; this is where the range splits
    // into the 0-group and the 1-group.
    public int findFirstOne(int column, int start, int end){
        if(rows.get(start).get(column) == 1) return start;
        if(rows.get(end).get(column) == 0) return -1;
        while(start < end){
            int mid = (start+end)/2;
            if(rows.get(mid).get(column) == 0){
                start = mid+1;
            }else{
                end = mid;
            }
        }
        return start;
    }
}

Complexity

In the following calculations the effect of the better column selection is ignored, as it gives little improvement on worst-case efficiency. However, in average cases it can give a massive improvement by reducing the search space as soon as possible and thereby making the other checks faster.

The algorithm's run time is bounded by n²m. However, the worst examples I have found are all O(n*log(n)*m).

First, sorting the rows of the matrix costs O(n*log(n)*m), and optionally sorting the columns costs O(n*m + m*log(m)).

Then there is the creation of the subproblems. Let's make an overestimate first. We need to subdivide at most m times, and the cost of a full level of subdivisions at depth i can be overestimated as log(n)*3^i (cost per subdivision times the number of subdivisions). This leads to O(log(n)*3^m) in total.

It must also hold that 3^i <= n²/2, because this is the maximum number of combinations possible, so for large m it caps at O(n²*log(n)*m). I struggle to find an example that actually behaves like this, though.

I think it is reasonable to assume that many of the subproblems become trivial very early, leading to O(log(n)*m*n) (if someone wants to check this, I'm not really sure about it).

Perished answered 4/6, 2017 at 0:39 Comment(0)

Here's an off-the-wall idea that might have even worse asymptotic (or even average-case) behavior -- but it generalizes in an interesting way, and at least offers a different approach. This problem can be viewed as an exact cover problem. Each of the n rows corresponds to a set S of values from {1,2,...,m}, namely the column indices at which the row has the value 1. The task is to find a collection of rows whose sets form a disjoint partition of {1,2,...,m}. When there are only two such rows in an exact cover, these rows are binary opposites of the kind that you are looking for. However, more complicated exact covers are possible, such as one involving three rows:

0 1 0 0 1
1 0 0 0 0
0 0 1 1 0

The exact cover problem looks for all such exact covers, and is an NP-complete problem. The canonical solution is Algorithm X, created by Donald Knuth.

Hull answered 4/6, 2017 at 1:33 Comment(0)

If I'm not mistaken, the following should be O(n*m):

  • For each column, compute the set of indices of rows that have a "1" at this column, and store this as a mapping from the column index to the set of row indices
  • For each row, compute the set of row indices that could "complete" the row (by adding "1"s in the columns where the row has a "0"). This can be done by computing the intersection of the sets that have been computed in step 1 for the respective columns
  • Count the completing row indices

For your example:

1 0 1 0 1
0 1 0 0 1
1 1 1 1 0
  1. The indices of the rows that have a "1" at each column are

    • Column 0: [0, 2]
    • Column 1: [1, 2]
    • Column 2: [0, 2]
    • Column 3: [2]
    • Column 4: [0, 1]
  2. The intersections of these sets over the "0" columns of each row (skipping row indices that were already processed, to avoid counting a pair twice) are

    • Row 0: [2]
    • Row 1: [2]
    • Row 2: []

Which is 2 in total.

The main reason why one could argue about the running time is that computing the intersection of m sets, each of size at most n, could be considered to be O(m*n). But I think that the sizes of these sets will be limited: the entries are either 1's or 0's, and when there are many 1's (and the sets are large), then there are fewer sets to intersect, and vice versa - but I didn't do a rigorous proof here...


A Java-based implementation that I used for playing around with this (and for some basic "tests"):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Random;
import java.util.Set;

public class SetOrCombinations
{
    public static void main(String[] args)
    {
        List<Integer> row0 = Arrays.asList(1, 0, 1, 0, 1);
        List<Integer> row1 = Arrays.asList(1, 1, 0, 0, 1);
        List<Integer> row2 = Arrays.asList(1, 1, 1, 1, 0);
        List<Integer> row3 = Arrays.asList(0, 0, 1, 1, 1);
        List<List<Integer>> rows = Arrays.asList(row0, row1, row2, row3);
        run(rows);

        for (int m = 2; m < 10; m++)
        {
            for (int n = 2; n < 10; n++)
            {
                run(generateRandomInput(m, n));
            }
        }
    }

    private static void run(List<List<Integer>> rows)
    {
        int m = rows.get(0).size();
        int n = rows.size();

        // For each column i: 
        // Compute the set of rows that "fill" this column with a "1"
        Map<Integer, List<Integer>> fillers =
            new LinkedHashMap<Integer, List<Integer>>();
        for (int i = 0; i < m; i++)
        {
            for (int j = 0; j < n; j++)
            {
                List<Integer> row = rows.get(j);
                List<Integer> list = 
                    fillers.computeIfAbsent(i, k -> new ArrayList<Integer>());
                if (row.get(i) == 1)
                {
                    list.add(j);
                }
            }
        }

        // For each row, compute the set of rows that could "complete"
        // the row (by adding "1"s in the columns where the row has
        // a "0"). 
        int count = 0;
        Set<Integer> processedRows = new LinkedHashSet<Integer>();
        for (int j = 0; j < n; j++)
        {
            processedRows.add(j);
            List<Integer> row = rows.get(j);
            Set<Integer> completers = new LinkedHashSet<Integer>();
            for (int i = 0; i < n; i++)
            {
                completers.add(i);
            }
            for (int i = 0; i < m; i++)
            {
                if (row.get(i) == 0)
                {
                    completers.retainAll(fillers.get(i));
                }
            }
            completers.removeAll(processedRows);
            count += completers.size();
        }

        System.out.println("Count "+count);
        System.out.println("Ref.  "+bruteForceCount(rows));
    }

    // Brute force
    private static int bruteForceCount(List<List<Integer>> lists)
    {
        int count = 0;
        int n = lists.size();
        for (int i = 0; i < n; i++)
        {
            for (int j = i + 1; j < n; j++)
            {
                List<Integer> list0 = lists.get(i);
                List<Integer> list1 = lists.get(j);
                if (areOne(list0, list1))
                {
                    count++;
                }
            }
        }
        return count;
    }

    private static boolean areOne(List<Integer> list0, List<Integer> list1)
    {
        int n = list0.size();
        for (int i=0; i<n; i++)
        {
            int v0 = list0.get(i);
            int v1 = list1.get(i);
            if (v0 == 0 && v1 == 0)
            {
                return false;
            }
        }
        return true;
    }


    // For testing
    private static Random random = new Random(0);
    private static List<List<Integer>> generateRandomInput(int m, int n)
    {
        List<List<Integer>> rows = new ArrayList<List<Integer>>();
        for (int i=0; i<n; i++)
        {
            List<Integer> row = new ArrayList<Integer>();
            for (int j=0; j<m; j++)
            {
                row.add(random.nextInt(2));
            }
            rows.add(row);
        }
        return rows;
    }

}
Hb answered 4/6, 2017 at 3:29 Comment(3)
I think you have a few mistakes in your worked example, but setting those aside... I think this is still worst-case O(mn²). To see why, consider the case where each row is filled two-thirds with 1s and one-third with 0s, and likewise for each column. – Usk
@Usk You're right, I had edited the given example manually for some tests. Fixed it now. Beyond that: again, I did not do a rigorous proof, and this might be hard. My estimation was roughly based on the fact that there are no "obvious n*n*m loops" involved, and subject to the disclaimer that the actual running time might depend on the complexity of the set operations (which, in turn, depends on the bit patterns). Or to be even clearer: I am not sure whether this is O(nm), and I wonder if one can prove that an O(nm) solution exists at all... – Hb
If we assume randomly distributed ones and zeros, then the running time should be about O(n^2*(m-k)/m*k), with k the average number of zeros per row. This means the algorithm will do well for very low or high values of k, but it still costs about n^2*m/4 for values in between. – Perished

Here is an algorithm that takes advantage of the fact that two rows with a zero in the same column are automatically disqualified as partners. The fewer zeros the current row has, the fewer other rows we visit; and the more zeros there are overall, the fewer other rows we visit.

create two sets, one with a list of indexes of all rows, and the other empty
assign a variable, total = 0

Iterate over each row from right to left, from the bottom row to the top (could be in another order, too; I just pictured it that way).

while row i is not the first row:
  call the non-empty set A and the empty set dont_match
  remove i, the index of the current row, from A

  traverse row i:
    if A is empty:
      stop the traversal
    if a zero is encountered:
      traverse up that column, visiting only rows listed in A:
        if a zero is encountered:
          move that row index from A to dont_match

  the remaining indexes in A point to row partners to row i
  add their count to total and move the elements from the
  shorter of A and dont_match to the other set

return total
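
A rough Java translation of this idea (my own sketch, not from the original answer; the names and the int[][] representation are assumptions, and the "move whichever of A and dont_match is shorter" optimization is simplified to always merging dont_match back into A):

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class ZeroColumnElimination {

    // Count row pairs whose OR is all 1's: whenever the current row has a 0 in
    // some column, every remaining candidate with a 0 in that column is dropped.
    static long countPairs(int[][] matrix) {
        int n = matrix.length;
        int m = matrix[0].length;

        Set<Integer> a = new HashSet<>();          // candidate partner rows
        for (int i = 0; i < n; i++) a.add(i);

        long total = 0;
        for (int i = n - 1; i > 0; i--) {          // bottom row up to (not including) the top row
            a.remove(i);                           // a row can't pair with itself
            Set<Integer> dontMatch = new HashSet<>();
            for (int c = m - 1; c >= 0 && !a.isEmpty(); c--) { // traverse row i right to left
                if (matrix[i][c] != 0) continue;
                Iterator<Integer> it = a.iterator();
                while (it.hasNext()) {
                    int j = it.next();
                    if (matrix[j][c] == 0) {       // shared zero column: disqualified
                        it.remove();
                        dontMatch.add(j);
                    }
                }
            }
            total += a.size();                     // survivors are valid partners for row i
            a.addAll(dontMatch);                   // restore the candidate set for the next row
        }
        return total;
    }

    public static void main(String[] args) {
        int[][] matrix = { { 1, 0, 1, 0, 1 }, { 0, 1, 0, 0, 1 }, { 1, 1, 1, 1, 0 } };
        System.out.println(countPairs(matrix)); // prints 2
    }
}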
Saxony answered 4/6, 2017 at 16:8 Comment(0)
