How to calculate the cost of an algorithm?

I have to calculate the cost of this algorithm.

I was thinking about an exponential cost. I tried the recurrence relation T(n) = 4*T(n/4) + c*n, and at the end I got ((4^n) - 1)/3.

Is that correct? Are there other methods to calculate it?

int m(int[][] a) {
    return m1(a, 0, a.length - 1, 0, a[0].length - 1);
}

int m1(int[][] a, int l1, int l2, int c1, int c2) {
    if (c1 > c2 || l1 > l2) return 0;              // empty block: contributes nothing
    if (c1 == c2 && l1 == l2) return a[l1][c1];    // single cell: base case
    int c = (c1 + c2) / 2,                         // column split point
        l = (l1 + l2) / 2;                         // row split point

    return m1(a, l1, l, c1, c) +                   // top-left quadrant
           m1(a, l1, l, c + 1, c2) +               // top-right quadrant
           m1(a, l + 1, l2, c1, c) +               // bottom-left quadrant
           m1(a, l + 1, l2, c + 1, c2);            // bottom-right quadrant
}
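
For reference, m1 splits the matrix into four quadrants, recurses on each, and adds the results, so m(a) is simply the sum of all elements of a. A quick check, assuming the two methods above are in scope (e.g. in the same class):

int[][] a = { {1, 2, 3}, {4, 5, 6} };
System.out.println(m(a));  // prints 21, the sum of all six elements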
Gomar answered 2/7, 2018 at 9:50 Comment(5)
How do you call your function? Do you call m or m1?Piceous
I need the cost of mGomar
That basically is the cost of m1Gomar
I wonder how to express that "merge" part's (f(n)'s) complexity, i.e. the addition of the 4 values returned by the recursive calls, in terms of n. On the top level there are 4 addends, on the 2nd level there are 4^2 addends, on the Nth level there are 4^N addends, with N being at most log4(n), which means that there are exactly n addends on the lowest level. So if we accept the complexity of the additions as O(n), then T(n) = 4T(n/4) + O(n) is correct, but we cannot apply the master theorem here because f(n)'s complexity is not theta(n), it is lowerPerceval
Basically we could get away without even applying the master theorem: there are log4(n) levels in this recursion, and since the maximum number of addends on a single level is n, the complexity becomes O(n * log4(n))Perceval

As far as I can tell: at each stage, we divide the problem space into four parts. To combine the results from the sub-problems, we add four integers together, which takes constant time, O(1).

Assume n is the height of the matrix a in the question and m is its width.

We can visualize the recursion as a 4-ary tree with n x m leaf nodes and height log4(nm). The total number of nodes in the tree is O(nm), so the complexity is O(nm).

Without parallelism, this recursive algorithm shouldn't be faster than a sequential algorithm on the matrix.
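
A quick way to sanity-check the O(nm) node count is to count calls empirically. The harness below is a sketch (CountCalls is a made-up name, and the calls counter is my addition to the question's m1):

public class CountCalls {
    static long calls = 0;

    static int m1(int[][] a, int l1, int l2, int c1, int c2) {
        calls++;  // count every node of the recursion tree
        if (c1 > c2 || l1 > l2) return 0;
        if (c1 == c2 && l1 == l2) return a[l1][c1];
        int c = (c1 + c2) / 2, l = (l1 + l2) / 2;
        return m1(a, l1, l, c1, c) + m1(a, l1, l, c + 1, c2)
             + m1(a, l + 1, l2, c1, c) + m1(a, l + 1, l2, c + 1, c2);
    }

    public static void main(String[] args) {
        for (int n : new int[] { 4, 16, 64, 256 }) {
            calls = 0;
            m1(new int[n][n], 0, n - 1, 0, n - 1);
            // the ratio settles near 4/3, i.e. the call count is linear in n*m
            System.out.println(n + "x" + n + ": " + calls
                    + " calls, calls/(n*n) = " + (double) calls / (n * n));
        }
    }
}

The ratio tends to 4/3 because a full 4-ary tree with nm leaves has (nm - 1) / 3 internal nodes, giving (4nm - 1) / 3 nodes in total.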

Stenotype answered 8/1, 2021 at 9:48 Comment(0)

A divide-and-conquer recurrence relation takes the form:

T(n) = A * T(n / B) + O(f(n))

Where A * T(n / B) represents the recursive calls to the function and O(f(n)) represents the work done during each call to the function.

For this case, you can clearly divide the code into two components:

  1. The work done during the call to the function:
if(c1 > c2 || l1 > l2) return 0;
if(c1 == c2 && l1 == l2) return a[l1][c1];
int c = (c1+c2)/2,
    l = (l1+l2)/2;
  2. And the recursive calls to the function:
  return m1(a, l1, l, c1, c) + 
         m1(a, l1, l, c+1, c2) + 
         m1(a, l+1, l2,c1, c) + 
         m1(a, l+1, l2, c+1, c2);

Take n = |a|, the total number of elements in a. Each recursive call operates on a disjoint quadrant of the matrix passed to it, and you make 4 such recursive calls, so your recurrence relation starts like this:

T(n) = 4 * T(n / 4) ...

Then, you readjust your attention to how much work is done in each call to the function. In this case, the work is O(1). Recall that we evaluate the Big O membership of a function based on the size of its input. In this case, the size of the input is the size of the matrix a, which we have quantified as n. So the question you should ask yourself as you try to assess how much work is done in each call to this function is: as n becomes arbitrarily large, how does this specific operation's cost change with it?

Let's answer that question for each of the operations:

if(c1 > c2 || l1 > l2) return 0;

I think that most computer science professors who are teaching introductory algorithms and/or data structures will say that as the size of a increases, the cost of doing this math will remain the same. I tend to agree with this philosophy since folks are just learning this stuff for the first time.

However, some folks will say that the cost of these arithmetic operations is actually O(m), where m is the number of bits in the larger of the two numbers. (Using m because n is taken.) This is technically true, and I'll explain the "answer" for both approaches below.

Assuming that basic arithmetic is O(1)...

In this case, all of your index math takes O(1) time, so your recurrence relation becomes:

T(n) = 4 * T(n / 4) + O(1)

That works out to O(n) time. If you break n down into a's dimensions, it's O(hw), where h is a's height and w is a's width. Easy enough, right?
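
To see why, here is a sketch of unrolling the recurrence by hand, writing the O(1) term as a constant c:

T(n) = 4 * T(n/4) + c
     = 16 * T(n/16) + 4c + c
     = ...
     = 4^k * T(n/4^k) + c * (4^(k-1) + ... + 4 + 1)

The recursion bottoms out after k = log4(n) levels, where 4^k = n, so the geometric sum is c * (4^k - 1) / 3 = c * (n - 1) / 3 and T(n) = n * T(1) + O(n) = O(n). (That (4^k - 1) / 3 shape is also where the question's formula comes from; the exponent is the recursion depth log4(n), not n, which is why the cost is linear rather than exponential.)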

Assuming that basic arithmetic is O(m)...

We've entered a whole new layer of complexity -- congratulations! To move on from here, we need to think about what m represents. As noted above, m is the number of bits in the larger of the two numbers in any addition or subtraction operation. (Addition and subtraction underpin the >, <, ==, &&, and || operations.) The question we need to ask: As n grows arbitrarily large, how will the number of bits in these indices change with it?

Obviously, if n is really big, then you'll need numbers with more bits to express the indices to access the n values in a. Conversely, if n remains very small, you'll require a much smaller number of bits to represent your indices. But what exactly is the asymptotic relationship between n and m?

Well, suppose a were a 1 by n matrix. Then you'd need indices that can go all the way up to n - 1, correct? Since we're using bits, that means our indices will need on the order of log(n) bits (there is a cool way to compute the exact number of bits needed, but that's out of scope here).

So, we can be a little pessimistic (i.e. slightly overestimate) and say that given any matrix a with magnitude n, we need all indices to have O(log n) bits. In other words, m = O(log n). That means that each basic arithmetic operation (addition, subtraction, bit-wise operations) would actually cost O(m) = O(log n) time. So now your recurrence relation becomes:

T(n) = 4 * T(n / 4) + O(log n)

... which still evaluates to O(n) time! (Also known as O(hw) as stated earlier.)
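
A sketch of why the O(log n) term doesn't change the answer: level i of the recursion tree has 4^i calls, each on a subproblem of size n / 4^i, so the total work is

sum over i = 0 .. log4(n) of  4^i * log(n / 4^i)
  = sum over j = 0 .. log4(n) of  (n / 4^j) * j * log(4)    (substituting j = log4(n) - i)
  = O(n)                                                    (since the series j / 4^j converges)

So even charging O(log n) for every arithmetic operation, the total remains linear in n.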

Hope this helps! :)

Microcosm answered 5/10, 2022 at 17:11 Comment(0)
