A recurrence relation takes the form:
T(n) = A * T(n / B) + O(f(n))
where A * T(n / B) represents the recursive calls to the function and O(f(n)) represents the work done during each call to the function.
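For calibration, a familiar example that isn't part of the code analyzed here: merge sort makes A = 2 recursive calls on halves of the input (so B = 2) and does O(n) work merging, giving

$$T(n) = 2\,T(n/2) + O(n),$$

which solves to the familiar O(n log n).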
For this case, you can clearly divide the code into two components:
- The work done during the call to the function:
if(c1 > c2 || l1 > l2) return 0;
if(c1 == c2 && l1 == l2) return a[l1][c1];
int c = (c1+c2)/2,
l = (l1+l2)/2;
- And the recursive calls to the function (both pieces are assembled into a complete sketch just after this list):
return m1(a, l1, l, c1, c) +
m1(a, l1, l, c+1, c2) +
m1(a, l+1, l2, c1, c) +
m1(a, l+1, l2, c+1, c2);
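Put together, the pieces above might look something like the following complete function. This is a minimal sketch under my own assumptions (a `std::vector<std::vector<int>>` matrix, and that `m1` sums the elements of the sub-matrix spanning rows `l1..l2` and columns `c1..c2`); the original signature may differ.

```cpp
#include <vector>

// Sketch only: assumes m1 sums the sub-matrix bounded by rows [l1, l2]
// and columns [c1, c2], both inclusive.
int m1(const std::vector<std::vector<int>>& a, int l1, int l2, int c1, int c2) {
    // Work done during each call: comparisons and index arithmetic.
    if (c1 > c2 || l1 > l2) return 0;             // empty sub-matrix
    if (c1 == c2 && l1 == l2) return a[l1][c1];   // single cell: base case
    int c = (c1 + c2) / 2,
        l = (l1 + l2) / 2;

    // Recursive calls: four disjoint quadrants of the current sub-matrix.
    return m1(a, l1, l, c1, c) +
           m1(a, l1, l, c + 1, c2) +
           m1(a, l + 1, l2, c1, c) +
           m1(a, l + 1, l2, c + 1, c2);
}
```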
Take n = |a|, the total number of entries in the matrix. When you make your recursive calls, each call operates on a disjoint quadrant of the matrix passed to it, i.e. on roughly n / 4 entries. You make 4 such recursive calls, so your recurrence relation starts like this:
T(n) = 4 * T(n / 4) ...
Then, you turn your attention to how much work is done in each call to the function. In this case, the work is O(1). Recall that we evaluate the Big O membership of a function based on the size of its input. In this case, the size of the input is the size of the matrix a, which we have quantified as n. So the question you should ask yourself as you try to assess how much work is done in each call to this function is, "As n becomes arbitrarily large, how does this specific operation's cost change with it?"
Let's answer that question for each of the operations:
if(c1 > c2 || l1 > l2) return 0;
I think that most computer science professors who are teaching introductory algorithms and/or data structures will say that as the size of a increases, the cost of doing this math will remain the same. I tend to agree with this philosophy since folks are just learning this stuff for the first time.
However, some folks will say that the cost of these arithmetic operations is actually O(m), where m is the number of bits in the larger of the two numbers. (Using m because n is taken.) This is technically true, and I'll explain the "answer" for both approaches below.
Assuming that basic arithmetic is O(1)...
In this case, all of your index math takes O(1) time, so your recurrence relation becomes:
T(n) = 4 * T(n / 4) + O(1)
which works out to O(n) time. If you break n down into a's dimensions, that's O(hw), where h is a's height and w is a's width. Easy enough, right?
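If you want to see where that O(n) comes from, one standard way to check it (not specific to this answer) is the Master theorem. With a = 4 subproblems, each of size n / 4, and f(n) = O(1):

$$\log_b a = \log_4 4 = 1, \qquad f(n) = O(1) = O\!\left(n^{1 - \varepsilon}\right) \text{ for any } 0 < \varepsilon \le 1,$$

so case 1 applies and

$$T(n) = \Theta\!\left(n^{\log_b a}\right) = \Theta(n).$$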
Assuming that basic arithmetic is O(m)...
We've entered a whole new layer of complexity -- congratulations! To move on from here, we need to think about what m represents. As noted above, m is the number of bits in the larger of the two numbers in any addition or subtraction operation. (The comparison operators >, <, and == boil down to subtraction under the hood, and && and || just combine the resulting booleans.) The question we need to ask: as n grows arbitrarily large, how will the number of bits in these indices change with it?
Obviously, if n is really big, then you'll need numbers with more bits to express the indices to access the n values in a. Conversely, if n remains very small, you'll require a much smaller number of bits to represent your indices. But what exactly is the asymptotic relationship between n and m?
Well, suppose a were a 1 by n matrix. Then you'd need indices that can go all the way up to n - 1, correct? Since we're using bits, that means our indices will need to have at least O(log n) bits (there is a cool way to compute the exact number of bits needed, but that's out of scope here).
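If you'd like to convince yourself of that relationship numerically, here is a tiny standalone sketch (mine, not part of the original code) that prints how many bits the largest index, n - 1, needs for a few values of n:

```cpp
#include <cstdio>

// Number of binary digits needed to store x; grows like log2(x).
int bits_needed(unsigned long long x) {
    int bits = 0;
    while (x > 0) { x >>= 1; ++bits; }  // count binary digits of x
    return bits == 0 ? 1 : bits;        // even index 0 takes one bit to store
}

int main() {
    for (unsigned long long n : {16ULL, 1024ULL, 1048576ULL, 1073741824ULL}) {
        std::printf("n = %llu -> largest index %llu needs %d bits\n",
                    n, n - 1, bits_needed(n - 1));
    }
}
```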
So, we can be a little pessimistic (i.e. slightly overestimate) and say that given any matrix a with magnitude n, we need all indices to have O(log n) bits. In other words, m = O(log n). That means that each basic arithmetic operation (addition, subtraction, bit-wise operations) would actually cost O(m) = O(log n) time. So now your recurrence relation becomes:
T(n) = 4 * T(n / 4) + O(log n)
... which still evaluates to O(n) time! (Also known as O(hw), as stated earlier.)
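In case the extra log factor looks suspicious: the Master theorem still lands in case 1, because log n grows more slowly than any positive power of n. Sketching it:

$$\log_b a = \log_4 4 = 1, \qquad \log n = O\!\left(n^{1/2}\right) = O\!\left(n^{1 - \varepsilon}\right) \text{ with } \varepsilon = \tfrac{1}{2},$$

$$\Rightarrow\; T(n) = 4\,T(n/4) + O(\log n) = \Theta(n).$$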
Hope this helps! :)