Time complexity of the given recursive function seems to be O(1/n) to me; is that even possible?

I think the time complexity of the code below should be O(1), since the worst case can be log 1000 base 2 or something definite. But I am not sure, as its runtime does vary with the input, and the given answer is O(n); I am very confused about how they got that. If we increase n, the function gets called fewer times, so is it O(1/n)? Is it even possible?

#include <iostream>
using namespace std;

#define LIMIT 1000

void fun2(int n)
{
    if (n <= 0)
        return;
    if (n > LIMIT)
        return;
    cout << n << " ";
    fun2(2 * n);
    cout << n << " ";
}
Laic answered 13/7, 2022 at 12:57 Comment(10)
You have O(logN), not O(1/N). O(1/N) would mean a bigger N makes the code faster, and that doesn't happen here. It takes you more "iterations" to get to an N of 1000 than it does for an N of 100.Selfanalysis
this is O(1). Or O(log2(1000)). Or O(log2(LIMIT))Sextuplet
@Laic What does O( 1/n ) mean?!Calvert
It is O(1) since it makes at most ten calls. It is the same as void fun(int n) { if (n <= 0) return; if (n > 10) return; fun(n+1); }Markowitz
@Markowitz But doesn't O(1) mean the time does not depend on input size? The given function does vary with input size up to a certain limit.Laic
@VladfromMoscow I am not sure; I was just trying to make sense of the interpretation that the algorithm's time decreases with the size of the input.Laic
@appleapple But doesn't O(1) mean the time does not depend on input size? The given function does vary with input size up to a certain limit.Laic
"If we increase n, function gets called fewer times" how do you arrive at that conclusion?Mandiemandingo
O(1) doesn't mean the time does not depend on input size; it means there is some value n' above which the time does not depend on input size. In this case n' is log2(1000).Correlative
Are you sure the question doesn't ask about the time complexity being O(log LIMIT)?Elsey

#define LIMIT 1000 along with the base case of if (n > LIMIT) return; guarantees O(1) because it puts a ceiling on the number of recursive calls the function can make.

Even if this were #define LIMIT 10e50, it'd still be O(1).

Recall that Big O is concerned with theoretical growth, not with how much work is to be done in practice. If you have a cap on how much the function can grow, regardless of how large that cap may be, it's a constant time operation from a complexity perspective.

Is Big O necessarily a realistic reflection of the work the algorithm does? No. Big O is a scalability heuristic, not the final word on efficiency. All O(1) says here is that once n > LIMIT, you can increase n indefinitely with no additional cost. In the real world, constant factors often matter.
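
To make the ceiling concrete, here's a rough sketch (the countCalls helper and the driver below are illustrative additions, not part of the original code) that uses the same recursion as fun2 but counts the calls instead of printing:

#include <initializer_list>
#include <iostream>

#define LIMIT 1000

// Same shape of recursion as fun2, but returns how many calls were made
// instead of printing, so the ceiling is easy to observe.
int countCalls(int n)
{
    if (n <= 0)
        return 1;                  // one call, then stop
    if (n > LIMIT)
        return 1;                  // ceiling hit: one call, then stop
    return 1 + countCalls(2 * n);  // one call plus the cost of the doubled input
}

int main()
{
    // The count never exceeds ceil(log2(LIMIT)) + 1 = 11, no matter how big n gets.
    for (int n : {1, 10, 100, 1000, 2000, 1000000})
        std::cout << "n = " << n << " -> " << countCalls(n) << " calls\n";
    return 0;
}

For n = 1 it reports 11 calls, for n = 1000 it reports 2, and for anything above LIMIT it reports 1; the count never climbs past that ceiling however large n gets.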

Forcer answered 13/7, 2022 at 15:6 Comment(3)
Well described and an interesting point. Even if we remove the limit check, it will still exit because of overflow in 31/32 steps. Generally, should we consider all similar, actually-programmed code to be O(1) then?Smoothie
Excellent question. I don't think so, because integer size is an implementation detail of the environment rather than an inherent characteristic of the algorithm. The same algorithm in Python wouldn't overflow. Practically speaking, everything is O(1) when seen through real-world space limits. But in this code, LIMIT is very explicitly present, and it's a low value to boot, calling attention to itself. If LIMIT didn't exist, it'd be O(log(n)) regardless of the integer size (O(n) is clearly incorrect either way).Forcer
The 10e50 is for illustrative purposes, but practically speaking, it wouldn't factor in since no workload would be that large. If the limit is that huge, O(log(n)) seems fair too. But if so, where do you draw the line? Regardless of how you work it out and regardless of the limit value, this is a very efficient algorithm IRL, with the bottleneck being cout (a slow I/O operation). I should say I'm not a CS theorist, so I'm sure some of the experts at cs.stackexchange.com would have better insights on all of this.Forcer

To respond to the individual points you've raised:

I think the time complexity of the code below should be O(1), since the worst case can be log 1000 base 2 or something definite.

Yep, that's exactly right!

But I am not sure, as its runtime does vary with the input

You are correct that the runtime varies with the input size. However, that does not necessarily mean that the runtime is not O(1). If an algorithm's runtime is always bounded from above by some constant, regardless of what the input size is, then its runtime is O(1). Stated differently, an O(1) runtime means "without even looking at your input, I can bound how long the algorithm is going to take to run." (Technically that isn't 100% accurate, since big-O notation talks about what happens for sufficiently large inputs, but it's true in this case.)

Here's another example of this:

void sillyFunction(int n) {
    for (int i = 0; i < 137 && i < n; i++) {
        cout << '*' << endl;
    }
}

We can guarantee that the loop will run at most 137 times regardless of what n is. However, for small values of n, we may do less work than this. But the runtime here is still O(1), since we have that bound of "at most 137 iterations."

Here's another example:

void amusingFunction(int n) {
    for (int i = 136; i >= 0 && i >= n; i--) {  // counts down, so larger n means fewer iterations
        cout << '*' << endl;
    }
}

Again, this loop is guaranteed to run at most 137 times. Here, though, the work decreases as we increase n, to the point where the loop never runs when n ≥ 137. But since we can bound the total number of loop iterations by 137 without even looking at n, the runtime is O(1).

Here's a trickier example:

void deviousFunction(int n) {
    if (n <= 137) {
         while (true) { // infinite loop!
             cout << '*';
         }
    }
    cout << "Yup." << endl;
}

This function will go into an infinite loop for any n ≤ 137. However, for sufficiently large values of n (namely, when n > 137), the algorithm always terminates immediately. This algorithm therefore has a runtime of O(1): there's a constant amount of work where, for any sufficiently large n, the algorithm does at most that much work. (This is highly contrived and I've never seen anything like this before, but you get the picture.)

and the given answer is O(n); I am very confused about how they got that.

The runtime bound here of O(n) to me seems incorrect. It's technically not wrong to say the runtime is O(n) because that does provide a correct bound on the runtime, but it's not tight. You should ask whoever gave you this bound to explain their reasoning; perhaps there's a typo in the code or in the explanation?
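
To see the containment concretely (writing T(n) for the number of recursive calls, a shorthand I'm introducing here rather than notation from the question):

T(n) \le \lceil \log_2(\mathrm{LIMIT}) \rceil + 1 = 11 \quad \text{for every } n, \qquad 11 \le 11 \cdot n \quad \text{for all } n \ge 1,

so T(n) = O(1), and anything that is O(1) is automatically O(n) as well; the O(n) answer is valid but loose.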

If we increase n, the function gets called fewer times, so is it O(1/n)? Is it even possible?

As n increases, the number of recursive calls is nonincreasing, but it doesn't necessarily decrease. For example, fun2(2000) and fun2(10000000) each result in a total of one call being made.

It's not possible for an algorithm to have a runtime of O(1 / n) because all algorithms do at least a constant amount of work, even if that work is "set up the stack frame." A runtime bound of O(1 / n) means that, for sufficiently large n, you would be doing less than one unit of work. So in that sense, there's a difference between "the runtime drops as n gets bigger, to the point where it flattens out at a constant" and "the runtime is O(1 / n)."
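
Spelled out with the same T(n) shorthand, under the assumption that every call costs at least one unit of work:

T(n) \ge 1 \ \text{for all } n, \qquad \text{while } T(n) \le \frac{c}{n} \ \text{for } n \ge n_0 \ \text{would force } T(n) < 1 \ \text{once } n > \max(c, n_0),

a contradiction, so no algorithm that does any work at all can have an O(1/n) runtime.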

Ineradicable answered 13/7, 2022 at 18:28 Comment(0)
