Recursive Algorithm Time Complexity: Coin Change
I'm going through some algorithms, and came across the coin change problem.

When thinking about the problem I came up with this naive recursive solution:

// Counts the ways to make change for n using coins[start..] (repeats allowed).
int coinChange(const vector<int>& coins, int start, int n) {
  if (n == 0) return 1; // made exact change: one way
  if (n < 0) return 0;  // overshot: no way

  int total = 0;

  // Passing i (not i + 1) down allows reusing a coin; starting the loop at
  // `start` avoids counting the same combination in a different order.
  for (int i = start; i < (int)coins.size(); ++i) {
    if (coins[i] <= n) total += coinChange(coins, i, n - coins[i]);
  }

  return total;
}

I then realized the "accepted" solution was as follows:

int count( int S[], int m, int n )
{
    // If n is 0 then there is 1 solution (do not include any coin)
    if (n == 0)
        return 1;

    // If n is less than 0 then no solution exists
    if (n < 0)
        return 0;

    // If there are no coins left and n is greater than 0, then no solution exists
    if (m <= 0 && n >= 1)
        return 0;

    // count is sum of solutions (i) including S[m-1] (ii) excluding S[m-1]
    return count( S, m - 1, n ) + count( S, m, n-S[m-1] );
}

At first I thought the two were essentially the same. It was clear to me that my recursion tree was much wider, but it seemed that this was only because my algorithm was doing more work at each level, so it evened out. Both algorithms appear to consider the number of ways to make change with the current coin (provided it is <= the current sum), and the number of ways to make change without the current coin (i.e., with all the coins in the array minus the current one). Therefore the parameter start in my algorithm seemed to be doing essentially the same thing as m in the second algorithm.

The more I look at it, though, it seems that despite the reasoning above, my algorithm is O(n^n) while the second one is O(2^n). I've been looking at this for too long, but if someone could explain what extra work my algorithm is doing compared to the second one, that would be great.

EDIT

I understand the dynamic programming solution to this problem, this question is purely a complexity based question.

Proa answered 18/7, 2016 at 2:4 Comment(6)
It's maybe beside the point, but although the algorithms are O(2^n), that's not a tight bound. That is, no matter the value of S[], the algorithm is not Theta(2^n). For example, when m=2, the worst case is S=[1, 2] and the complexity is Theta(Fib(n)) = Theta(phi^n) where phi is the golden ratio.Nonchalant
@PaulHankin Good point. So to say these algorithms are Theta(2^n), it is my understanding we'd need to find an n_0 such that f(n) = O(2^n) and f(n) = Omega(2^n) for all n >= n_0 -- can this be done in this particular situation?Proa
Dom, I'm not sure I understand what you're asking. If you're asking if it's possible to find a constant c such that c * phi^n >= 2^n for all large n, then it's relatively easy to show that it's not.Nonchalant
@PaulHankin Sorry, let me try to clear that up. I'm wondering whether it's possible either to put a tight bound on this algorithm such that f(n) = Theta(x), or to show that for all inputs larger than some threshold, f(n) = Theta(2^n). I bring this up because your example uses a small input to show that f(n) is not Theta(2^n) but instead Theta(phi^n). Can we determine after what size inputs (array size and/or change value) the algorithm is indeed Theta(2^n)?Proa
I believe there's no input for which it's Theta(2^n), although it gets closer and closer as the number of coins gets larger. I believe (with some evidence, but no rigorous proof) that the complexity is Theta(r^n) where r+r^(-|S|) = 2 and 1 <= r < 2.Nonchalant
Yeah the closer and closer bit is what I was thinking too.. thanksProa
The two pieces of code are the same except that the second uses recursion instead of a for loop to iterate over the coins. That makes their runtime complexity the same (although the second piece of code probably has worse memory complexity because of the extra recursive calls, but that may get lost in the wash).

For example, here's a partial evaluation of the second count in the case where S = [1, 5, 10] and m = 3. On each line, I expand the left-most call to count.

  count(S, 3, 100)
= count(S, 2, 100) + count(S, 3, 90)
= count(S, 1, 100) + count(S, 2, 95) + count(S, 3, 90)
= count(S, 0, 100) + count(S, 1, 99) + count(S, 2, 95) + count(S, 3, 90)
= 0 + count(S, 1, 99) + count(S, 2, 95) + count(S, 3, 90)

You can see that this is the same calculation as your for-loop that sums up total.

Both algorithms are terrible because they run in exponential time. Here is an answer (of mine) that uses a neat dynamic programming method that runs in O(nm) time, uses O(n) memory, and is extremely concise -- comparable in size to your naive recursive solution: https://mcmap.net/q/1625032/-memoizing-coin-change . It's in Python, but it's trivially convertible to C++.

Nonchalant answered 18/7, 2016 at 3:8 Comment(3)
Thanks very much. Yeah I understand the DP approaches to this problem I was just purely interested in finding out if the recursive versions have the same complexity because I got to the point where I convinced myself that it could go either way haha.Proa
Nice py solution btwProa
Very curious how we can derive the recurrence relations for the two above? @PaulHankinYung
You didn't read the whole article (?).

The idea behind dynamic programming is that you store values you have already computed so that you don't need to calculate them again. At the end of the article you can see the actual correct solution.

As for why your solution looks like n^n and their original one looks like 2^n: both solutions are actually O(2^(n+#coins)). Theirs just calls the function with m-1 instead of having a loop that goes through every coin. While your solution tries every coin at the start and then fewer and fewer, theirs takes one coin of type m, then another, then another, until at some point it switches to type m-1 and does the same with it, and so on. Basically both solutions are the same.

Another way to prove that they have the same complexity is like this:

Both solutions are correct, so they will reach all possible solutions, and both stop growing a particular branch of the recursion the moment it reaches a negative n. Therefore, they have the same complexity.

And if you are not convinced, just add a counter to each solution and increment it every time you enter the function. Do this for both and you will see that the two counts grow in the same way.

Carmencarmena answered 18/7, 2016 at 3:8 Comment(3)
I did read the whole article. My question revolves around the recursive portion of the article, not the DP portion. I understand the DP portion of the article as well; I was just a little confused as to whether my recursive version had the same complexity as theirs. This was more a general complexity question than anything. Thanks!Proa
I see. I just wanted to be sure that you understood the DP solution. Anyway, did I manage to answer your question or do I need to explain some part more?Carmencarmena
No I think you cleared up my confusion. I got to the point where I could sorta see it being an n^n algo even though it appeared to be doing the same amount of work as a solution that was clearly 2^n. Thanks!Proa
Benchmark: on my computer the benchmarks are as follows:

coinChange(v, 0, 500);// v=[1, 5, 10, 15, 20, 25]

took 1.84649s to complete. But

count(s, 6, 500); //s = [1, 5, 10, 15, 20, 25]

took 0.853075s to execute.
EDIT
I interpret the result as the two algorithms having the same time complexity.

Boehmenism answered 18/7, 2016 at 4:28 Comment(3)
Don't get me started on how theoretical algorithm complexity is thrown out of the window when applied practically... The information provided here is likely to become antiquated and as such, the usefulness is likely to diminish. Leave it if you like, but I wouldn't be happy with this as an answer to the question... Think about what the question is asking.Saliva
If you are going to benchmark, you should assess how the time grows as the inputs grow. That will better indicate the runtime complexity. Right now these two numbers don't tell us anything.Bonni
@KartikChugh Yeah.Boehmenism
