Run-time complexities for recursive algorithms

I've searched high and low and can't seem to find much material on run-time complexities, recursion, and Java.

I'm currently learning run-time complexities and Big-O notation in my Algorithms class, and I'm having trouble analyzing recursive algorithms.

private String toStringRec(DNode d)
{
   if (d == trailer)
      return "";
   else
      return d.getElement() + toStringRec(d.getNext());
}

This is a recursive method that simply walks through a doubly-linked list and concatenates its elements into a string.

The only thing I can come up with is that it has a run-time complexity of O(n), since the number of recursive method calls will depend on the number of nodes in the DList, but I still don't feel comfortable with this answer.

I'm not sure whether I should be accounting for the addition of d and d.getNext().

Or am I just completely off track, and the run-time complexity is constant, since all it's doing is retrieving elements from the DNodes in the DList?

Bedspread answered 2/3, 2012 at 21:34 Comment(1)
See: #1126888 – Earleanearleen

At first glance, this looks like a classic case of tail recursion modulo cons, a generalization of a tail call. Taken on its own, it would be equivalent to a loop with n iterations.

However, it is not that simple: the tricky thing here is the addition of d.getElement() to a growing string: this is in itself a linear operation, and it is repeated N times. Hence the complexity of your function is O(N^2).
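To see the difference concretely, here is a self-contained sketch (not the poster's actual classes — DNode and trailer are simplified stand-ins) contrasting the original recursion with an accumulator version that threads a StringBuilder through the calls, making the work per node amortized O(1):

```java
// Simplified stand-in for the poster's doubly-linked list node.
class DNode {
    private final String element;
    private DNode next;

    DNode(String element) { this.element = element; }

    String getElement() { return element; }
    DNode getNext() { return next; }
    void setNext(DNode next) { this.next = next; }
}

public class ToStringDemo {
    static final DNode trailer = new DNode(""); // sentinel node

    // Original shape: each '+' copies the whole partial result,
    // so the total work is 1 + 2 + ... + n, i.e. O(n^2).
    static String toStringRec(DNode d) {
        if (d == trailer) return "";
        return d.getElement() + toStringRec(d.getNext());
    }

    // Accumulator version: each append is amortized O(1), O(n) overall.
    static String toStringAcc(DNode d, StringBuilder acc) {
        if (d == trailer) return acc.toString();
        acc.append(d.getElement());
        return toStringAcc(d.getNext(), acc);
    }

    public static void main(String[] args) {
        DNode a = new DNode("A"), b = new DNode("B"), c = new DNode("C");
        a.setNext(b); b.setNext(c); c.setNext(trailer);
        System.out.println(toStringRec(a));                       // ABC
        System.out.println(toStringAcc(a, new StringBuilder())); // ABC
    }
}
```

Both variants produce the same string; only the cost of building it differs.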

Assurbanipal answered 2/3, 2012 at 21:48 Comment(3)
Hmm, I thought d.getElement() was to get the data stored at node d. He needs to make his question a bit clearer on this, I guess... – Impassive
@XiaoChuanYu No, d.getElement() is O(1). It is the string concatenation that follows that is linear. – Assurbanipal
Yes, thank you for not ignoring the cost of string concatenation. This is exactly right. The Gauss sum 1+2+...+n comes into play, and that's where the quadratic comes from. – Earleanearleen

If T(n) is the number of elementary operations executed by calling toStringRec on a list of n elements (when we enter the body of the function, each line inside is executed at most once, and every one of them is O(1) except the second return, which performs a string concatenation), then

  T(0) = 1  - as the only things that happen are the branch test and a
              return
  T(n) = n + T(n-1) for n > 0 - as the work done in the function besides
              calling toStringRec is some constant-time stuff plus a string
              concatenation that takes O(n) time; and we also run
              toStringRec(d.getNext()), which takes T(n-1) time

At this point we have described the complexity of the algorithm. Unrolling the recurrence gives T(n) = n + (n-1) + ... + 1 + T(0) = n(n+1)/2 + 1, so the closed form is T(n) = O(n^2).

Nonferrous answered 2/3, 2012 at 21:49 Comment(1)
No: the "stuff done in the function" is not O(1). It is work that takes time proportional to n, assuming the size of the string at each element is non-empty. Hence, the closed form for T(n) ends up looking like T(n) = 1C + 2C + 3C + ... + nC for some constant C. That's a Gauss sum. T(n) is quadratic, not linear. – Earleanearleen
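The Gauss sum in the comment above can be made tangible by counting the characters copied when a string is built through repeated concatenation of one-character elements. This is an illustrative sketch (the helper is hypothetical, not from the thread):

```java
public class ConcatCost {
    // Characters copied when building an n-character string by repeated
    // concatenation: step i copies the entire i-character partial result.
    static long copiesViaConcat(int n) {
        long total = 0;
        long len = 0;
        for (int i = 0; i < n; i++) {
            len += 1;      // append one 1-char element
            total += len;  // the '+' copies the whole new string
        }
        return total;      // 1 + 2 + ... + n = n(n+1)/2
    }

    public static void main(String[] args) {
        System.out.println(copiesViaConcat(10));   // 55
        System.out.println(copiesViaConcat(100));  // 5050
    }
}
```

Doubling n roughly quadruples the copy count, which is exactly the quadratic growth the recurrence predicts.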

This is a pretty simple example, but the trick is to define a recurrence relation: a function expressing the runtime for a given input size in terms of the runtime for smaller input sizes. For this example, assuming that the work done at each step takes constant time C and that the base case does no work, it would be:

T(0) = 0
T(n) = C + T(n-1)

You can then solve for running time using substitution to find a series:

T(n) = C + T(n-1) = 2C + T(n-2) = 3C + T(n-3) = ... = nC + T(n-n) = nC + 0 = nC

By the definition of big-O, T(n) = nC is O(n). This example isn't particularly interesting, but if you look at something like the runtime of mergesort or another divide-and-conquer algorithm, you can get a better feel for recurrence relations.

Foment answered 2/3, 2012 at 21:57 Comment(5)
Of course, in this example you can common-sense it too: you're printing out every node in a linked list, so the number of prints you perform grows at exactly the same rate as the size of the list. So it's linear time. – Foment
In this particular example, we cannot assume the work done at each step is constant time, given how string concatenation in Java works. – Earleanearleen
I think it's alright to make this assumption, because the point of this problem isn't to look up the complexity of Java library functions, but to understand how this kind of recursive algorithm can be analyzed in general. – Foment
I agree that formulating a recurrence is the key to solving this problem. But: we need to make sure we're solving the right recurrence. If we actually run this program and plot its behavior over ranges of n, we will observe O(n^2) time. That requires an explanation, or else our analysis is not useful. The recurrence must be amended to T(n) = C*n + T(n-1), because the primitive operation of string concatenation is linear in the size of the strings being concatenated. Unless the language provides string ropes, we have to deal with the non-constant cost of + on strings. – Earleanearleen
Yeah, fair enough. Thanks for introducing me to string ropes; I hadn't heard of those before. – Foment

The algorithm has run-time complexity of O(n) as you suggest. Your list has n items in it, and the algorithm will do a near-fixed amount of work for each item (that work being Element and Next access, plus a new toStringRec call). Retrieving an Element from a DNode takes constant time, and constant times are discarded in big-O notation.

The interesting thing about recursive methods (in most cases) is that they are O(n) in space complexity as well. A new stack frame (storing the parameters passed to the method) is created for each call to toStringRec, and it is called n times.
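If the O(n) stack usage is a concern, the same traversal can be written as a plain loop, which keeps the call stack at O(1). A sketch, again using a simplified DNode in place of the poster's class:

```java
// Simplified stand-in for the poster's doubly-linked list node.
class DNode {
    private final String element;
    private DNode next;

    DNode(String element) { this.element = element; }

    String getElement() { return element; }
    DNode getNext() { return next; }
    void setNext(DNode next) { this.next = next; }
}

public class IterDemo {
    static final DNode trailer = new DNode(""); // sentinel node

    // Iterative traversal: O(n) time via StringBuilder, O(1) call stack.
    static String toStringIter(DNode d) {
        StringBuilder sb = new StringBuilder();
        for (DNode cur = d; cur != trailer; cur = cur.getNext()) {
            sb.append(cur.getElement());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        DNode a = new DNode("A"), b = new DNode("B"), c = new DNode("C");
        a.setNext(b); b.setNext(c); c.setNext(trailer);
        System.out.println(toStringIter(a)); // ABC
    }
}
```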

Adrianaadriane answered 2/3, 2012 at 21:48 Comment(5)
This explanation unfortunately provides an incorrect conclusion. It doesn't take into account the cost of string concatenation. That is, the cost at each item is not constant. That's a major point of this problem. Please correct this. – Earleanearleen
I agree that the cost of each item is not constant, but I do not agree that it is O(n). The cost of string1 + string2 is O(m), where m is the length of the resulting String. Specifically, concatenating two Strings is at worst the creation of a new char[] of length m and the one-at-a-time copying of each character from the original Strings. When on the n'th iteration of the code given, toStringRec may be returning a very long String, but the cost of concatenation is still O(m). m is not directly tied to n in this example, as getElement may return empty or very long strings. – Adrianaadriane
Assume there is some length m that's an upper bound on the size of any particular d.getElement(). Then the size of the returned string we get back from a toStringRec(node) is bound by the length of the chain starting at node. Let T(n) be the cost of computation. Then: T(n) < C1 + C2 * m * (n-1) + C3 * T(n-1), for some constants C1, C2, C3. The middle term represents string concatenation. Let C4 be a constant bigger than C1 that's also a multiple of m * C2. – Earleanearleen
Sorry, long proof. Had to split it. :) Anyway, so now we say T(n) < C1 + C2 * m * (n-1) + C3 * T(n-1) < C4 + C2 * m * (n-1) + C3 * T(n-1). Since we chose C4 to be a multiple of C2 * m, we factor. We can now say T(n) < C2 * m * (n + C) + C3 * T(n-1) for some C. The term C2 * m is also a constant. So we really have something like T(n) < C1 * n + C2 + C3 * T(n-1). Start unrolling the recurrence. We eventually get T(n) < C1 * (1 + 2 + ... + n) + C2 for some constants C1 and C2. (1 + 2 + ... + n) is well known as the Gauss sum, with closed form n(n+1)/2. There! – Earleanearleen
Please ask questions if you're not convinced. Edit your answer accordingly once you are. :) Good luck! – Earleanearleen

For such recursive algorithms, it's usually possible to write a recurrence equation to calculate the order. It's customary to denote the number of instructions executed by T(n). In this example, we have:

T(n) = T(n - 1) + O(1)

(Supposing that the function getElement runs in constant time.) This equation's solution is trivially T(n) = O(n).

That was the general case. Sometimes, however, you can analyze the algorithm without writing such an equation. In this example, you can easily argue that each element is accessed at most once, and each time some constant-time work is done; so, it takes O(n) overall to do the job.

Canopy answered 4/3, 2012 at 18:34 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.