What is recursion and when should I use it?
Z

40

121

One of the topics that seems to come up regularly on mailing lists and online discussions is the merits (or lack thereof) of doing a Computer Science Degree. An argument that seems to come up time and again for the negative party is that they have been coding for some number of years and they have never used recursion.

So the question is:

  1. What is recursion?
  2. When would I use recursion?
  3. Why don't people use recursion?
Zeta answered 6/8, 2008 at 2:30 Comment(2)
And maybe this helps: #127256Dominus
This may help grasp the concept: navigate to the link provided on the second comment of the question on this page and do what the comments say to do: #3521Byelection
H
86

There are a number of good explanations of recursion in this thread; this answer is about why you shouldn't use it in most languages.* In the majority of major imperative language implementations (i.e. every major implementation of C, C++, Basic, Python, Ruby, Java, and C#) iteration is vastly preferable to recursion.

To see why, walk through the steps that the above languages use to call a function:

  1. space is carved out on the stack for the function's arguments and local variables
  2. the function's arguments are copied into this new space
  3. control jumps to the function
  4. the function's code runs
  5. the function's result is copied into a return value
  6. the stack is rewound to its previous position
  7. control jumps back to where the function was called

Doing all of these steps takes time, usually a little bit more than it takes to iterate through a loop. However, the real problem is in step #1. When many programs start, they allocate a single chunk of memory for their stack, and when they run out of that memory (often, but not always, due to recursion), the program crashes due to a stack overflow.

So in these languages recursion is slower and it makes you vulnerable to crashing. There are still some arguments for using it though. In general, code written recursively is shorter and a bit more elegant, once you know how to read it.

There is a technique that language implementers can use called tail call optimization which can eliminate some classes of stack overflow. Put succinctly: if a function's return expression is simply the result of a function call, then you don't need to add a new level onto the stack; you can reuse the current one for the function being called. Regrettably, few imperative language implementations have tail-call optimization built in.
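To make that concrete, here is a rough C++ sketch (hypothetical names). Some C and C++ compilers do perform this optimization at higher optimization levels, but the language doesn't guarantee it:

// NOT a tail call: after the recursive call returns we still have to
// multiply by n, so each level of recursion needs its own stack frame.
long factorial(long n) {
    if (n < 2) return 1;
    return n * factorial(n - 1);
}

// A tail call: the recursive call is the entire return expression, so an
// implementation with tail-call optimization can reuse the current frame
// instead of pushing a new one.
long factorial_acc(long n, long acc = 1) {
    if (n < 2) return acc;
    return factorial_acc(n - 1, acc * n);
}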

* I love recursion. My favorite static language doesn't use loops at all; recursion is the only way to do something repeatedly. I just don't think that recursion is generally a good idea in languages that aren't tuned for it.

** By the way Mario, the typical name for your ArrangeString function is "join", and I'd be surprised if your language of choice doesn't already have an implementation of it.

Harleigh answered 6/8, 2008 at 2:30 Comment(3)
Good to see an explanation of the inherent overhead of recursion. I touched on that in my answer as well. But to me, the big strength with recursion is what you can do with the call stack. You can write a succinct algorithm with recursion which branches repeatedly, enabling you to do things like crawling hierarchies (parent/child relationships) with ease. See my answer for an example.Hendren
Very disappointed to find the top answer to a question titled "What is recursion and when should I use it?" not actually answer either of those, never mind the extremely biased warning against recursion, despite its widespread use in most of the languages you mentioned (there isn't anything specifically wrong with what you said, but you seem to be exaggerating the problem and understating the usefulness).Diageotropism
You're probably right @Dukeling. For context, when I wrote this answer there were many great explanations of recursion already written and I wrote this intending to be an adjunct to that info, not the top answer. In practice when I need to walk a tree or handle any other nested data structure I usually turn to recursion and I've yet to hit a stack overflow of my own making in the wild.Harleigh
M
63

A simple English example of recursion:

A child couldn't sleep, so her mother told her a story about a little frog,
    who couldn't sleep, so the frog's mother told her a story about a little bear,
         who couldn't sleep, so the bear's mother told her a story about a little weasel... 
            who fell asleep.
         ...and the little bear fell asleep;
    ...and the little frog fell asleep;
...and the child fell asleep.
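The same story as a rough code sketch (hypothetical function and names): the last character falling asleep is the base case, and the lines after the recursive call run as the calls unwind.

#include <iostream>
#include <string>
#include <vector>

void tellStory(const std::vector<std::string>& characters, size_t i) {
    if (i + 1 == characters.size()) {                  // base case: the last one...
        std::cout << characters[i] << " fell asleep.\n";
        return;
    }
    std::cout << characters[i] << " couldn't sleep, so her mother told her a story about "
              << characters[i + 1] << ",\n";
    tellStory(characters, i + 1);                      // recursive step: the inner story
    std::cout << "...and " << characters[i] << " fell asleep;\n";   // unwinding
}

int main() {
    tellStory({"the child", "a little frog", "a little bear", "a little weasel"}, 0);
}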
Macao answered 6/8, 2008 at 2:30 Comment(2)
up+ for heart touching :)Volauvent
There is a similar story for little kids who won't fall asleep in Chinese folk tales. I just remembered that one, and it reminds me how real-world recursion works.Briant
S
49

In the most basic computer science sense, recursion is a function that calls itself. Say you have a linked list structure:

struct Node {
    Node* next;
};

And you want to find out how long the list is; you can do this with recursion:

int length(const Node* list) {
    if (!list) {        // base case: an empty list has length 0
        return 0;
    } else {
        return 1 + length(list->next);
    }
}

(This could of course be done with a for loop as well, but it is useful as an illustration of the concept.)
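For comparison, the loop version might look something like this (a sketch with a hypothetical name, using the same Node struct; like the recursive version above, it treats a null pointer as an empty list):

int length_iterative(const Node* list) {
    int count = 0;
    // Walk the list, counting nodes until we fall off the end.
    for (const Node* p = list; p != nullptr; p = p->next) {
        ++count;
    }
    return count;
}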

Sinking answered 6/8, 2008 at 2:30 Comment(4)
@Christopher: This is a nice, simple example of recursion. Specifically this is an example of tail recursion. However, as Andreas stated, it can easily be rewritten (more efficiently) with a for loop. As I explain in my answer, there are better uses for recursion.Hendren
do u really need an else statement here?Acosta
No, it's only there for clarity.Sinking
@SteveWortham: This isn't tail-recursive as written; length(list->next) still needs to return to length(list) so the latter can add 1 to the result. Were it written to pass the length-so-far along, only then could we forget the caller existed. Like int length(const Node* list, int count=0) { return (!list) ? count : length(list->next, count + 1); }.Kipp
H
46

Whenever a function calls itself, creating a loop, then that's recursion. As with anything there are good uses and bad uses for recursion.

The simplest example is tail recursion, where the very last line of the function is a call to itself:

int FloorByTen(int num)
{
    if (num % 10 == 0)
        return num;
    else
        return FloorByTen(num-1);
}

However, this is a lame, almost pointless example because it can easily be replaced by more efficient iteration. After all, recursion suffers from function call overhead, which in the example above could be substantial compared to the operation inside the function itself.

So the whole reason to do recursion rather than iteration should be to take advantage of the call stack to do some clever stuff. For example, if you call a function multiple times with different parameters inside the same loop then that's a way to accomplish branching. A classic example is the Sierpinski triangle.

[Image: the Sierpinski triangle]

You can draw one of those very simply with recursion, where the call stack branches in 3 directions:

private void BuildVertices(double x, double y, double len)
{
    if (len > 0.002)
    {
        mesh.Positions.Add(new Point3D(x, y + len, -len));
        mesh.Positions.Add(new Point3D(x - len, y - len, -len));
        mesh.Positions.Add(new Point3D(x + len, y - len, -len));
        len *= 0.5;
        BuildVertices(x, y + len, len);
        BuildVertices(x - len, y - len, len);
        BuildVertices(x + len, y - len, len);
    }
}

If you attempt to do the same thing with iteration I think you'll find it takes a lot more code to accomplish.

Other common use cases might include traversing hierarchies, e.g. website crawlers, directory comparisons, etc.

Conclusion

In practical terms, recursion makes the most sense whenever you need iterative branching.

Hendren answered 6/8, 2008 at 2:30 Comment(0)
A
27

Recursion is a method of solving problems based on the divide and conquer mentality. The basic idea is that you take the original problem and divide it into smaller (more easily solved) instances of itself, solve those smaller instances (usually by using the same algorithm again) and then reassemble them into the final solution.

The canonical example is a routine to generate the Factorial of n. The Factorial of n is calculated by multiplying all of the numbers between 1 and n. An iterative solution in C# looks like this:

public int Fact(int n)
{
  int fact = 1;

  for( int i = 2; i <= n; i++)
  {
    fact = fact * i;
  }

  return fact;
}

There's nothing surprising about the iterative solution and it should make sense to anyone familiar with C#.

The recursive solution is found by recognising that the nth Factorial is n * Fact(n-1). Or to put it another way, if you know what a particular Factorial number is you can calculate the next one. Here is the recursive solution in C#:

public int FactRec(int n)
{
  if( n < 2 )
  {
    return 1;
  }

  return n * FactRec( n - 1 );
}

The first part of this function is known as a Base Case (or sometimes Guard Clause) and is what prevents the algorithm from running forever. It just returns the value 1 whenever the function is called with a value of 1 or less. The second part is more interesting and is known as the Recursive Step. Here we call the same method with a slightly modified parameter (we decrement it by 1) and then multiply the result with our copy of n.

When first encountered this can be kind of confusing so it's instructive to examine how it works when run. Imagine that we call FactRec(5). We enter the routine, are not picked up by the base case and so we end up like this:

// In FactRec(5)
return 5 * FactRec( 5 - 1 );

// which is
return 5 * FactRec(4);

If we re-enter the method with the parameter 4 we are again not stopped by the guard clause and so we end up at:

// In FactRec(4)
return 4 * FactRec(3);

If we substitute this return value into the return value above we get

// In FactRec(5)
return 5 * (4 * FactRec(3));

This should give you a clue as to how the final solution is arrived at so we'll fast track and show each step on the way down:

return 5 * (4 * FactRec(3));
return 5 * (4 * (3 * FactRec(2)));
return 5 * (4 * (3 * (2 * FactRec(1))));
return 5 * (4 * (3 * (2 * (1))));

That final substitution happens when the base case is triggered. At this point we have a simple algebraic formula to solve which equates directly to the definition of Factorials in the first place.

It's instructive to note that every call into the method results in either a base case being triggered or a call to the same method where the parameters are closer to a base case (often called a recursive call). If this is not the case then the method will run forever.

Aberdare answered 6/8, 2008 at 2:30 Comment(2)
Good explanation, but I think it's important to note that this is simply tail recursion and offers no advantage over the iterative solution. It's roughly the same amount of code, and will run slower due to the function call overhead.Hendren
@SteveWortham: This isn't tail recursion. In the recursive step, the result of FactRec() has to be multiplied by n before returning.Petrochemical
S
12

Recursion is solving a problem with a function that calls itself. A good example of this is a factorial function. Factorial is a math problem where factorial of 5, for example, is 5 * 4 * 3 * 2 * 1. This function solves this in C# for positive integers (not tested - there may be a bug).

public int Factorial(int n)
{
    if (n <= 1)
        return 1;

    return n * Factorial(n - 1);
}
Sharma answered 6/8, 2008 at 2:30 Comment(0)
D
9

Consider an old, well known problem:

In mathematics, the greatest common divisor (gcd) … of two or more non-zero integers, is the largest positive integer that divides the numbers without a remainder.

The definition of gcd is surprisingly simple:

gcd(m, 0) = m
gcd(m, n) = gcd(n, m mod n)   for n > 0

where mod is the modulo operator (that is, the remainder after integer division).

In English, this definition says the greatest common divisor of any number and zero is that number, and the greatest common divisor of two numbers m and n is the greatest common divisor of n and the remainder after dividing m by n.

If you'd like to know why this works, see the Wikipedia article on the Euclidean algorithm.

Let's compute gcd(10, 8) as an example. Each step is equal to the one just before it:

  1. gcd(10, 8)
  2. gcd(8, 10 mod 8)
  3. gcd(8, 2)
  4. gcd(2, 8 mod 2)
  5. gcd(2, 0)
  6. 2

In the first step, 8 does not equal zero, so the second part of the definition applies. 10 mod 8 = 2 because 8 goes into 10 once with a remainder of 2. At step 3, the second part applies again, but this time 8 mod 2 = 0 because 2 divides 8 with no remainder. At step 5, the second argument is 0, so the answer is 2.

Did you notice that gcd appears on both the left and right sides of the equals sign? A mathematician would say this definition is recursive because the expression you're defining recurs inside its definition.

Recursive definitions tend to be elegant. For example, a recursive definition for the sum of a list is

sum l =
    if empty(l)
        return 0
    else
        return head(l) + sum(tail(l))

where head is the first element in a list and tail is the rest of the list. Note that sum recurs inside its definition at the end.

Maybe you'd prefer the maximum value in a list instead:

max l =
    if empty(l)
        error
    elsif length(l) = 1
        return head(l)
    else
        tailmax = max(tail(l))
        if head(l) > tailmax
            return head(l)
        else
            return tailmax

You might define multiplication of non-negative integers recursively to turn it into a series of additions:

a * b =
    if b = 0
        return 0
    else
        return a + (a * (b - 1))

If that bit about transforming multiplication into a series of additions doesn't make sense, try expanding a few simple examples to see how it works.
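For instance, expanding 3 * 4 with this definition:

3 * 4 = 3 + (3 * 3)
      = 3 + (3 + (3 * 2))
      = 3 + (3 + (3 + (3 * 1)))
      = 3 + (3 + (3 + (3 + (3 * 0))))
      = 3 + (3 + (3 + (3 + 0)))
      = 12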

Merge sort has a lovely recursive definition:

sort(l) =
    if empty(l) or length(l) = 1
        return l
    else
        (left,right) = split l
        return merge(sort(left), sort(right))

Recursive definitions are all around if you know what to look for. Notice how all of these definitions have very simple base cases, e.g., gcd(m, 0) = m. The recursive cases whittle away at the problem to get down to the easy answers.
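If you'd like to see one of these definitions as running code, the gcd definition above transcribes almost directly into C++ (a sketch, assuming non-negative inputs as in the mathematical definition):

// gcd(m, 0) = m
// gcd(m, n) = gcd(n, m mod n)   for n > 0
unsigned gcd(unsigned m, unsigned n) {
    if (n == 0) return m;      // base case
    return gcd(n, m % n);      // recursive case
}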

With this understanding, you can now appreciate the other algorithms in Wikipedia's article on recursion!

Diarrhea answered 6/8, 2008 at 2:30 Comment(0)
F
9

Recursion refers to a method which solves a problem by solving a smaller version of the problem and then using that result plus some other computation to formulate the answer to the original problem. Often times, in the process of solving the smaller version, the method will solve a yet smaller version of the problem, and so on, until it reaches a "base case" which is trivial to solve.

For instance, to calculate a factorial for the number X, one can represent it as X times the factorial of X-1. Thus, the method "recurses" to find the factorial of X-1, and then multiplies whatever it got by X to give a final answer. Of course, to find the factorial of X-1, it'll first calculate the factorial of X-2, and so on. The base case would be when X is 0 or 1, in which case it knows to return 1 since 0! = 1! = 1.

Forsythe answered 6/8, 2008 at 2:30 Comment(5)
I think what you are referring to is not recursion but the Divide and Conquer algorithm design principle. Look for example at the Ackermann function (en.wikipedia.org/wiki/Ackermann_function).Jair
No, I'm not referring to D&C. D&C implies that 2 or more subproblems exist, recursion by itself does not (for instance, the factorial example given here is not D&C - it's completely linear). D&C is essentially a subset of recursion.Forsythe
Quoted from the exact article you linked: "A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type,"Forsythe
I don't think that is a great explanation, since recursion strictly speaking does not have to solve problem at all. You could just call yourself(And overflow).Medin
I am using your explanation in an article I am writing for PHP Master though I can't attribute it to you. Hope you don't mind.Hindman
T
8
  1. A function that calls itself
  2. When a function can be (easily) decomposed into a simple operation plus the same function on some smaller portion of the problem. I should say, rather, that this makes it a good candidate for recursion.
  3. They do!

The canonical example is the factorial which looks like:

int fact(int a)
{
  if (a <= 1)   // base case: also covers a == 0, since 0! == 1
    return 1;

  return a * fact(a - 1);
}

In general, recursion isn't necessarily fast (function call overhead tends to be high because recursive functions tend to be small, see above) and can suffer from some problems (stack overflow anyone?). Some say they tend to be hard to get 'right' in non-trivial cases but I don't really buy into that. In some situations, recursion makes the most sense and is the most elegant and clear way to write a particular function. It should be noted that some languages favor recursive solutions and optimize them much more (LISP comes to mind).

Thomasina answered 6/8, 2008 at 2:30 Comment(0)
T
6

A recursive function is one which calls itself. The most common reason I've found to use it is traversing a tree structure. For example, if I have a TreeView with checkboxes (think installation of a new program, "choose features to install" page), I might want a "check all" button which would be something like this (pseudocode):

function cmdCheckAllClick {
    checkRecursively(TreeView1.RootNode);
}

function checkRecursively(Node n) {
    n.Checked = True;
    foreach ( n.Children as child ) {
        checkRecursively(child);
    }
}

So you can see that checkRecursively first checks the node it is passed, then calls itself for each of that node's children.

You do need to be a bit careful with recursion. If you get into an infinite recursive loop, you will get a Stack Overflow exception :)

I can't think of a reason why people shouldn't use it, when appropriate. It is useful in some circumstances, and not in others.

I think that because it's an interesting technique, some coders perhaps end up using it more often than they should, without real justification. This has given recursion a bad name in some circles.

Tristantristas answered 6/8, 2008 at 2:30 Comment(0)
S
5

Recursion is an expression directly or indirectly referencing itself.

Consider recursive acronyms as a simple example:

  • GNU stands for GNU's Not Unix
  • PHP stands for PHP: Hypertext Preprocessor
  • YAML stands for YAML Ain't Markup Language
  • WINE stands for Wine Is Not an Emulator
  • VISA stands for Visa International Service Association

More examples on Wikipedia

Sisson answered 6/8, 2008 at 2:30 Comment(0)
S
4

1.) A method is recursive if it can call itself; either directly:

void f() {
   ... f() ... 
}

or indirectly:

void f() {
    ... g() ...
}

void g() {
   ... f() ...
}

2.) When to use recursion

Q: Does using recursion usually make your code faster? 
A: No.
Q: Does using recursion usually use less memory? 
A: No.
Q: Then why use recursion? 
A: It sometimes makes your code much simpler!

3.) People use recursion only when it is very complex to write iterative code. For example, tree traversal techniques like preorder and postorder can be made both iterative and recursive. But usually we use the recursive version because of its simplicity.

Sanmiguel answered 6/8, 2008 at 2:30 Comment(1)
What about divide and conquer reducing complexity, performance-wise?Maiolica
S
4

A recursive statement is one in which you define the process of what to do next as a combination of the inputs and what you have already done.

For example, take factorial:

factorial(6) = 6*5*4*3*2*1

But it's easy to see factorial(6) also is:

6 * factorial(5) = 6*(5*4*3*2*1).

So generally:

factorial(n) = n*factorial(n-1)

Of course, the tricky thing about recursion is that if you want to define things in terms of what you have already done, there needs to be some place to start.

In this example, we just make a special case by defining factorial(1) = 1.

Now we see it from the bottom up:

factorial(6) = 6*factorial(5)
             = 6*5*factorial(4)
             = 6*5*4*factorial(3)
             = 6*5*4*3*factorial(2)
             = 6*5*4*3*2*factorial(1)
             = 6*5*4*3*2*1

Since we defined factorial(1) = 1, we reach the "bottom".

Generally speaking, recursive procedures have two parts:

1) The recursive part, which defines some procedure in terms of new inputs combined with what you've "already done" via the same procedure. (i.e. factorial(n) = n*factorial(n-1))

2) A base part, which makes sure that the process doesn't repeat forever by giving it some place to start (i.e. factorial(1) = 1)

It can be a bit confusing to get your head around at first, but just look at a bunch of examples and it should all come together. If you want a much deeper understanding of the concept, study mathematical induction. Also, be aware that some languages optimize for recursive calls while others do not. It's pretty easy to make insanely slow recursive functions if you're not careful, but there are also techniques to make them performant in most cases.
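One such technique is memoization: cache the answers to subproblems you have already solved so each one is only computed once. A rough C++ sketch (hypothetical names), using the classic doubly-recursive Fibonacci as the "insanely slow" example:

#include <unordered_map>

// Naive version: recomputes the same subproblems over and over,
// so the running time grows exponentially with n.
long fib(int n) {
    if (n < 2) return n;
    return fib(n - 1) + fib(n - 2);
}

// Memoized version: the same recursion, but each result is cached,
// so every subproblem is solved only once.
long fibMemo(int n, std::unordered_map<int, long>& cache) {
    if (n < 2) return n;
    auto found = cache.find(n);
    if (found != cache.end()) return found->second;
    long result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
    cache[n] = result;
    return result;
}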

Hope this helps...

Sinatra answered 6/8, 2008 at 2:30 Comment(0)
B
4

I like this definition:
In recursion, a routine solves a small part of a problem itself, divides the problem into smaller pieces, and then calls itself to solve each of the smaller pieces.

I also like Steve McConnell's discussion of recursion in Code Complete, where he criticises the examples used in Computer Science books on recursion.

Don't use recursion for factorials or Fibonacci numbers

One problem with computer-science textbooks is that they present silly examples of recursion. The typical examples are computing a factorial or computing a Fibonacci sequence. Recursion is a powerful tool, and it's really dumb to use it in either of those cases. If a programmer who worked for me used recursion to compute a factorial, I'd hire someone else.

I thought this was a very interesting point to raise and may be a reason why recursion is often misunderstood.

EDIT: This was not a dig at Dav's answer - I had not seen that reply when I posted this

Broider answered 6/8, 2008 at 2:30 Comment(2)
Most of the reason why factorials or fibonacci sequences are used as examples is because they're common items that are defined in a recursive manner, and thus they lend themselves naturally to examples of recursion to calculate them - even if that's not actually the best method from a CS standpoint.Forsythe
I agree - I just found when I was reading the book that it was an interesting point to raise in the middle of a section on recursionEconomically
E
4

Here's a simple example: how many elements are in a set? (There are better ways to count things, but this is a nice simple recursive example.)

First, we need two rules:

  1. if the set is empty, the count of items in the set is zero (duh!).
  2. if the set is not empty, the count is one plus the number of items in the set after one item is removed.

Suppose you have a set like this: [x x x]. let's count how many items there are.

  1. the set is [x x x] which is not empty, so we apply rule 2. the number of items is one plus the number of items in [x x] (i.e. we removed an item).
  2. the set is [x x], so we apply rule 2 again: one + number of items in [x].
  3. the set is [x], which still matches rule 2: one + number of items in [].
  4. Now the set is [], which matches rule 1: the count is zero!
  5. Now that we know the answer in step 4 (0), we can solve step 3 (1 + 0)
  6. Likewise, now that we know the answer in step 3 (1), we can solve step 2 (1 + 1)
  7. And finally now that we know the answer in step 2 (2), we can solve step 1 (1 + 2) and get the count of items in [x x x], which is 3. Hooray!

We can represent this as:

count of [x x x] = 1 + count of [x x]
                 = 1 + (1 + count of [x])
                 = 1 + (1 + (1 + count of []))
                 = 1 + (1 + (1 + 0)))
                 = 1 + (1 + (1))
                 = 1 + (2)
                 = 3

When applying a recursive solution, you usually have at least 2 rules:

  • the basis, the simple case which states what happens when you have "used up" all of your data. This is usually some variation of "if you are out of data to process, your answer is X"
  • the recursive rule, which states what happens if you still have data. This is usually some kind of rule that says "do something to make your data set smaller, and reapply your rules to the smaller data set."

If we translate the above to pseudocode, we get:

numberOfItems(set)
    if set is empty
        return 0
    else
        remove 1 item from set
        return 1 + numberOfItems(set)

There's a lot more useful examples (traversing a tree, for example) which I'm sure other people will cover.

Eaten answered 6/8, 2008 at 2:30 Comment(0)
A
4

Recursion works best with what I like to call "fractal problems", where you're dealing with a big thing that's made of smaller versions of that big thing, each of which is an even smaller version of the big thing, and so on. If you ever have to traverse or search through something like a tree or nested identical structures, you've got a problem that might be a good candidate for recursion.

People avoid recursion for a number of reasons:

  1. Most people (myself included) cut their programming teeth on procedural or object-oriented programming as opposed to functional programming. To such people, the iterative approach (typically using loops) feels more natural.

  2. Those of us who cut our programming teeth on procedural or object-oriented programming have often been told to avoid recursion because it's error prone.

  3. We're often told that recursion is slow. Calling and returning from a routine repeatedly involves a lot of stack pushing and popping, which is slower than looping. I think some languages handle this better than others, and those languages are most likely not those where the dominant paradigm is procedural or object-oriented.

  4. For at least a couple of programming languages I've used, I remember hearing recommendations not to use recursion if it gets beyond a certain depth because its stack isn't that deep.

Abbreviate answered 6/8, 2008 at 2:30 Comment(0)
S
3

An example: a recursive definition of a staircase is: a staircase consists of:

  • a single step and a staircase (recursion)
  • or only a single step (termination)

Squat answered 6/8, 2008 at 2:30 Comment(0)
A
3

Well, that's a pretty decent definition you have. And wikipedia has a good definition too. So I'll add another (probably worse) definition for you.

When people refer to "recursion", they're usually talking about a function they've written which calls itself repeatedly until it is done with its work. Recursion can be helpful when traversing hierarchies in data structures.

Arrhenius answered 6/8, 2008 at 2:30 Comment(0)
P
2

This is an old question, but I want to add an answer from a logistical point of view (i.e. not from an algorithm-correctness or performance point of view).

I use Java for work, and Java doesn't support nested functions. As such, if I want to do recursion, I might have to define an external function (which exists only because my code bumps against Java's bureaucratic rules), or I might have to refactor the code altogether (which I really hate to do).

Thus, I often avoid recursion and use an explicit stack instead, because recursion itself is essentially a stack operation.

Panjabi answered 6/8, 2008 at 2:30 Comment(0)
N
2

A recursive function is a function that contains a call to itself. A recursive struct is a struct that contains an instance of itself. You can combine the two as a recursive class. The key part of a recursive item is that it contains an instance/call of itself.

Consider two mirrors facing each other. We've seen the neat infinity effect they make. Each reflection is an instance of a mirror, which is contained within another instance of a mirror, etc. The mirror containing a reflection of itself is recursion.

A binary search tree is a good programming example of recursion. The structure is recursive with each Node containing 2 instances of a Node. Functions to work on a binary search tree are also recursive.
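A small sketch of both ideas (hypothetical names): the struct refers to itself, and a lookup function recurses down whichever side of the tree could contain the value:

struct BstNode {
    int value;
    BstNode* left;     // the struct contains (pointers to) instances of itself
    BstNode* right;
};

// Recursive lookup: an empty subtree is the base case; otherwise recurse
// into the half of the tree where the value would have to be.
bool contains(const BstNode* node, int value) {
    if (node == nullptr) return false;
    if (value == node->value) return true;
    if (value < node->value) return contains(node->left, value);
    return contains(node->right, value);
}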

Needs answered 6/8, 2008 at 2:30 Comment(0)
A
2

In plain English: Assume you can do 3 things:

  1. Take one apple
  2. Write down tally marks
  3. Count tally marks

You have a lot of apples in front of you on a table and you want to know how many apples there are.

start
  Is the table empty?
  yes: Count the tally marks and cheer like it's your birthday!
  no:  Take 1 apple and put it aside
       Write down a tally mark
       goto start

The process of repeating the same thing till you are done is called recursion.

I hope this is the "plain english" answer you are looking for!

Agitation answered 6/8, 2008 at 2:30 Comment(2)
Wait, I have a lot of tally marks in front of me on a table, and now I want to know how many tally marks there are. Can I somehow use the apples for this?Shevat
If you take an apple from the ground (where you've put them during the process) and place it on the table each time you scratch one tally mark off the list until there are no tally marks left, I'm pretty sure you end up with an amount of apples on the table equal to the number of tally marks you had. Now just count those apples for instant success! (note: this process is no longer recursion, but an infinite loop)Agitation
H
2

To recurse on a solved problem: do nothing, you're done.
To recurse on an open problem: do the next step, then recurse on the rest.

Harleigh answered 6/8, 2008 at 2:30 Comment(0)
H
1

The simplest definition of recursion is "self-reference". A function that refers to itself, i.e. calls itself, is recursive. The most important thing to keep in mind is that a recursive function must have a "base case", i.e. a condition that, if true, causes it not to call itself and thus terminates the recursion. Otherwise you will have infinite recursion:

recursion http://cart.kolix.de/wp-content/uploads/2009/12/infinite-recursion.jpg

Hildegardehildesheim answered 6/8, 2008 at 2:30 Comment(1)
Let's not forget the concept of mutual recursion, where one function calls another which, in turn, calls the first. (But that, of course, is going beyond the scope of the original question.)Impeachment
L
1

Recursion is when you have an operation that uses itself. It probably will have a stopping point, otherwise it would go on forever.

Let's say you want to look up a word in the dictionary. You have an operation called "look-up" at your disposal.

Your friend says "I could really spoon up some pudding right now!" You don't know what he means, so you look up "spoon" in the dictionary, and it reads something like this:

Spoon: noun - a utensil with a round scoop at the end.
Spoon: verb - to use a spoon on something
Spoon: verb - to cuddle closely from behind

Now, being that you're really not good with English, this points you in the right direction, but you need more info. So you select "utensil" and "cuddle" to look up for some more information.

Cuddle: verb - to snuggle
Utensil: noun - a tool, often an eating utensil

Hey! You know what snuggling is, and it has nothing to do with pudding. You also know that pudding is something you eat, so it makes sense now. Your friend must want to eat pudding with a spoon.

Okay, okay, this was a very lame example, but it illustrates (perhaps poorly) the two main parts of recursion. 1) It uses itself. In this example, you haven't really looked up a word meaningfully until you understand it, and that might mean looking up more words. This brings us to point two, 2) It stops somewhere. It has to have some kind of base-case. Otherwise, you'd just end up looking up every word in the dictionary, which probably isn't too useful. Our base-case was that you got enough information to make a connection between what you previously did and did not understand.

The traditional example that's given is factorial, where 5 factorial is 1*2*3*4*5 (which is 120). The base case would be 0 (or 1, depending). So, for any whole number n, you do the following

is n equal to 0? return 1
otherwise, return n * (factorial of n-1)

let's do this with the example of 4 (which we know ahead of time is 1*2*3*4 = 24).

factorial of 4 ... is it 0? no, so it must be 4 * factorial of 3
but what's factorial of 3? it's 3 * factorial of 2
factorial of 2 is 2 * factorial of 1
factorial of 1 is 1 * factorial of 0
and we KNOW factorial of 0! :-D it's 1, that's the definition
factorial of 1 is 1 * factorial of 0, which was 1... so 1*1 = 1
factorial of 2 is 2 * factorial of 1, which was 1... so 2*1 = 2
factorial of 3 is 3 * factorial of 2, which was 2... so 3*2 = 6
factorial of 4 (finally!!) is 4 * factorial of 3, which was 6... 4*6 is 24

Factorial is a simple case of "base case, and uses itself".

Now, notice we were still working on factorial of 4 the entire way down... If we wanted factorial of 100, we'd have to go all the way down to 0... which might have a lot of overhead to it. In the same manner, if we find an obscure word to look up in the dictionary, it might take looking up other words and scanning for context clues until we find a connection we're familiar with. Recursive methods can take a long time to work their way through. However, when they're used correctly, and understood, they can make complicated work surprisingly simple.

Lazor answered 6/8, 2008 at 2:30 Comment(0)
G
1

A great many problems can be thought of in two types of pieces:

  1. Base cases, which are elementary things that you can solve by just looking at them, and
  2. Recursive cases, which build a bigger problem out of smaller pieces (elementary or otherwise).

So what's a recursive function? Well, that's where you have a function that is defined in terms of itself, directly or indirectly. OK, that sounds ridiculous until you realize that it is sensible for the problems of the kind described above: you solve the base cases directly and deal with the recursive cases by using recursive calls to solve the smaller pieces of the problem embedded within.

The truly classic example of where you need recursion (or something that smells very much like it) is when you're dealing with a tree. The leaves of the tree are the base case, and the branches are the recursive case. (In pseudo-C.)

struct Tree {
    int leaf;
    Tree *leftBranch;
    Tree *rightBranch;
};

The simplest way of printing this out in order is to use recursion:

function printTreeInOrder(Tree *tree) {
    if (tree->leftBranch) {
        printTreeInOrder(tree->leftBranch);
    }
    print(tree->leaf);
    if (tree->rightBranch) {
        printTreeInOrder(tree->rightBranch);
    }
}

It's dead easy to see that that's going to work, since it's crystal clear. (The non-recursive equivalent is quite a lot more complex, requiring a stack structure internally to manage the list of things to process.) Well, assuming that nobody's done a circular connection of course.
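For illustration, here is roughly what that non-recursive equivalent might look like in C++ (a sketch assuming the same Tree struct and print function as above); the explicit stack does the bookkeeping the call stack gave us for free:

#include <stack>

void printTreeInOrderIteratively(Tree *tree) {
    std::stack<Tree*> pending;            // nodes we still owe a visit to
    Tree *current = tree;
    while (current != nullptr || !pending.empty()) {
        while (current != nullptr) {      // slide down the left branches,
            pending.push(current);        // remembering each node on the way
            current = current->leftBranch;
        }
        current = pending.top();
        pending.pop();
        print(current->leaf);             // visit the node itself
        current = current->rightBranch;   // then deal with its right branch
    }
}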

Mathematically, the trick to showing that recursion is tamed is to focus on finding a metric for the size of the arguments. For our tree example, the easiest metric is the maximum depth of the tree below the current node. At leaves, it's zero. At a branch with only leaves below it, it's one, etc. Then you can simply show that there's strictly ordered sequence on the size of the arguments that the function is invoked on in order to process the tree; the arguments to the recursive calls are always "lesser" in the sense of the metric than the argument to the overall call. With a strictly decreasing cardinal metric, you're sorted.

It's also possible to have infinite recursion. That's messy and in many languages won't work because the stack blows up. (Where it does work, the language engine must be determining that the function somehow doesn't return and is able therefore to optimize away the keeping of the stack. Tricky stuff in general; tail-recursion is just the most trivial way of doing this.)

Gonzalez answered 6/8, 2008 at 2:30 Comment(0)
V
1

Recursion is a technique of defining a function, a set or an algorithm in terms of itself.

For example

n! = n*(n-1)*(n-2)*(n-3)*...*3*2*1

Now it can be defined recursively as:-

n! = n(n-1)!   for n>=1

In programming terms, when a function or method calls itself repeatedly until some specific condition is satisfied, this process is called recursion. But there must be a terminating condition, and the function or method must not enter an infinite loop.

Versicular answered 6/8, 2008 at 2:30 Comment(0)
P
1

Recursion in computing is a technique used to compute a result or side effect following the normal return from a single function (method, procedure or block) invocation.

The recursive function, by definition, must have the ability to invoke itself either directly or indirectly (through other functions) depending on an exit condition or conditions not being met. If an exit condition is met, the particular invocation returns to its caller. This continues until the initial invocation is returned from, at which time the desired result or side effect will be available.

As an example, here's a function to perform the Quicksort algorithm in Scala (copied from the Wikipedia entry for Scala)

def qsort: List[Int] => List[Int] = {
  case Nil => Nil
  case pivot :: tail =>
    val (smaller, rest) = tail.partition(_ < pivot)
    qsort(smaller) ::: pivot :: qsort(rest)
}

In this case the exit condition is an empty list.

Pliers answered 6/8, 2008 at 2:30 Comment(0)
L
1

A function calls itself or uses its own definition.

Luminiferous answered 6/8, 2008 at 2:30 Comment(0)
N
1

Any algorithm exhibits structural recursion on a datatype if it basically consists of a switch-statement with a case for each case of the datatype.

for example, when you are working on a type

  tree = null 
       | leaf(value:integer) 
       | node(left: tree, right:tree)

a structural recursive algorithm would have the form

 function computeSomething(x : tree) =
   if x is null: base case
   if x is leaf: do something with x.value
   if x is node: do something with x.left,
                 do something with x.right,
                 combine the results

this is really the most obvious way to write any algorithm that works on a data structure.

now, when you look at the integers (well, the natural numbers) as defined using the Peano axioms

 integer = 0 | succ(integer)

you see that a structural recursive algorithm on integers looks like this

 function computeSomething(x : integer) =
   if x is 0 : base case
   if x is succ(prev) : do something with prev

the too-well-known factorial function is about the most trivial example of this form.

Negrete answered 6/8, 2008 at 2:30 Comment(0)
C
1

Hey, sorry if my opinion agrees with someone's, I'm just trying to explain recursion in plain English.

Suppose you have three managers - Jack, John and Morgan. Jack manages 2 programmers, John - 3, and Morgan - 5. You are going to give every manager $300 and want to know what it would cost. The answer is obvious - but what if 2 of Morgan's employees are also managers?

HERE comes the recursion. You start from the top of the hierarchy with a total cost of $0. You start with Jack, then check if he has any managers among his employees. If any of them are managers, check whether they have managers among their employees, and so on. Add $300 to the total cost every time you find a manager. When you are finished with Jack, go to John and his employees, and then to Morgan.

You never know in advance how many cycles you will go through before getting an answer, though you do know how many managers you have and how much budget you can spend.

Recursion is a tree, with branches and leaves, called parents and children respectively. When you use a recursive algorithm, you are more or less consciously building a tree from the data.
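A sketch of that walk as code (hypothetical Employee struct; anyone with direct reports counts as a manager and costs $300):

#include <vector>

struct Employee {
    std::vector<Employee> reports;   // direct reports; empty for plain programmers
};

// Walk the whole hierarchy, adding $300 for every manager we meet.
int bonusCost(const Employee& person) {
    int cost = person.reports.empty() ? 0 : 300;
    for (const Employee& report : person.reports) {
        cost += bonusCost(report);   // recurse into each subordinate's subtree
    }
    return cost;
}

// For the example above, you would call this for Jack, John and Morgan
// and add the three results together.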

Clipclop answered 6/8, 2008 at 2:30 Comment(0)
M
1

In plain English, recursion means to repeat something again and again.

In programming, one example is a function calling itself.

Look at the following example of calculating the factorial of a number:

public int fact(int n)
{
    if (n == 0) return 1;
    else return n * fact(n - 1);
}
Mendelian answered 6/8, 2008 at 2:30 Comment(1)
In plain English, to repeat something again and again is called iteration.Owl
M
1

Recursion is the process where a method calls itself to perform a certain task. It reduces redundancy of code. Most recursive functions or methods must have a condition to break the recursive call, i.e. stop it from calling itself if a condition is met - this prevents the creation of an infinite loop. Not all functions are suited to being used recursively.

Matlock answered 6/8, 2008 at 2:30 Comment(0)
R
1

It's a way to do things over and over indefinitely such that every option is used.

For example, if you wanted to get all the links on an HTML page, you would want recursion, because when you get all the links on page 1, you will want to get all the links on each of the pages those links point to, then for each link to a new page you will want those links, and so on... In other words, it is a function that calls itself from inside itself.

When you do this, you need a way to know when to stop, or else you will be in an endless loop, so you add an integer parameter to the function to track the number of cycles.

In C# you will have something like this:

private void findlinks(string URL, int reccursiveCycleNumb)
{
    if (reccursiveCycleNumb == 0)
    {
        return;
    }

    //recursive action here
    foreach (LinkItem i in LinkFinder.Find(URL))
    {
        //see what links are being caught...
        lblResults.Text += i.Href + "<BR>";

        findlinks(i.Href, reccursiveCycleNumb - 1);
    }
}
Rauwolfia answered 6/8, 2008 at 2:30 Comment(0)
P
1

"If I have a hammer, make everything look like a nail."

Recursion is a problem-solving strategy for huge problems, where at every step you just "turn 2 small things into one bigger thing," each time with the same hammer.

Example

Suppose your desk is covered with a disorganized mess of 1024 papers. How do you make one neat, clean stack of papers from the mess, using recursion?

  1. Divide: Spread all the sheets out, so you have just one sheet in each "stack".
  2. Conquer:
    1. Go around, putting each sheet on top of one other sheet. You now have stacks of 2.
    2. Go around, putting each 2-stack on top of another 2-stack. You now have stacks of 4.
    3. Go around, putting each 4-stack on top of another 4-stack. You now have stacks of 8.
    4. ... on and on ...
    5. You now have one huge stack of 1024 sheets!

Notice that this is pretty intuitive, aside from counting everything (which isn't strictly necessary). You might not go all the way down to 1-sheet stacks, in reality, but you could and it would still work. The important part is the hammer: With your arms, you can always put one stack on top of the other to make a bigger stack, and it doesn't matter (within reason) how big either stack is.

Polinski answered 6/8, 2008 at 2:30 Comment(2)
You’re describing divide and conquer. While this is an example of recursion, it’s by no means the only one.Hax
That's fine. I'm not trying to capture [the world of recursion][1] in a sentence, here. I want an intuitive explanation. [1]: facebook.com/pages/Recursion-Fairy/269711978049Polinski
V
1

Recursion as it applies to programming is basically calling a function from inside its own definition (inside itself), with different parameters so as to accomplish a task.

Vanlandingham answered 6/8, 2008 at 2:30 Comment(0)
O
1

You want to use it anytime you have a tree structure. It is very useful in reading XML.

Obvolute answered 6/8, 2008 at 2:30 Comment(0)
F
0

Actually the better recursive solution for factorial should be:

int factorial_accumulate(int n, int accum) {
    return (n < 2 ? accum : factorial_accumulate(n - 1, n * accum));
}

int factorial(int n) {
    return factorial_accumulate(n, 1);
}

Because this version is Tail Recursive

Firooc answered 6/8, 2008 at 2:30 Comment(0)
T
0

Mario, I don't understand why you used recursion for that example. Why not simply loop through each entry? Something like this:

String ArrangeString(TStringList* items, String separator)
{
    String result = items->Strings[0];

    for (int position = 1; position < items->Count; position++) {
        result += separator + items->Strings[position];
    }

    return result;
}

The above method would be faster, and is simpler. There's no need to use recursion in place of a simple loop. I think these sorts of examples are why recursion gets a bad rap. Even the canonical factorial function example is better implemented with a loop.

Tristantristas answered 6/8, 2008 at 2:30 Comment(0)
M
0

I use recursion. What does that have to do with having a CS degree... (which I don't, by the way)

Common uses I have found:

  1. sitemaps - recurse through filesystem starting at document root
  2. spiders - crawling through a website to find email address, links, etc.
  3. ?
Muscle answered 6/8, 2008 at 2:30 Comment(0)
S
0

I have created a recursive function to concatenate a list of strings with a separator between them. I use it mostly to create SQL expressions, by passing a list of fields as the 'items' and a 'comma+space' as the separator. Here's the function (It uses some Borland Builder native data types, but can be adapted to fit any other environment):

String ArrangeString(TStringList* items, int position, String separator)
{
  String result;

  result = items->Strings[position];

  if (position < items->Count - 1)
    result += separator + ArrangeString(items, position + 1, separator);

  return result;
}

I call it this way:

String columnsList;
columnsList = ArrangeString(columns, 0, ", ");

Imagine you have an array named 'fields' with this data inside it: 'albumName', 'releaseDate', 'labelId'. Then you call the function:

ArrangeString(fields, 0, ", ");

As the function starts to work, the variable 'result' receives the value of the position 0 of the array, which is 'albumName'.

Then it checks if the position it's dealing with is the last one. As it isn't, it concatenates the result with the separator and the result of a function which, oh God, is this same function. But this time, check it out, it calls itself, adding 1 to the position.

ArrangeString(fields, 1, ", ");

It keeps repeating, creating a LIFO pile, until it reaches a point where the position being dealt with IS the last one, so the function returns only the item on that position on the list, not concatenating anymore. Then the pile is concatenated backwards.

Got it? If you don't, I have another way to explain it. :o)

Spadix answered 6/8, 2008 at 2:30 Comment(0)
