What is the definition of "natural recursion"?

The Third Commandment of The Little Schemer states:

When building a list, describe the first typical element, and then cons it onto the natural recursion.

What is the exact definition of "natural recursion"? The reason why I am asking is because I am taking a class on programming language principles by Daniel Friedman and the following code is not considered "naturally recursive":

(define (plus x y)
    (if (zero? y) x
        (plus (add1 x) (sub1 y))))

However, the following code is considered "naturally recursive":

(define (plus x y)
    (if (zero? y) x
        (add1 (plus x (sub1 y)))))

I prefer the "unnaturally recursive" code because it is tail recursive. However, such code is considered anathema. When I asked as to why we shouldn't write the function in tail recursive form then the associate instructor simply replied, "You don't mess with the natural recursion."

What's the advantage of writing the function in the "naturally recursive" form?

Elective answered 27/8, 2015 at 22:26 Comment(8)
Perhaps "natural recursion" refers to a recursive process; it's not actually about whether the procedure is recursive but about the nature of the algorithm. Your first example is a recursive procedure but not a recursive process, since its space usage is constant; it's actually an iterative process. An iterative procedure can implement a recursive process by using an explicit data structure instead of the system stack. Sometimes the recursive process written as a recursive procedure is easier to read.Bartholomew
"Such code is considered anathema". Did someone explicitely told you that or is this your impression?Defaulter
While I'm not sure whether it merits an answer or not, the authors may have been influenced by "primitive recursion" from computability theory. The key there is that a function that accepts a "composite" argument (e.g., h(S(y))) would be defined in terms of g(y, h(y)). When treating lists analogously, the counterpart would be something more akin to h(cons(x, y)) = g(x, h(y)). That's not quite precise, but it may have been some influence.Jews
Part of the benefit is that primitive recursive functions always halt. It can be easier to prove that what Friedman is calling a "naturally recursive" function terminates, because it can be easier to show that each recursion moves toward the base case.Jews
@JoshuaTaylor I agree. It's good to know that your program will always halt. However, just because a function is mu-recursive doesn't mean that it's not also primitive recursive. There are a handful of functions (like the Ackermann function) which are only mu-recursive. However, quite a few primitive recursive functions are more efficient when written using mu-recursion. For example, the list reverse function is more efficient when written using an accumulator rather that when written using append, which is the naturally recursive style of writing programs. I think it's a teaching tool. =)Elective
@AaditMShah I entirely agree about the efficiency. My point was more about how easy or difficult it is to prove that something is a correct implementation. When we define reverse(x::xs) == append(reverse(xs),[x]), we get a very easy inductive proof, as long as we know that append works. We can see immediately that x is the last element, etc. The accumulator version is more efficient, but it might be a little bit harder (at least at an introductory level) to prove that it's correct. I'm entirely in favor of writing more efficient code; I'm just speculating on why the authors called...Jews
...the type of recursion that they are prescribing "natural recursion" (it could be from academic influence), and why they might prescribe it (it might be easier for less experienced programmers to reason about it).Jews
@Defaulter I just talked to one of my associate instructors. Tail recursive functions are not considered anathema. It's just that "naturally recursive" functions are easier to teach to new students.Elective
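
To make the reverse example from the comments concrete, here is a sketch of the two styles (the names rev and rev-acc are mine, not from the course or the book):

(define (rev xs)            ; naturally recursive on the list structure:
    (if (null? xs)          ; mirrors the inductive proof, but quadratic
        '()                 ; because of the repeated appends
        (append (rev (cdr xs)) (list (car xs)))))

(define (rev-acc xs acc)    ; accumulator version: linear and tail
    (if (null? xs)          ; recursive, but the correctness proof needs
        acc                 ; a lemma relating xs and acc
        (rev-acc (cdr xs) (cons (car xs) acc))))

Both (rev '(1 2 3)) and (rev-acc '(1 2 3) '()) evaluate to (3 2 1).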

"Natural" (or just "Structural") recursion is the best way to start teaching students about recursion. This is because it has the wonderful guarantee that Joshua Taylor points out: it's guaranteed to terminate[*]. Students have a hard enough time wrapping their heads around this kind of program that making this a "rule" can save them a huge amount of head-against-wall-banging.

When you choose to leave the realm of structural recursion, you (the programmer) have taken on an additional responsibility, which is to ensure that your program halts on all inputs; it's one more thing to think about & prove.

In your case, it's a bit more subtle. You have two arguments, and you're making structurally recursive calls on the second one. In fact, with this observation (program is structurally recursive on argument 2), I would argue that your original program is pretty much just as legitimate as the non-tail-calling one, since it inherits the same proof-of-convergence. Ask Dan about this; I'd be interested to hear what he has to say.

[*] To be precise here you have to legislate out all kinds of other goofy stuff like calls to other functions that don't terminate, etc.
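
To make that observation concrete, here are the two definitions again with the shrinking argument marked (the comments and the -iter/-nat suffixes are mine):

(define (plus-iter x y)     ; the tail-recursive form
    (if (zero? y) x
        (plus-iter (add1 x) (sub1 y))))   ; recurs on (sub1 y)

(define (plus-nat x y)      ; the "naturally recursive" form
    (if (zero? y) x
        (add1 (plus-nat x (sub1 y)))))    ; also recurs on (sub1 y)

In both, the second argument strictly decreases toward the (zero? y) base case, so the same proof of convergence covers both.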

Highflown answered 28/8, 2015 at 17:30 Comment(3)
To be honest, I haven't met Dan yet. He's currently at ICFP and he'll be back after 2 weeks. Hence, I've only been interacting with associate instructors. They are really smart guys and it's fun talking to them. I've never been this excited about using Scheme before. However, my style of programming clashes with what they teach in class. I like writing tail-recursive functions and that's not allowed (for now). I like using named-let expressions and that's discouraged. Plus, they use vague terms like "natural recursion" and that's a little confusing. Overall, this is my favorite class ever. =')Elective
I asked my associate instructor about natural recursion and the reason why we use it is indeed because of the guarantee of halting. It's meant to teach students how to think recursively. This is what he said: i.sstatic.net/QhBcl.pngElective
@AaditMShah thank you very much for adding that response!Zany

The natural recursion has to do with the "natural", recursive definition of the type you are dealing with. Here, you are working with natural numbers; since "obviously" a natural number is either zero or the successor of another natural number, when you want to build a natural number, you naturally output 0 or (add1 z) for some other natural z which happens to be computed recursively.

The teacher probably wants you to make the link between recursive type definitions and recursive processing of values of those types. You would not run into this kind of objection with trees or lists; it comes up with numbers because you routinely use natural numbers in "unnatural" ways, and so you may balk at thinking of them in terms of Church numerals.
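
As a sketch of that correspondence (the function names are mine): a natural number is either 0 or (add1 n) for a natural n, a list is either '() or (cons a d) for a list d, and natural recursion simply follows those cases:

(define (copy-nat n)        ; naturals: 0 | (add1 n)
    (if (zero? n) 0
        (add1 (copy-nat (sub1 n)))))

(define (copy-list xs)      ; lists: '() | (cons a d)
    (if (null? xs) '()
        (cons (car xs) (copy-list (cdr xs)))))

Each clause of the data definition gets one clause in the function, and every recursive call receives a structurally smaller piece of the input.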

The fact that you already know how to write tail-recursive functions is irrelevant in that context: talking about tail-call optimization is apparently not your teacher's objective, at least for now.

The associate instructor was not very helpful at first ("messing with the natural recursion" sounds like "don't ask"), but the detailed explanation he/she gave in the screenshot you posted was more appropriate.

Defaulter answered 28/8, 2015 at 8:30 Comment(0)
(define (plus x y)
    (if (zero? y) x
        (add1 (plus x (sub1 y)))))

When y != 0, the procedure has to remember that once the result of (plus x (sub1 y)) is known, it still has to apply add1 to it. Hence, when y reaches zero, the recursion is at its deepest. Then the backtracking phase begins and the add1s are executed. This process can be observed using trace.

I did the trace for:

(require racket/trace)
(define (add1 x) (+ x 1))   ; same as the built-ins; defined here so the
(define (sub1 x) (- x 1))   ; example is self-contained
(define (plus x y)
    (if (zero? y) x
        (add1 (plus x (sub1 y)))))
(trace plus)

(plus 2 3)

Here's the trace:

>(plus 2 3)
> (plus 2 2)
> >(plus 2 1)
> > (plus 2 0)  // Deepest point of recursion
< < 2           // Backtracking begins, performing add1 on the results
< <3
< 4
<5
5               // Result

The difference is that the other version has no backtracking phase. It calls itself several times, but the process is iterative, because it carries the intermediate results along as arguments instead of remembering pending work on the stack. Hence the process does not consume extra space.
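
For contrast, here is roughly what tracing the tail-recursive version would show (a sketch, not verbatim output; Racket's trace keeps tail calls at the same depth, so there is no deepening and no backtracking phase):

>(plus 2 3)
>(plus 3 2)
>(plus 4 1)
>(plus 5 0)
<5
5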


Sometimes implementing a recursive procedure is easier or more elegant than writing its iterative equivalent, but for some purposes you cannot (or may not) implement it in a recursive way.

PS: I had a class which covered a bit about garbage-collection algorithms. Such algorithms may not be recursive, since they run exactly when there is no space left, hence there is no space for the recursion. I remember an algorithm called Deutsch-Schorr-Waite which was really hard to understand at first. The professor first implemented the recursive version just to convey the concept, and afterwards wrote the iterative version (hence manually having to remember where in memory he came from); believe me, the recursive one was way easier, but it could not be used in practice...

Rentschler answered 27/8, 2015 at 23:53 Comment(1)
OP already knows the first version is in tail-recursive form, and I don't think you are answering the actual question.Defaulter
