Why, after pressing semicolon, is the program back in deep recursion?

I'm trying to understand the semicolon functionality.

I have this code:

del(X,[X|Rest],Rest).
del(X,[Y|Tail],[Y|Rest]) :-
    del(X,Tail,Rest).

permutation([],[]).
permutation(L,[X|P]) :- del(X,L,L1), permutation(L1,P).

It's a simple predicate that enumerates all permutations of a given list.

I used the built-in graphical debugger in SWI-Prolog because I wanted to understand how it works, and I do understand the first case, which returns the list given as the argument. Here is the diagram I made for better understanding: debug diagram

But I don't get it for the next solution. When I press the semicolon, it doesn't start at the place where it ended; instead it starts in some deep recursion where L=[] (like in step 9). I don't get it: didn't the recursion end earlier? It had to come back out of the recursion to return the answer, yet after the semicolon it's deep in the recursion again.

Could someone clarify that to me? Thanks in advance.

Hexateuch answered 14/4, 2020 at 18:50 Comment(0)

One analogy that I find useful in demystifying Prolog is that Backtracking is like Nested Loops, and when the innermost loop's variables' values are all found, the looping is suspended, the vars' values are reported, and then the looping is resumed.

As an example, let's write down a simple generate-and-test program to find all pairs of natural numbers above 0 that sum up to a prime number. Let's assume is_prime/1 is already given to us.

We write this in Prolog as

above(0, N), between(1, N, M), Sum is M+N, is_prime(Sum).

We write this in an imperative pseudocode as

for N from 1 step 1:
  for M from 1 step 1 until N:
     Sum := M+N
     if is_prime(Sum):
        report_to_user_and_ask(Sum)

Now when report_to_user_and_ask is called, it prints Sum out and asks the user whether to abort or to continue. The loops are not exited; on the contrary, they are just suspended. Thus all the loop variables' values that got us this far -- and there may be more tests up the loops chain that sometimes succeed and sometimes fail -- are preserved, i.e. the computation state is preserved, and the computation is ready to be resumed from that point, if the user presses ;.
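The pseudocode above maps naturally onto generators in a language like Python, where yield suspends the loops exactly as described (a minimal sketch of the analogy, not of SWI-Prolog's actual machinery; is_prime here is a naive stand-in for the assumed is_prime/1):

```python
from itertools import count

def is_prime(n):
    # naive trial division; a stand-in for the assumed is_prime/1
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def prime_sums():
    for n in count(1):             # above(0, N)
        for m in range(1, n + 1):  # between(1, N, M)
            s = m + n              # Sum is M+N
            if is_prime(s):        # is_prime(Sum)
                yield (m, n, s)    # suspend the loops; ';' resumes them here

gen = prime_sums()
print(next(gen))  # (1, 1, 2) -- first solution, loops now suspended
print(next(gen))  # (1, 2, 3) -- pressing ';' resumes the suspended loops
```

Each next() call resumes the loop nest exactly where it was suspended, with all loop variables intact, which is the point of the analogy.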

This is what's known as "inversion of control" as far as I understand, with report_to_user_and_ask() being used as a callback.

I first saw this approach in Peter Norvig's AI book's implementation of Prolog in Common Lisp. He used mapping (Common Lisp's mapcan which is concatMap in Haskell or flatMap in many other languages) as a looping construct though, and it took me years to see that nested loops is what it is really all about.

Goals conjunction is expressed as the nesting of the loops (akin to multiplication); goals disjunction is expressed as the alternatives to loop through (akin to summation):

    [ [a b]  X  [c d e] ]      -- 2D
    [  a     :   c d e
       b     :   c d e  ]      -- 2*3 = 6

    [ [a b]  +  [c d e] ]      -- 1D
    [  a b       c d e  ]      -- 2+3 = 5

A further twist is that the nested loops' structure isn't fixed from the outset. It is fluid: the nested loops of a given loop can be created depending on the current state of that loop, i.e. depending on the current alternative being explored there; the loops are written as we go. In (most of the) languages where such dynamic creation of nested loops is impossible, it can be encoded with nested recursion (function invocation) inside the loops. (Here's one example, with some pseudocode.)
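The question's permutation/2 is exactly such a fluid structure: the nesting depth equals the list's length, so the loops must be created as we go, via recursion. A hypothetical Python sketch mirroring del/3 and permutation/2 (sel and permutation are illustrative names, not part of any library):

```python
def sel(xs):
    # mirrors del/3: takes the head first, then elements further down,
    # yielding each chosen element together with the rest of the list
    for i, x in enumerate(xs):
        yield x, xs[:i] + xs[i + 1:]

def permutation(xs):
    # mirrors permutation/2: each recursive call adds one nesting level,
    # so a list of length n produces n dynamically created nested loops
    if not xs:
        yield []
    else:
        for x, rest in sel(xs):           # choicepoint: which X comes first
            for p in permutation(rest):   # the dynamically created inner loops
                yield [x] + p

print(list(permutation([1, 2, 3])))
```

Note the first solution is the input list itself, just as in the question, and asking for the next one resumes the innermost suspended frame, not the top.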

If we keep all such loops (created for each of the alternatives) in memory even after they are finished with, what we get is the AND-OR tree (mentioned in the other answer) thus being created while the search space is being explored and the solutions are found.

(non-coincidentally this fluidity is also the essence of "monad"; nondeterminism is modeled by the list monad; and the essential operation of the list monad is the flatMap operation which we saw above. With fluid structure of loops it is "Monad"; with fixed structure it is "Applicative Functor"; simple loops with no structure (no nesting at all): simply "Functor" (the concepts used in Haskell and the like). Also helps to demystify those.)
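For instance, the 2*3 conjunction and the 2+3 disjunction diagrammed above can be written with an explicit flatMap (a minimal Python sketch; flat_map is a hypothetical helper name for the operation Common Lisp calls mapcan):

```python
def flat_map(f, xs):
    # concatMap / mapcan: map f over xs and concatenate the results --
    # the essential operation ("bind") of the list monad
    return [y for x in xs for y in f(x)]

# conjunction = nesting of loops: 2 * 3 = 6 results, the 2D diagram
pairs = flat_map(lambda a: flat_map(lambda b: [a + b], ["c", "d", "e"]),
                 ["a", "b"])
print(pairs)  # ['ac', 'ad', 'ae', 'bc', 'bd', 'be']

# disjunction = concatenation of alternatives: 2 + 3 = 5 results, the 1D diagram
alts = ["a", "b"] + ["c", "d", "e"]
print(alts)   # ['a', 'b', 'c', 'd', 'e']
```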

So, the proper slogan could be Backtracking is like Nested Loops, either fixed, known from the outset, or dynamically-created as we go. It's a bit longer though. :)


Here's also a Prolog example, which "as if creates the code to be run first (N nested loops for a given value of N), and then runs it." (There's even a whole dedicated tag for it on SO, it turns out.)

And here's one in Scheme ("creates nested loops with the solution being accessible in the innermost loop's body"), and a C++ example ("create n nested loops at run-time, in effect enumerating the binary encoding of 2^n, and print the sums out from the innermost loop").

Jewett answered 14/4, 2020 at 19:50 Comment(7)
Backtracking is like Nested Loops this analogy only holds if the number of nestings is fixed. So for more complex programs, this is really misleading.Mesh
then how come actual Prolog(-ish) implementation was written that way? the key is that the nesting structure is not fixed; the loops are written as we go (nested recursion can be used for the actual encoding in (most) languages which can't write their own loops dynamically). that's what I meant by "fluidity", will edit some more to clarify. ---- fluid structure: monads; fixed structure: applicative Functors. no nesting: (simply) Functors (also helps to demystify those). all three concepts have their uses.Jewett
nested recursion in a loop (an example, where the number of nestings is dynamic, not fixed). --- I've edited the answer, thanks.Jewett
If you really insist on explaining the full mechanism in Prolog you would have to resort to iterators with yield plus recursion.Mesh
I am giving an analogy here.Jewett
...which is highly misleading, as it suggests that essentially the same mechanisms are present. But we have here two control flows intertwinedMesh
I don't know what's misleading there, it's a valid implementation technique. People use it. Norvig used it. (I've added more links to examples in various languages). also, googling gave this.Jewett

There is a big difference between recursion in functional/imperative programming languages and Prolog (and it really became clear to me only in the last 2 weeks or so):

In functional/imperative programming, you recurse down a call chain, then come back up, unwinding the stack, then output the result. It's over.

In Prolog, you recurse down an AND-OR tree (really, alternating AND and OR nodes), selecting a predicate to call on an OR node (the "choicepoint"), from left to right, and calling every predicate in turn on an AND node, also from left to right. An acceptable tree has exactly one predicate returning TRUE under each OR node, and all predicates returning TRUE under each AND node. Once an acceptable tree has been constructed, by the very search procedure, we are (i.e. the "search cursor" is) on a rightmost bottommost node.

Success in constructing an acceptable tree also means a solution to the query entered at the Prolog Toplevel (the REPL) has been found: The variable values are output, but the tree is kept (unless there are no choicepoints).

And this is also important: all variables are global in the sense that if a variable X has been passed all the way down the call chain from predicate to predicate to the rightmost bottommost node, then constrained at the last possible moment by unifying it with 2 for example, X = 2, then the Prolog Toplevel is aware of that without further ado: nothing needs to be passed up the call chain.

If you now press ;, search doesn't restart at the top of the tree, but at the bottom, i.e. at the current cursor position: the nearest parent OR node is asked for more solutions. This may entail much search until a new acceptable tree has been constructed and we are at a new rightmost bottommost node. The new variable values are output and you may again enter ;.

This process cycles until no acceptable tree can be constructed any longer, upon which false is output.
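The "resume at the cursor, not at the top" behavior can be felt with two stacked generators (a Python sketch of the idea only, not of actual Prolog internals):

```python
def inner():
    yield 1
    # after a ';' the computation resumes HERE, deep inside the call
    # structure, not back at the top of outer()
    yield 2

def outer():
    for x in inner():         # choicepoint: enumerate inner's answers
        yield ("outer", x)

g = outer()
print(next(g))  # ('outer', 1) -- first "acceptable tree" found
print(next(g))  # resumes inside inner(), deep down: ('outer', 2)
```

This is why the debugger in the question appears "back in deep recursion" after ;: the suspended state at the deepest choicepoint is where the search continues.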

Note that having this AND-OR as an inspectable and modifiable data structure at runtime allows some magical tricks to be deployed.

There is bound to be a lot of power in debugging tools which record this tree to help the user who gets the dreaded sphinxian false from a Prolog program that is supposed to work. There are now Time Traveling Debuggers for functional and imperative languages, after all...

Danzig answered 14/4, 2020 at 19:18 Comment(0)