How does ArrowLoop work? Also, mfix?
I'm fairly comfortable now with the rest of the arrow machinery, but I don't get how loop works. It seems magical to me, and that's bad for my understanding. I also have trouble understanding mfix. When I look at a piece of code that uses rec in a proc or do block, I get confused. With regular monadic or arrow code, I can step through the computation and keep an operational picture of what's going on in my head. When I get to rec, I don't know what picture to keep! I get stuck, and I can't reason about such code.

The example I'm trying to grok is from Ross Paterson's paper on arrows, the one about circuits.

counter :: ArrowCircuit a => a Bool Int
counter = proc reset -> do
        rec     output <- returnA -< if reset then 0 else next
                next <- delay 0 -< output+1
        returnA -< output

I assume that if I understand this example, I'll be able to understand loop in general, and it'll go a great way towards understanding mfix. They feel essentially the same to me, but perhaps there is a subtlety I'm missing? Anyway, what I would really prize is an operational picture of such pieces of code, so I can step through them in my head like 'regular' code.
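(For comparison: `rec` in monadic do-notation desugars to `mfix`. Here is a tiny knot tied through the Maybe monad; the name `knottedNats` is just for illustration.)

```haskell
import Control.Monad.Fix (mfix)

-- mfix f ties a knot through the monad: the argument passed to f is
-- f's own eventual result. For Maybe, laziness lets this build an
-- infinite list without ever looping.
knottedNats :: Maybe [Int]
knottedNats = mfix (\xs -> Just (0 : map (+1) xs))
-- fmap (take 4) knottedNats == Just [0,1,2,3]
```

The `(0 :)` cons plays the same role as `delay 0` in the arrow version: it produces something before the recursive reference is ever inspected.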

Edit: Thanks to Pigworker's answer, I have started thinking about rec and such as demands being fulfilled. Taking the counter example, the first line of the rec block demands a value called output. I imagine this operationally as creating a box, labelling it output, and asking the rec block to fill that box. In order to fill that box, we feed a value into returnA, but that value itself demands another value, called next. In order to use this value, it must be demanded of another line in the rec block, but for now it doesn't matter where in the block it is demanded.

So we go to the next line, and we find the box labelled next, and we demand that another computation fill it. Now, this computation demands our first box! So we give it the box, but it has no value inside it, so if this computation demands the contents of output, we hit an infinite loop. Fortunately, delay takes the box, but produces a value without looking inside the box. This fills next, which then allows us to fill output. Now that output is filled, when the next input of this circuit is processed, the previous output box will have its value, ready to be demanded in order to produce the next next, and thus the next output.
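The box story above can be played out with plain lazy lists, outside the arrow machinery entirely. In this sketch (the names are made up), each list is a signal, with element i being the value on step i:

```haskell
-- output <- returnA -< if reset then 0 else next
-- next   <- delay 0 -< output + 1
counterList :: [Bool] -> [Int]
counterList resets = outputs
  where
    outputs = zipWith (\reset next -> if reset then 0 else next) resets nexts
    -- the (0 :) cons is the delay: it hands out a list cell (and the 0)
    -- before anything in outputs has been computed
    nexts   = 0 : map (+1) outputs
```

For example, `counterList [False, False, False, True, False]` evaluates to `[0,1,2,0,1]`: the knot never loops because each `nexts` value demands only the previous `outputs` value.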

How does that sound?

Soukup answered 8/8, 2011 at 1:11 Comment(10)
This question might help you understand how rec works.Hyperbaton
There's no general explanation of how rec or mfix works since it depends on which arrow/monad it is. For arrows I think a good picture in your mind is to imagine it as a feedback in a circuit. To find out how that actually works you have to look at individual instances.Faculty
Do you understand how plain fix works? i.e., fix f = let x = f x in x. It's the same idea.Staffman
@Faculty : You're right; still, I think I've formulated a generic picture that serves as a good heuristic for whatever Arrow or Monad I'm working in, filling in the details based on how loop/mfix is defined there. Can you give some feedback?Soukup
@C. A. McCann : Yeah, I understand fix, but when I look at a fix statement, I think of just expanding the statement until I hit the base case. It feels weird thinking about monadic values or arrow values that way.Soukup
What's the base case in fix $ (0:) . map (+1), then?Staffman
@C. A. McCann : So my intuition of fix breaks down for infinite codata. D'oh. >_< I guess I should be thinking of fix in terms of applying the function to 'undefined' repeatedly. In that case (0:(map (+1) undefined)) and then (0:1:(map (+2) undefined)), and so forth.Soukup
You don't really need to think of it as applying the function to anything--think of it as repeatedly composing it with itself an arbitrary number of times. The definition of fix is basically that it does this until every input to the chain of composed functions produces the same output, which is also the least defined value x such that, for the original function f, f x == x.Staffman
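(A quick check of that unfolding, using fix from Data.Function:)

```haskell
import Data.Function (fix)

-- fix f = let x = f x in x; for codata it unfolds only as far as demanded
nats :: [Int]
nats = fix ((0 :) . map (+1))
-- take 5 nats == [0,1,2,3,4]
```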
@Soukup Here is the page that totally demystified fix for me.Chi
I am reading the paper and compared with Hawk; this version does not work when you try runSM :: SM a b -> [a] -> [b] with a loop in it. Did you manage to run it?Genesia

In this code, the key piece is the delay 0 arrow in the rec block. To see how it works, it helps to think of values as varying over time and time as chopped into slices. I think of the slices as ‘days’. The rec block explains how each day's computation works. It's organised by value, rather than by causal order, but we can still track causality if we're careful. Crucially, we must make sure (without any help from the types) that each day's work relies on the past but not the future. The one-day delay 0 buys us time in that respect: it shifts its input signal one day later, taking care of the first day by giving the value 0. The delay's input signal is ‘tomorrow's next’.

rec     output <- returnA -< if reset then 0 else next
        next <- delay 0 -< output+1

So, looking at the arrows and their outputs, we're delivering today's output but tomorrow's next. Looking at the inputs, we're relying on today's reset and next values. It's clear that we can deliver those outputs from those inputs without time travel. The output is today's next number unless we reset to 0; tomorrow, the next number is the successor of today's output. Today's next value thus comes from yesterday, unless there was no yesterday, in which case it's 0.

At a lower level, this whole setup works because of Haskell's laziness. Haskell computes by a demand-driven strategy, so if there is a sequential order of tasks which respects causality, Haskell will find it. Here, the delay establishes such an order.
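To make that concrete, here is a stream-function arrow in the style of the paper's SF type (a sketch; the instance details are illustrative rather than the paper's exact code). The lazy pattern in `stream` lets loop hand back list cells before their contents exist, so that delay can break the cycle of demands:

```haskell
import Prelude hiding (id, (.))
import Control.Category
import Control.Arrow

-- A stream transformer: element i of the output list is the value of
-- the output signal on 'day' i.
newtype SF a b = SF { runSF :: [a] -> [b] }

instance Category SF where
  id = SF (\xs -> xs)
  SF g . SF f = SF (\xs -> g (f xs))

instance Arrow SF where
  arr f = SF (map f)
  first (SF f) = SF (\ps -> let (xs, ys) = unzip ps in zip (f xs) ys)

instance ArrowLoop SF where
  -- Feed the second components back around. 'stream' rebuilds the list
  -- with lazy patterns, so each cons cell is available before its
  -- contents are; without it, zip would demand cs before f had
  -- produced anything, and the knot would never untie.
  loop (SF f) = SF (\as ->
      let (bs, cs) = unzip (f (zip as (stream cs))) in bs)
    where stream ~(x : xs) = x : stream xs

delay :: a -> SF a a
delay x = SF (x :)

-- The question's counter with its rec block desugared into a loop:
counter :: SF Bool Int
counter = loop (arr step >>> second (delay 0))
  where
    step (reset, next) =
      let output = if reset then 0 else next
      in (output, output + 1)
```

Running `runSF counter [False, False, False, True, False]` gives `[0, 1, 2, 0, 1]`: each day's output depends only on earlier days, which is exactly the causal order the delay establishes.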

Be aware, though, that Haskell's type system gives you very little help in ensuring that such an order exists. You're free to use loops for utter nonsense! So your question is far from trivial. Each time you read or write such a program, you do need to think ‘how can this possibly work?’. You need to check that delay (or similar) is used appropriately to ensure that information is demanded only when it can be computed. Note that constructors, especially (:), can act like delays too: it's not unusual to compute the tail of a list, apparently given the whole list (but being careful only to inspect the head). Unlike imperative programming, the lazy functional style allows you to organize your code around concepts other than the sequence of events, but it's a freedom that demands a more subtle awareness of time.
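A familiar instance of a constructor playing the delay role (a standard example, not from the paper):

```haskell
-- The two cons cells delay the recursive references by one and two
-- 'days': each element is defined purely in terms of earlier elements,
-- so demand never runs ahead of supply.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
-- take 7 fibs == [0, 1, 1, 2, 3, 5, 8]
```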

Myocarditis answered 8/8, 2011 at 9:49 Comment(6)
Thanks for the in-depth explanation! Perhaps you could comment on my 'boxes being demanded' idea that I've added to the question?Soukup
Laziness here is perhaps like the principle "better to ask forgiveness than permission", where the possibility of non-termination is sort of a footnote in small print saying that, well no, you won't actually be forgiven if you mess up.Staffman
@Soukup Your 'box' metaphor for laziness does indeed seem reasonable. Looping is not caused by a mere cycle of reference, but only by a cycle in demands for values.Myocarditis
@C. A. McCann Re forgiveness vs permission, work is emerging (e.g. my own e-pig.org/epilogue/?p=186) to get more of a handle on laziness in the type system by means of a modal type operator capturing the notion of "a value tomorrow". We may yet manage to account for typical uses of laziness with a system of permission, rather than a footnote saying "but don't screw up". I'm optimistic.Myocarditis
I take it you let e-pig.org expire? Does your blog have a new home?Loera
Could you explain causal order a little bit more?Genesia
