Lazy Evaluation: Why is it faster, advantages vs disadvantages, mechanics (why it uses less CPU; examples?) and simple proof of concept examples [closed]
Lazy evaluation is said to be a way of delaying a process until the first time it is needed. This tends to avoid repeated evaluations, and that's why I would imagine it performs a lot faster. Functional languages like Haskell (and JavaScript..?) have this functionality built-in.

However, I don't understand how and why other 'normal' approaches (that is, the same functionality but without lazy evaluation) are slower. How and why do these other approaches do repeated evaluations? Can someone elaborate on this by giving simple examples and explaining the mechanics of each approach?

Also, according to the Wikipedia page about lazy evaluation, these are said to be the advantages of this approach:

  1. Performance increases by avoiding needless calculations, and error conditions in evaluating compound expressions
  2. The ability to construct potentially infinite data structures
  3. The ability to define control flow (structures) as abstractions instead of primitives

However, can't we just control which calculations are needed and avoid repeating the same ones (1)? We can use, e.g., a linked list to create an infinite data structure (2). And can't we do (3) already? We can define classes/templates/objects and use those instead of primitives (e.g. in JavaScript).

Additionally, it seems to me that (at least from the cases I have seen) lazy evaluation goes hand-in-hand with recursion and with the notions of 'head' and 'tail' (among others). Surely there are cases where recursion is useful, but is lazy evaluation something more than that? More than a recursive approach to solving a problem? Streamjs is a JavaScript library that uses recursion along with some other simple operations (head, tail, etc.) to perform lazy evaluation.

It seems I can't get my head around it...

Thanks in advance for any contribution.

Malocclusion answered 11/7, 2014 at 18:48 Comment(8)
This may be helpful to you. – Despoliation
I believe that this question is off-topic on SO, but on-topic on Programmers SE. – Bik
This is way too broad. Perhaps you want to study concrete examples from Haskell tutorials and similar sources, and compare them to equivalent examples in eager languages. If you formulate your questions in terms of these concrete examples, they are more likely to get concrete answers. – Horten
Thanks @Despoliation. What I can't find are examples showing how the mechanics work, e.g. why some values are not computed when they are not going to be evaluated, as in the example on the page you provided. – Malocclusion
@Stelios This question addresses how lazy evaluation makes sorting efficient. – Despoliation
The canonical example of the benefits of lazy evaluation is the Fibonacci sequence (the code given in the question). – Gearing
Probably too broad. But here are two starting points for you: 1) Okasaki's Purely Functional Data Structures discusses exactly this. 2) SICP has an excellent part about streams as abstractions; IIRC, this is the right lecture, but you might have to look a bit back and forth. – Flutist
Laziness encourages you to structure your programs in terms of building, transforming, and consuming data structures, and sometimes makes such programs efficient. – Tegular
I'll show examples in both Python 2.7 and Haskell.

Say, for example, you wanted to do a really inefficient sum of all the numbers from 0 to 10,000,000. You could do this with a for loop in Python as

total = 0
for i in range(10000000):
    total += i
print total

On my computer, this takes about 1.3s to execute. If I instead change range to xrange (the generator form of range, which lazily produces a sequence of numbers), it takes 1.2s, only slightly faster. However, if I check the memory used (using the memory_profiler package), the version with range uses about 155MB of RAM, while the xrange version uses only 1MB of RAM (both numbers not including the ~11MB Python itself uses). This is an incredibly dramatic difference, and we can see where it comes from with this tool as well:

Mem usage     Increment       Line Contents
===========================================
10.875 MiB    0.004 MiB       total = 0
165.926 MiB  155.051 MiB       for i in range(10000000):
165.926 MiB    0.000 MiB           total += i
                               return total

This says that before we started we were using 10.875MB, total = 0 added 0.004MB, and then for i in range(10000000): added 155.051MB when it generated the entire list of numbers [0..9999999]. If we compare to the xrange version:

Mem usage     Increment       Line Contents
===========================================
11.000 MiB    0.004 MiB       total = 0
11.109 MiB    0.109 MiB       for i in xrange(10000000):
11.109 MiB    0.000 MiB           total += i
                              return total

So we started with 11MB and for i in xrange(10000000): added only 0.109MB. This is a huge memory saving from adding just a single letter to the code. While this example is fairly contrived, it shows how not computing a whole list until an element is needed can make things a lot more memory efficient.


Python has iterators and generators which act as a sort of "lazy" programming for when you need to yield sequences of data (although there's nothing stopping you from using them for single values), but Haskell has laziness built into every value in the language, even user-defined ones. This lets you take advantage of things like data structures that won't fit in memory, without having to program complicated ways around that fact. The canonical example would be the Fibonacci sequence:

fibs = 1 : 1 : zipWith (+) fibs (tail fibs)

which very elegantly defines a recursive infinite list generating all Fibonacci numbers. It's CPU efficient because all values are cached, so each element only has to be computed once (compared to a naive recursive implementation) [1], but if you calculate too many elements your computer will eventually run out of RAM, because you're now storing this huge list of numbers. This is an example where lazy programming lets you have CPU efficiency, but not RAM efficiency. There is a way around this, though. If you were to write

fib :: Int -> Integer
fib n = let fibs = 1 : 1 : zipWith (+) fibs (tail fibs) in fibs !! n

then this runs in near-constant memory, and does so very quickly, but memoization is lost as subsequent calls to fib have to recompute fibs.
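
As a quick illustration (a GHCi session sketch, assuming the fibs definition above is in scope), asking for a prefix of the infinite list forces only that prefix:

ghci> take 10 fibs
[1,1,2,3,5,8,13,21,34,55]
ghci> fibs !! 30    -- only the first 31 elements are ever computed
1346269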

A more complex example can be found here, where the author shows how to use lazy programming and recursion in Haskell to perform dynamic programming with arrays, a feat that most people initially think is very difficult and requires mutation, but Haskell manages to do very easily with "tying the knot" style recursion. It results in both CPU and RAM efficiency, and does so in fewer lines than I'd expect in C/C++.
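
A minimal sketch of that "tying the knot" idea (not the linked article's code; the name fibArray is made up for illustration) uses a lazy array whose cells refer back into the array itself, so each cell is computed at most once, on demand:

import Data.Array

-- Each cell of `memo` is a thunk that may refer back into `memo` itself;
-- laziness fills the cells in dependency order, and each is computed once.
fibArray :: Int -> Integer
fibArray n = memo ! n
  where
    memo = listArray (0, n) [step i | i <- [0 .. n]]
    step 0 = 1
    step 1 = 1
    step i = memo ! (i - 1) + memo ! (i - 2)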


All this being said, there are plenty of cases where lazy programming is annoying. Often you can build up huge numbers of thunks instead of computing things as you go (I'm looking at you, foldl), and some strictness has to be introduced to attain efficiency. It also bites a lot of people with IO, when you read a file to a string as a thunk, close the file, and then try to operate on that string. It's only after the file is closed that the thunk gets evaluated, causing an IO error and crashing your program. As with anything, lazy programming is not without its flaws, gotchas, and pitfalls. It takes time to learn how to work with it well and to know what its limitations are.
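
As a small sketch of the foldl problem (the numbers here are only illustrative): the lazy foldl suspends every addition, whereas Data.List.foldl' forces the accumulator at each step and runs in constant space.

import Data.List (foldl')

-- foldl  (+) 0 [1..10000000] builds the thunk ((((0+1)+2)+3)+...) before
-- anything is forced, which can exhaust stack or heap.
-- foldl' (+) 0 [1..10000000] evaluates the running total at every step.
main :: IO ()
main = print (foldl' (+) 0 [1 .. 10000000 :: Integer])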


[1] By "naive recursive implementation", I mean implementing the Fibonacci sequence as

fib :: Integer -> Integer
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)

With this implementation you can see the mathematical definition very clearly; it's very much in the style of inductive proofs: you show your base cases and then the general case. However, if I call fib 5, this will "expand" into something like

fib 5 = fib 4 + fib 3
      = fib 3 + fib 2 + fib 2 + fib 1
      = fib 2 + fib 1 + fib 1 + fib 0 + fib 1 + fib 0 + fib 1
      = fib 1 + fib 0 + fib 1 + fib 1 + fib 0 + fib 1 + fib 0 + fib 1
      = 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
      = 8

Instead, we'd like to share some of those computations, so that fib 3 only gets computed once, fib 2 only gets computed once, and so on.

By using a recursively defined list in Haskell, we can avoid this. Internally, this list is represented something like this:

fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
     = 1 : 1 : zipWith (+) (f1:f2:fs) (f2:fs)
       ^--------------------^  ^       ^
           ^-------------------|-------|
     = 1 : 1 : 2 : zipWith (+) (f2:f3:fs) (f3:fs)
           ^--------------------^  ^       ^
               ^-------------------|-------|
     = 1 : 1 : 2 : 3 : zipWith (+) (f3:f4:fs) (f4:fs)
               ^--------------------^  ^       ^
                   ^-------------------|-------|

So hopefully you can see the pattern forming here: as the list is built, it keeps pointers back to the last two elements generated in order to compute the next element. This means that for the nth element computed, there are n-2 additions performed. Even for the naive fib 5, you can see that there are more additions performed than that, and the number of additions will continue to grow exponentially. This definition is made possible through laziness and recursion, letting us turn an O(2^n) algorithm into an O(n) algorithm, but we have to give up RAM to do so. If this is defined at the top level, then values are cached for the lifetime of the program. It does mean that if you need to refer to the 1000th element repeatedly, you don't have to recompute it, just index it.

On the other hand, the definition

fib :: Int -> Integer
fib n =
    let fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
    in fibs !! n

uses a local copy of fibs every time fib is called. We don't get caching between calls to fib, but we do get local caching, leaving our complexity O(n). Additionally, GHC is smart enough to know that we don't have to keep the beginning of the list around after we've used it to calculate the next element, so as we traverse fibs looking for the nth element, it only needs to hold on to 2-3 elements and a thunk pointing at the next element. This saves us RAM while computing it, and since it isn't defined at a global level it doesn't eat up RAM over the lifetime of the program. It's a tradeoff between when we want to spend RAM and CPU cycles, and different approaches are better for different situations. These techniques are applicable to much of Haskell programming in general, not just for this sequence!

Homesick answered 11/7, 2014 at 20:6 Comment(2)
Regarding IO and the file closing problem: I'd say this is more an indictment of the so-called "lazy IO" than it is one of laziness in general. What's actually happening is the storing of IO actions inside lazy but supposedly pure values and running them via a variant of unsafePerformIO as the values are scrutinized. That kind of thing just doesn't happen if your suspended evaluations are really pure. – Cacie
Excellent analysis. Can you elaborate more on "so each element only has to be computed once (compared to a naive recursive implementation)"? So normal recursion will compute elements more than once? – Malocclusion
Lazy evaluation is not, in general, faster. When it's said that lazy evaluation is more efficient, it is because when you consider the Lambda Calculus (which is essentially what your Haskell programs are once the compiler finishes de-sugaring them) as a system of terms and reduction rules, applying those rules in the order specified by a call-by-name-with-sharing (call-by-need) evaluation policy always performs the same number of reductions or fewer than applying them in the order specified by call-by-value evaluation.

The reason that this theoretical result does not make lazy evaluation faster in general is that the translation to a linear sequential machine model with a memory access bottleneck tends to make all the reductions performed much more expensive! Initial attempts at implementing this model on computers led to programs that executed orders of magnitude more slowly than typical eagerly-evaluating language implementations. It has taken a lot of research and engineering into techniques for implementing lazy evaluation efficiently to get Haskell performance to where it is today. And the fastest Haskell programs take advantage of a form of static analysis called "strictness analysis" which attempts to determine at compile time which expressions will always be needed so that they can be evaluated eagerly rather than lazily.

There are still some cases where straightforward implementations of algorithms will execute faster in Haskell due to only evaluating terms that are needed for the result, but even eager languages always have some facility for evaluating some expressions by need. Conditionals and short-circuiting boolean expressions are ubiquitous examples, and in many eager languages, one can also delay evaluation by wrapping an expression in an anonymous function or some other sort of delaying form. So you can typically use these mechanisms (or even more awkward rewrites) to avoid evaluating expensive things that won't be necessary in an eager language.
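
To make the contrast concrete (a minimal illustrative sketch): in a lazy language those delaying forms aren't needed, because every function already receives its arguments unevaluated, so even a conditional can be written as an ordinary function. This is also essentially what "control flow as abstractions" in the question's list refers to.

-- A user-defined if/then/else: the branch that isn't taken is never evaluated.
if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ f = f

main :: IO ()
main = print (if' True 42 (error "never evaluated"))   -- prints 42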

The real advantage of Haskell's lazy evaluation is not a performance-related one. Haskell makes it easier to pull expressions apart, re-combine them in different ways, and generally reason about code as if it were a system of mathematical equations instead of a sequentially-evaluated set of machine instructions. Because the language does not specify an evaluation order, its developers had to avoid side-effects that rely on a simple evaluation ordering, such as mutation or IO. This in turn led to a host of elegant abstractions that are generally useful and might not have been developed into usability otherwise.

The state of Haskell is now such that you can write high-level, elegant algorithms that make better re-use of existing higher-order functions and data structures than in nearly any other high-level typed language. And once you become familiar with the costs and benefits of lazy evaluation and how to control when it occurs, you can ensure that the elegant code also performs very well. But getting the elegant code to a state of high performance is not necessarily automatic and may require a bit more thought than in a similar but eagerly-evaluated language.
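
For instance (a small sketch of one common way to control when evaluation happens; the name sumTo is made up): a bang pattern forces the accumulator on every iteration, trading laziness for predictable space use.

{-# LANGUAGE BangPatterns #-}

-- The bang on `acc` evaluates the running total at each step instead of
-- building a chain of suspended additions.
sumTo :: Int -> Int
sumTo n = go 0 0
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 10000000)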

Cacie answered 11/7, 2014 at 21:12 Comment(2)
Thanks for the answer. So in terms of lazy evaluation, you are suggesting that we can achieve (more or less) the same in an eagerly-evaluating language by having conditionals (booleans) etc. So the advantage of a lazy-evaluation language (that is, one which has this built-in, e.g. Haskell) is that it will do it for us: identify which expressions don't need to be evaluated unless, of course, they are requested. Is that right? – Malocclusion
Lazy evaluation does not evaluate arguments until they are needed inside the body of the function. This has a performance cost, which outweighs the occasional situation in which an expensive argument didn't really need to be evaluated. Eager languages have ways to avoid evaluating things too, but they are typically a little more awkward. So on-demand evaluation is a win in some ways, but not necessarily a performance win! – Cacie
The concept of "lazy evaluation" is only about 1 thing, and only about that 1 thing:

The ability to postpone evaluation of something until needed

That's it.

Everything else in that wikipedia article follows from it.

Infinite data structures? Not a problem. We'll just make sure we don't actually figure out what the next element is until you actually ask for it. For instance, a sequence where each next value is just the current value plus 1 is infinite; if you create a list containing all those values, it's going to fill the available memory in your computer. If you only figure out what the next value is when asked, not so much.
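
In Haskell terms (a tiny illustrative sketch, not part of this answer's original text), that "keep adding 1" sequence is just an infinite lazy list, and only the part you ask for is ever computed:

naturals :: [Integer]
naturals = iterate (+1) 0        -- conceptually 0, 1, 2, 3, ... without end

main :: IO ()
main = print (take 5 naturals)   -- forces only five cells: [0,1,2,3,4]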

Needless calculations? Sure. You can return an object containing a lot of properties that, when asked, will provide you with some value. If you don't ask (i.e. you never inspect the value of a given property), the calculation necessary to figure out that value will never be done.

Control flow ... ? Not at all sure what that is about.

The purpose of lazy evaluation is exactly as I stated to begin with: to avoid evaluating something until you actually need it. Be it the next value of a sequence, the value of a property, or whatever, adding support for lazy evaluation might conserve CPU cycles.

What would the alternative be?

I want to return an object to the calling code, containing any number of properties, some of which might be expensive to calculate. Without lazy evaluation, I would have to calculate the values of all those properties either:

  1. Before constructing the object
  2. After constructing the object, on the first time you inspected a property
  3. After constructing the object, every time you inspected that property

With lazy evaluation you usually end up with number 2. You postpone evaluating the value of that property until some code inspects it. Note that you might cache the value once it has been evaluated, which would save CPU cycles when inspecting the same property more than once, but that is caching: not quite the same thing, though it's in the same line of work, namely optimization.
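
Here is a sketch of that "number 2" behaviour in Haskell (the record and field names are made up for illustration): record fields are stored as thunks, so an expensive field is computed only if some code inspects it, and the result is then kept for later inspections.

import Debug.Trace (trace)

-- Both fields are stored as unevaluated thunks when the record is built.
data Report = Report { cheap :: Integer, expensive :: Integer }

mkReport :: Integer -> Report
mkReport x = Report
  { cheap     = x + 1
  , expensive = trace "computing expensive field..." (sum [1 .. 1000000] + x)
  }

main :: IO ()
main = do
  let r = mkReport 1
  print (cheap r)       -- the expensive field is never forced here, no trace output
  print (expensive r)   -- forced now: the trace message fires once ...
  print (expensive r)   -- ... and the cached result is reused, no second trace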

Gilgilba answered 11/7, 2014 at 20:29 Comment(4)
So say that you created a function fib which returns the first 100 Fibonacci numbers. If we were to access the 50th value, that would have been precomputed and we would retrieve the value immediately (all other values computed and stored anyway: more CPU + RAM). A lazy implementation will compute that value at the time we need it, i.e. when we request it. So it won't give us the value immediately; it needs to compute it first. Isn't this going to be slower during that time? – Malocclusion
In a functional language, returning a sequence of numbers and then indexing to one of them is done by composing two functions; the composition of the two evaluates to the value you're interested in. An eager evaluation of the composition will have fib evaluate the full sequence before the indexing function body gets it. A lazy evaluation will start to index before the sequence is evaluated. If fib is implemented via a recursive list builder and the index operation is implemented as a guarded recursive list consumer, their executions will be interleaved. – Cacie
If the fib implementation is too lazy in the expressions used to calculate the numbers, it will actually be a list of unevaluated expressions; the index function will trigger the building of each new list element, but it only looks at the value of the 50th one. The list structure will be garbage at that point, but only 50 elements will have been built. The expression for the fib value will refer to previous unevaluated expressions, though, so the memory savings will not be big, but the expression will only have to calculate up to fib 50 instead of fib 100. It evaluates when you look at the value. – Cacie
So in the lazy version, you index the list first, then return the calculation stored there. It's only built to the point you asked for. It's evaluated when you examine it. In the eager version, all 100 are built and stored in a list; indexing then traverses and returns the 50th value, then the whole list becomes garbage. – Cacie
