Apparently it is possible to implement Haskell such that it evaluates eagerly without changing the semantics of the language at all. If that is true, how are infinite data structures handled?
http://csg.csail.mit.edu/pubs/haskell.html
Therefore, a great deal of time is spent creating and destroying suspended pieces of computation (thunks). All too often, these computations are simple enough that it would be just as easy to evaluate them instead. Faxen and others have used static analysis to expose such opportunities for eagerness. We instead propose using eagerness everywhere, while using mechanisms which permit us to recover if our program is too eager.
The key phrase there is "we have mechanisms to recover if our program is too eager". What are these mechanisms? How do they allow for infinite data structures and the other aspects of lazy evaluation that I've been led to believe are impossible in an eager language?
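For concreteness, here is a minimal sketch (my own example, not taken from the paper) of the kind of program I mean — an infinite data structure that works fine under lazy evaluation but would seem to diverge if evaluated eagerly all the way down:

```haskell
-- An infinite list of naturals. Under lazy evaluation only the
-- prefix that is actually demanded ever gets computed.
nats :: [Integer]
nats = [0 ..]

-- 'take' demands just the first five elements, so this terminates
-- and prints [0,1,2,3,4] despite 'nats' being infinite.
main :: IO ()
main = print (take 5 nats)
```

A naively eager evaluator would try to build all of `nats` before `take` ever ran, so presumably the paper's recovery mechanisms have to kick in on programs like this one.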