I have little experience with algebraic data types because I work in a language without native support for them. One can usually use continuation-passing style (CPS) to get a roughly similar experience, but handling CPS-encoded types is less comfortable.
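For concreteness, this is roughly what I mean by faking a sum type with CPS: a Scott-encoded Either, written here in Haskell just for comparison (the names are mine).

{-# LANGUAGE RankNTypes #-}

-- A Scott/CPS encoding of Either: instead of pattern matching on
-- constructors, the consumer supplies one continuation per case.
newtype EitherCPS e a = EitherCPS
  { runEitherCPS :: forall r. (e -> r) -> (a -> r) -> r }

leftCPS :: e -> EitherCPS e a
leftCPS e = EitherCPS (\onLeft _ -> onLeft e)

rightCPS :: a -> EitherCPS e a
rightCPS a = EitherCPS (\_ onRight -> onRight a)

-- "Pattern matching" is just applying the continuations.
toEither :: EitherCPS e a -> Either e a
toEither x = runEitherCPS x Left Right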
Considering this, why would a library like Parsec use CPS?
newtype ParsecT s u m a
    = ParsecT {unParser :: forall b .
                 State s u
              -> (a -> State s u -> ParseError -> m b) -- consumed ok
              -> (ParseError -> m b)                   -- consumed err
              -> (a -> State s u -> ParseError -> m b) -- empty ok
              -> (ParseError -> m b)                   -- empty err
              -> m b
              }
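For comparison, the same four outcomes could also be expressed with plain ADTs. A direct-style sketch (the names ParsecD, Consumed and Reply are mine, not Parsec's; State s u and ParseError are the types from the definition above) might look like this:

-- Hypothetical direct-style encoding of the same four outcomes:
-- Consumed/Empty records whether input was consumed,
-- Ok/Error records whether the parser succeeded.
data Consumed a
  = Consumed a
  | Empty a

data Reply s u a
  = Ok a (State s u) ParseError
  | Error ParseError

newtype ParsecD s u m a
  = ParsecD { runParsecD :: State s u -> m (Consumed (Reply s u a)) }

The four continuations of ParsecT then correspond one-to-one to the combinations Consumed Ok, Consumed Error, Empty Ok and Empty Error.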
One clue would be the try parser, which rules out the consumed-error case by passing the empty-error continuation in both error positions:
try :: ParsecT s u m a -> ParsecT s u m a
try p =
    ParsecT $ \s cok _ eok eerr ->
    unParser p s cok eerr eok eerr
    --               ^^^^     ^^^^
This is possible because both continuations, cerr and eerr, have the same type and differ only in their position, which reminds me of structural typing. While I think you cannot do this trick directly with ADTs, there is probably a way to implement the same behavior with them (a direct-style sketch of try follows below). Apart from that, a single combinator wouldn't justify the far-reaching decision to rely on CPS. So why was this decision made?
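For reference, this is the kind of ADT-based try I imagine would work, using the hypothetical ParsecD type sketched above (again, only a sketch):

-- Hypothetical direct-style counterpart of try for the ParsecD sketch:
-- a failure that consumed input is re-tagged as an empty failure,
-- which is what the CPS version achieves by passing eerr in cerr's slot.
tryD :: Monad m => ParsecD s u m a -> ParsecD s u m a
tryD p = ParsecD (\s -> do
  r <- runParsecD p s
  return (case r of
    Consumed (Error err) -> Empty (Error err)
    other                -> other))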
The serialise package uses CPS, but takes other steps to avoid relying too much on the optimizer (maybe eschewing specialization?). Then again, parsing is not the same as deserialization. – Pendulum