Usually monad stuff is easier to grasp when you start with "collection-like" monads as examples. Imagine you calculate the distance between two points:
data Point = Point Double Double

distance :: Point -> Point -> Double
distance (Point x1 y1) (Point x2 y2) = sqrt ((x1-x2)^2 + (y1-y2)^2)
Now you may have a certain context. E.g. one of the points may be "illegal" because it is outside some bounds (e.g. off the screen). So you wrap your existing computation in the Maybe monad:
distance :: Maybe Point -> Maybe Point -> Maybe Double
distance p1 p2 = undefined
You have exactly the same computation, but with the additional feature that there may be "no result" (encoded as Nothing).
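Written out by hand, that could look like this (just a sketch; the onScreen check and the 800x600 bounds are made up for the example):

onScreen :: Point -> Maybe Point
onScreen p@(Point x y)
  | x >= 0 && x <= 800 && y >= 0 && y <= 600 = Just p
  | otherwise                                = Nothing

distance :: Maybe Point -> Maybe Point -> Maybe Double
distance (Just (Point x1 y1)) (Just (Point x2 y2)) = Just (sqrt ((x1-x2)^2 + (y1-y2)^2))
distance _ _                                       = Nothing   -- no result if either point is missing

distance (onScreen (Point 3 4)) (onScreen (Point 0 0))     -- Just 5.0
distance (onScreen (Point 3 4)) (onScreen (Point 900 0))   -- Nothing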
Or you have two groups of "possible" points and need their mutual distances (e.g. to later pick the shortest connection). Then the list monad is your "context":
distance :: [Point] -> [Point] -> [Double]
distance p1 p2 = undefined
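Spelled out (again just a sketch), this is the distance for every combination of one point from each group, e.g. as a list comprehension:

distance :: [Point] -> [Point] -> [Double]
distance ps1 ps2 = [ sqrt ((x1-x2)^2 + (y1-y2)^2) | Point x1 y1 <- ps1, Point x2 y2 <- ps2 ]

distance [Point 0 0, Point 1 0] [Point 3 4]   -- [5.0, 4.47...], one distance per pair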
Or the points are entered by a user, which makes the calculation "nondeterministic" (in the sense that it depends on things in the outside world, which may change). Then the IO monad is your friend:
distance :: IO Point -> IO Point -> IO Double
distance p1 p2 = undefined
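One way to get such points (a sketch; readPoint is a made-up helper, not a library function):

readPoint :: IO Point
readPoint = do
  putStrLn "Enter x and y:"
  x <- readLn
  y <- readLn
  return (Point x y)

-- distance readPoint readPoint :: IO Double
-- (using e.g. the generic definition at the end of this answer)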
The computation always remains the same, but it happens to take place in a certain "context", which adds some useful aspect (failure, multiple values, nondeterminism). You can even combine these contexts (monad transformers), as sketched below.
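For example, a point that comes from the outside world and may additionally be missing could live in MaybeT IO (from the transformers package); that combined type is again a monad, so everything below applies to it as well:

import Control.Monad.Trans.Maybe (MaybeT)

distance :: MaybeT IO Point -> MaybeT IO Point -> MaybeT IO Double
distance p1 p2 = undefined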
You can write a single definition that unifies all the versions above and works for any monad:
distance :: Monad m => m Point -> m Point -> m Double
distance p1 p2 = do
  Point x1 y1 <- p1                        -- "extract" the points from whatever context m is
  Point x2 y2 <- p2
  return $ sqrt ((x1-x2)^2 + (y1-y2)^2)    -- and put the plain result back into that context
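For example (using the readPoint sketch from above; results as comments):

distance (Just (Point 0 0)) (Just (Point 3 4))   -- Just 5.0
distance (Just (Point 0 0)) Nothing              -- Nothing
distance [Point 0 0, Point 1 0] [Point 3 4]      -- [5.0, 4.47...]
distance readPoint readPoint                     -- IO Double: asks for two points, returns their distance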
That proves that our computation is really independent of the actual monad, which leads to formulations like "x is computed in(side) the y monad".