I learnt not so long ago about Monoidal being an alternative way to present Applicative. The Typeclassopedia poses an interesting exercise:
- (Tricky) Prove that given your implementations from the first exercise [pure and (<*>) written down using unit and (**), and the other way around], the usual Applicative laws and the Monoidal laws stated above are equivalent.
Here are these classes and laws:
-- A note from https://wiki.haskell.org/Typeclassopedia#Alternative_formulation:
-- In this and the following laws, ≅ refers to isomorphism rather than equality.
-- In particular we consider (x,()) ≅ x ≅ ((),x) and ((x,y),z) ≅ (x,(y,z)).

-- Monoidal.
class Functor f => Monoidal f where
  unit :: f ()
  (**) :: f a -> f b -> f (a,b)

-- unit ** v ≅ v                 - Left Identity.
-- u ** unit ≅ u                 - Right Identity.
-- u ** (v ** w) ≅ (u ** v) ** w - Associativity.

-- Applicative.
class Functor f => Applicative f where
  pure  :: a -> f a
  infixl 4 <*>, ...
  (<*>) :: f (a -> b) -> f a -> f b
  ...

-- pure id <*> v = v                            - Identity.
-- pure f <*> pure x = pure (f x)               - Homomorphism.
-- u <*> pure y = pure ($ y) <*> u              - Interchange.
-- u <*> (v <*> w) = pure (.) <*> u <*> v <*> w - Composition.
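For concreteness, here is what a Monoidal instance could look like for Maybe (my own sketch, not from the Typeclassopedia; note that this (**) is the class method above, which clashes with the Prelude's (**) for Floating, so a real module would have to hide one of them):

instance Monoidal Maybe where
  unit = Just ()
  Just a ** Just b = Just (a, b)   -- pair up the values when both are present
  _      ** _      = Nothing       -- either failure makes the whole thing fail

Here unit ** Just x = Just ((), x), which is Just x up to the isomorphism ((), x) ≅ x, and similarly for the other laws.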
Writing the combinators down in terms of each other is no big deal:
unit = pure ()
f ** g = (,) <$> f <*> g   -- i.e. liftA2 (,) f g
pure x = const x <$> unit
f <*> g = uncurry ($) <$> (f ** g)
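To see the translations in action, here is a quick sanity check with the Maybe instance sketched above (my own example):

Just (+1) <*> Just 2
  = uncurry ($) <$> (Just (+1) ** Just 2)   -- definition of <*>
  = uncurry ($) <$> Just ((+1), 2)          -- (**) for Maybe
  = Just 3

pure 5
  = const 5 <$> unit                        -- definition of pure
  = const 5 <$> Just ()                     -- unit for Maybe
  = Just 5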
Here is my understanding of why the laws are telling us the same thing:
u <*> pure y = pure ($ y) <*> u -- Interchange: Applicative law.
The first thing to notice is that ($ y) ≅ y (more formally: (y -> a) -> a ≅ y). With that in mind, the Interchange law simply tells us that (a, b) ≅ (b, a).
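To make this precise, here is the Interchange law rewritten through the translations above (a sketch; the Naturality step is the free theorem for (**) quoted further down):

u <*> pure y
  = uncurry ($) <$> (u ** (const y <$> unit))       -- definitions of <*> and pure
  = (\(f, _) -> f y) <$> (u ** unit)                -- Naturality + functor composition

pure ($ y) <*> u
  = uncurry ($) <$> ((const ($ y) <$> unit) ** u)   -- definitions of <*> and pure
  = (\(_, f) -> f y) <$> (unit ** u)                -- Naturality + functor composition

So Interchange equates pairing with unit on the right with pairing with unit on the left (once the () is discarded), which is where the swap (a, b) ≅ (b, a) comes from.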
pure id <*> v = v -- Identity: Applicative law.
I reckon id to be something of a unit itself, as it is the only inhabitant of the type forall a. a -> a. Therefore, this law gives us the Left Identity:
unit ** v = v -- Left Identity: Monoidal law.
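Spelled out with the same translations (a sketch):

pure id <*> v
  = uncurry ($) <$> ((const id <$> unit) ** v)   -- definitions of <*> and pure
  = snd <$> (unit ** v)                          -- Naturality + functor composition

and Left Identity is exactly the statement that snd <$> (unit ** v) = v, i.e. the isomorphism ((), x) ≅ x.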
Now we can use that (a, b) ≅ (b, a) to write the Right Identity down:
u ** unit = u -- Right Identity: Monoidal law.
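One way to make the swap step concrete (a sketch, using the standard consequence pure f <*> x = f <$> x of the Applicative laws):

u ** unit
  = (,) <$> u <*> pure ()         -- definition of (**)
  = pure ($ ()) <*> ((,) <$> u)   -- Interchange
  = (($ ()) . (,)) <$> u          -- pure f <*> x = f <$> x, functor composition
  = (\x -> (x, ())) <$> u

and the isomorphism (x, ()) ≅ x then gives u ** unit ≅ u.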
The Composition law:
u <*> (v <*> w) = pure (.) <*> u <*> v <*> w -- Composition: Applicative law.
I reckon this law tells the same thing as Associativity for Monoidal:
u ** (v ** w) ≅ (u ** v) ** w
That is, (a, (b, c)) ≅ ((a, b), c); Applicative just adds a layer of application.
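Again as a sketch through the translations (every step is a definition, a functor law, Naturality, or pure f <*> x = f <$> x):

u <*> (v <*> w)
  = uncurry ($) <$> (u ** (uncurry ($) <$> (v ** w)))
  = (\(f, (g, x)) -> f (g x)) <$> (u ** (v ** w))      -- Naturality + functor composition

pure (.) <*> u <*> v <*> w
  = ((.) <$> u) <*> v <*> w                            -- pure f <*> x = f <$> x
  = ((\(f, g) -> f . g) <$> (u ** v)) <*> w            -- definition of <*>, Naturality
  = (\((f, g), x) -> f (g x)) <$> ((u ** v) ** w)      -- definition of <*>, Naturality

So up to re-association of the tuple, both sides apply the same function, which is why Composition corresponds to Associativity.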
So, we have covered all of the Monoidal laws. I believe there is no need to do it the other way around, as we would use the same isomorphisms. But one could notice something odd: we did not use the Homomorphism Applicative law:
pure f <*> pure x = pure (f x)
I tried understanding Homomorphism in terms of the Naturality free theorem for Monoidal:
fmap (g *** h) (u ** v) = fmap g u ** fmap h v
But it seems odd as Homomorphism does not deal with side-effects, yet Naturality works with them just fine.
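For what it is worth, with the Maybe instance sketched above the free theorem is easy to check by hand (*** is from Control.Arrow):

fmap (g *** h) (Just a ** Just b)
  = fmap (g *** h) (Just (a, b))      -- (**) for Maybe
  = Just (g a, h b)
  = Just (g a) ** Just (h b)
  = fmap g (Just a) ** fmap h (Just b)

and both sides are Nothing as soon as either argument is Nothing, so the effect structure is untouched and only the values inside are mapped.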
So, I have 3 questions:
- Is my reasoning right?
- Where does Homomorphism stand in this picture?
- How can we understand the Naturality free theorem in terms of Applicative?
Comments:
- 1. Implementation: Applicative --> Monoidal. 2. Implementation: Monoidal --> Applicative. Now prove each Monoidal law given the Applicative laws and the 1. implementation; then prove each Applicative law given the Monoidal laws and the 2. implementation. In each proof, start with the premise and arrive at the needed conclusion, carefully documenting at each step which known laws you used. It's mechanical. If you want to get a feel for it, do it after you've written it all down. That's what I would do. – Reminiscence
- "($ y) ≅ y" - I don't think this is right at all. – Reminiscence
- From left to right - \t -> t id; from right to left - \t -> ($ t). Where am I wrong? – Idell
- y and ($ y) are values? Perhaps I didn't quite get your meaning there. – Reminiscence
- ... the $ type operator. I guess it was clumsy, but it seemed fitting as an intuition. Is this a bad way to put things? To be honest, all my experience with proofs of this sort is just, as you said, mechanical work. If you could give me an example where such a style is carried out well, I would appreciate it very much. – Idell