They're densely packed with assumptions that I've read the latest in whatever branch of discrete math, category theory or abstract algebra is popular this week. (If I never read the words "consult the paper whatever for details of implementation" again, it will be too soon.)
Er, and what about simple first-order logic? forall is pretty clearly a reference to universal quantification, and in that context the term existential makes more sense as well, though it would be less awkward if there were an exists keyword. Whether quantification is effectively universal or existential depends on where the quantifier is placed relative to the variables it binds, that is, on which side of a function arrow those variables end up, and it's all a bit confusing.
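To make the placement issue concrete, here's a minimal sketch in GHC Haskell (lengthOf and emptyAt are invented names; the ExplicitForAll extension just lets us write the quantifiers out):

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- a occurs only to the left of the arrow, so the caller may pass a
-- list of any element type; inside the function we only know "some
-- type a exists". Read it as (exists a. [a]) -> Int.
lengthOf :: forall a. [a] -> Int
lengthOf = length

-- With a to the right of the arrow the reading is plainly universal:
-- the caller picks a and gets a value at that type.
emptyAt :: forall a. Int -> [a]
emptyAt _ = []
```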
So, if that doesn't help, or if you just don't like symbolic logic, from a more functional programming-ish perspective you can think of type variables as just being (implicit) type parameters to the function. Functions taking type parameters in this sense are traditionally written using a capital lambda for whatever reason, which I'll write here as /\.
So, consider the id function:
id :: forall a. a -> a
id x = x
We can rewrite it as lambdas, moving the "type parameter" out of the type signature and adding inline type annotations:
id = /\a -> (\x -> x) :: a -> a
Here's the same thing done to const:
const = /\a b -> (\x y -> x) :: a -> b -> a
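This analogy is not just notation: GHC's TypeApplications extension lets you pass those implicit type parameters explicitly at call sites (a sketch; ex is a made-up name):

```haskell
{-# LANGUAGE TypeApplications #-}

-- const :: forall a b. a -> b -> a; the @-arguments fill in the /\
-- parameters a and b explicitly.
ex :: Int
ex = const @Int @Char 5 'x'
```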
So your bar function might be something like this:
bar = /\a -> (\f -> ('t', True)) :: (a -> a) -> (Char, Bool)
Note that the type of the function given to bar as an argument depends on bar's type parameter. Consider if you had something like this instead:
bar2 = /\a -> (\f -> (f 't', True)) :: (a -> a) -> (Char, Bool)
Here bar2 is applying the function to something of type Char, so giving bar2 any type parameter other than Char will cause a type error.
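In real GHC syntax the same contrast looks like this (a sketch; ex1 is a made-up name, and the call site shows the caller instantiating a):

```haskell
{-# LANGUAGE ScopedTypeVariables, TypeApplications #-}

-- bar never applies f, so the caller may instantiate a at any type.
bar :: forall a. (a -> a) -> (Char, Bool)
bar _ = ('t', True)

ex1 :: (Char, Bool)
ex1 = bar @Int (+ 1)

-- bar2 applies f to 't' :: Char, which forces a ~ Char; keeping the
-- polymorphic signature forall a. (a -> a) -> (Char, Bool) here would
-- be rejected by the type checker.
bar2 :: (Char -> Char) -> (Char, Bool)
bar2 f = (f 't', True)
```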
On the other hand, here's what foo might look like:
foo = (\f -> (f Char 't', f Bool True))
Unlike bar, foo doesn't actually take any type parameters at all! It takes a function that itself takes a type parameter, then applies that function to two different types.
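This is exactly what GHC's RankNTypes extension expresses: the forall moves inside the argument's parentheses, and GHC instantiates f at Char and Bool at the two call sites without any explicit type application:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The argument carries its own forall (a rank-2 type), so foo may use
-- it at several different types in one body.
foo :: (forall a. a -> a) -> (Char, Bool)
foo f = (f 't', f True)
```

Since id is essentially the only total function of type forall a. a -> a, foo id evaluates to ('t', True).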
So when you see a forall in a type signature, just think of it as a lambda expression for type signatures. Just like regular lambdas, the scope of a forall extends as far to the right as possible, up to enclosing parentheses, and just like variables bound in a regular lambda, the type variables bound by a forall are only in scope within the quantified expression.
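A small sketch of the scoping rule (k1 and k2 are hypothetical names; RankNTypes lets us write the nested quantifier explicitly):

```haskell
{-# LANGUAGE RankNTypes #-}

-- The forall's scope runs all the way to the end of the signature...
k1 :: forall a b. a -> b -> a
k1 x _ = x

-- ...so pushing a quantifier past an arrow gives the very same type:
k2 :: forall a. a -> forall b. b -> a
k2 x _ = x
```

Only parentheses around the quantified part, as in (forall a. a -> a) -> (Char, Bool), cut the scope short and change the type.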
Post scriptum: Perhaps you might wonder: now that we're thinking about functions taking type parameters, why can't we do something more interesting with those parameters than put them into a type signature? The answer is that we can!
A function that puts type variables together with a label and returns a new type is a type constructor, which you could write something like this:
Either = /\a b -> ...
But we'd need completely new notation, because the way such a type is written, like Either a b, is already suggestive of "apply the function Either to these parameters".
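In fact no new notation is needed at the definition site: a data declaration binds the /\ parameters implicitly, and GHCi reports a function-like kind for the constructor (MyEither is a made-up stand-in for Either):

```haskell
-- :kind Either in GHCi prints something like Either :: * -> * -> *,
-- i.e. a two-argument function at the type level.
data MyEither a b = MyLeft a | MyRight b
  deriving (Eq, Show)

swapEither :: MyEither a b -> MyEither b a
swapEither (MyLeft x)  = MyRight x
swapEither (MyRight y) = MyLeft y
```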
On the other hand, a function that sort of "pattern matches" on its type parameters, returning different values for different types, is a method of a type class. A slight expansion of my /\ syntax above suggests something like this:
fmap = /\f a b -> case f of
    Maybe -> (\g x -> case x of
        Just y  -> Just b (g y)
        Nothing -> Nothing b) :: (a -> b) -> Maybe a -> Maybe b
    [] -> (\g x -> case x of
        (y:ys) -> g y : fmap [] a b g ys
        []     -> [] b) :: (a -> b) -> [a] -> [b]
Personally, I think I prefer Haskell's actual syntax...
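For comparison, here's the actual Haskell encoding of that dispatch: instance selection does the type-level "pattern match" (MyFunctor and myFmap are renamed to avoid clashing with the Prelude):

```haskell
class MyFunctor f where
  myFmap :: (a -> b) -> f a -> f b

-- Each instance is one "branch" of the case-on-types above.
instance MyFunctor Maybe where
  myFmap g (Just y) = Just (g y)
  myFmap _ Nothing  = Nothing

instance MyFunctor [] where
  myFmap g (y:ys) = g y : myFmap g ys
  myFmap _ []     = []
```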
A function that "pattern matches" its type parameters and returns an arbitrary, existing type is a type family or functional dependency; in the former case, it even already looks a great deal like a function definition.
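A minimal sketch of the former, assuming the TypeFamilies extension (Elem and firstElem are invented names):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- A closed type family: a function from types to types, with clauses
-- matched top to bottom like an ordinary function definition.
type family Elem c where
  Elem [a]       = a
  Elem (Maybe a) = a

-- Elem [a] reduces to a, so this is just head with a fancier type.
firstElem :: [a] -> Elem [a]
firstElem = head
```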