Curry-Howard isomorphism

I've searched around the Internet, and I can't find any explanations of CHI which don't rapidly degenerate into a lecture on logic theory which is drastically over my head. (These people talk as if "intuitionistic proposition calculus" is a phrase that actually means something to normal humans!)

Roughly put, CHI says that types are theorems, and programs are proofs of those theorems. But what the hell does that even mean??

So far, I've figured this out:

  • Consider id :: x -> x. Its type says "given that X is true, we can conclude that X is true". Seems like a reasonable theorem to me.

  • Now consider foo :: x -> y. As any Haskell programmer will tell you, this is impossible. You cannot write this function. (Well, without cheating anyway.) Read as a theorem, it says "given that any X is true, we can conclude that any Y is true". This is obviously nonsense. And, sure enough, you cannot write this function.

  • More generally, the function's arguments can be considered "things which are assumed to be true", and the result type can be considered "a thing that is true assuming all the other things are". If a function argument itself has a function type, say x -> y, we can take that as an assumption that X being true implies that Y must be true.

  • For example, (.) :: (y -> z) -> (x -> y) -> x -> z can be taken as "assuming that Y implies Z, that X implies Y, and that X is true, we can conclude that Z is true". Which seems logically sensible to me.
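
To make that concrete, the body of (.) is exactly that chain of reasoning written as a program (compose here is just (.) spelled out):

compose :: (y -> z) -> (x -> y) -> x -> z
compose f g x = f (g x)  -- use "X implies Y" on x, then "Y implies Z" on the result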

Now, what the hell does Int -> Int mean?? o_O

The only sensible answer I can come up with is this: If you have a function X -> Y -> Z, then the type signature says "assuming that it's possible to construct a value of type X, and another of type Y, then it is possible to construct a value of type Z". And the function body describes exactly how you would do this.

That seems to make sense, but it's not very interesting. So clearly there must be more to it than this...

Roof answered 18/4, 2012 at 15:24 Comment(4)
#2969640 – Griseous
Read that before I posted this - and rapidly got lost... :-S – Roof
Just to be fair, most "normal humans" don't look up the Curry-Howard Isomorphism... – Howardhowarth
@Howardhowarth Well, that's fair enough I guess. :-) – Roof

The Curry-Howard isomorphism simply states that types correspond to propositions, and values correspond to proofs.

Int -> Int doesn't really mean anything interesting as a logical proposition. When interpreting something as a logical proposition, you're only interested in whether the type is inhabited (has any values) or not. So Int -> Int just means "given an Int, I can give you an Int", and it is, of course, true. There are many different proofs of it (corresponding to the various different functions of that type), but when taking it as a logical proposition, you don't really care which one you have.
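
For instance, here are two different proofs of that boring proposition (any function of the type will do):

proof1, proof2 :: Int -> Int
proof1 n = n      -- the identity: one proof
proof2 n = n + 1  -- the successor: a different proof of the same proposition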

That doesn't mean interesting propositions can't involve such functions; just that that particular type is quite boring, as a proposition.

For an instance of a function type that isn't completely polymorphic and that has logical meaning, consider p -> Void (for some p), where Void is the uninhabited type: the type with no values (except ⊥, in Haskell, but I'll get to that later). The only way to construct a value of type Void is if you can prove a contradiction (which is, of course, impossible), and since a value of type Void means you've proved a contradiction, you can get any value from it (i.e. there exists a function absurd :: Void -> a). Accordingly, p -> Void corresponds to ¬p: it means "p implies falsehood".
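
In Haskell, Data.Void provides exactly this Void type and the absurd function; the Not synonym below is my own shorthand. A minimal sketch:

import Data.Void (Void, absurd)

type Not p = p -> Void  -- negation: "p implies falsehood"

-- p and ¬p together prove a contradiction:
noContradiction :: (p, Not p) -> Void
noContradiction (x, notX) = notX x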

Intuitionistic logic is just a certain kind of logic that common functional languages correspond to. Importantly, it's constructive: basically, a proof of a -> b gives you an algorithm to compute b from a, which isn't true in regular classical logic (because of the law of excluded middle, which will tell you that something is either true or false, but not why).
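
To see the constructive flavour concretely: the law of excluded middle has no proof as a program, but its double negation does. A sketch, reusing the Not synonym from above:

-- lem :: Either p (Not p)   -- no total definition exists
nnLEM :: Not (Not (Either p (Not p)))
nnLEM k = k (Right (\x -> k (Left x)))  -- refute the refuter with itself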

Even though functions like Int -> Int don't mean much as propositions, we can make statements about them with other propositions. For instance, we can declare the type of equality of two types (using a GADT):

{-# LANGUAGE GADTs #-}

data Equal a b where
    Refl :: Equal a a

If we have a value of type Equal a b, then a is the same type as b: Equal a b corresponds to the proposition a = b. The problem is that we can only talk about equality of types this way. But if we had dependent types, we could easily generalise this definition to work with any values, and so Equal a b would correspond to the proposition that the values a and b are identical. So, for instance, we could write:

type IsBijection (f :: a -> b) (g :: b -> a) =
    (forall x. Equal (f (g x)) x, forall x. Equal (g (f x)) x)

Here, f and g are regular functions, so f could easily have type Int -> Int. Again, Haskell can't express this; you need dependent types to state propositions about values like this.
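
Plain Haskell can still do useful things with Equal, though: pattern matching on Refl teaches the type checker that the two types coincide, which is enough to prove, say, symmetry and transitivity (the names below are my own):

sym :: Equal a b -> Equal b a
sym Refl = Refl

trans :: Equal a b -> Equal b c -> Equal a c
trans Refl Refl = Refl

-- A proof of equality lets us safely convert between the two types:
castWith :: Equal a b -> a -> b
castWith Refl x = x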

Typical functional languages are not really well-suited to writing proofs, not only because they lack dependent types, but also because of ⊥, which, having type a for all a, acts as a proof of any proposition. But total languages like Coq and Agda exploit the correspondence to act as proof systems as well as dependently-typed programming languages.
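
To see how ⊥ spoils the correspondence in Haskell, note that even a plainly false proposition has a nonterminating "proof"; a sketch using the Equal type from above:

bogus :: Equal Int Bool  -- as a proposition: "Int and Bool are the same type"
bogus = bogus            -- typechecks, but the "proof" never delivers a Refl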

Anemone answered 18/4, 2012 at 15:33 Comment(8)
So Int -> Int means "given that Int is inhabited, we can conclude that Int is inhabited"? – Roof
This is a great answer. I think people read "propositions as types" and immediately assume their programs automatically produce interesting proofs. That's not the case; in reality, any interesting proof about a program will involve complicated dependent types forming propositions in intuitionistic logic. – Debunk
@MathematicalOrchid: Yes, given that the function is total. – Paramnesia
Nice answer. +1 for explaining WTF "inhabited" actually means. I always wanted to know... – Roof
@MathematicalOrchid: It means "Int implies Int". The problem is that Int isn't a very interesting proposition; it's trivially true, and has many proofs (0, 1, 2, etc.). But in a dependently-typed language, you can use types to make statements about these functions; see my updated answer for details. – Anemone
Just because Agda and Coq are total, it doesn't make them "non-Turing-complete". You can write a Turing machine simulator in Agda: you just can't give it a type which promises that it will halt; you have to use a type which acknowledges the potentially infinite delay. From a logical perspective, an "infinitely delayable proof of X" should certainly not convince you of X. In Haskell, non-totality means that you can't be sure of X, even if you have a value in X. Totality buys you the CHI, and you don't have to give up Turing-completeness: you just have to give up exaggerating. – Fye
@pigworker: Right, but you still need assistance from the RTS if you want to run a Turing-complete computation to its potential end, which is what's expected of a Turing-complete language. I've edited my answer to use "total" instead; I originally avoided it since it's a more obscure term. – Anemone
Thanks. Of course, you need assistance from the RTS to do anything! Total languages offer the consumer the choice to run infinitary computations as much or as little as desired. Partial languages offer only the dangerous version. It's not true that "you can't write Haskell programs which interact with files", and it's not true that "you can't write Agda programs which run for ever". People genuinely seem to believe that totality is some sort of limitation in that respect, when the truth is that it's the partial languages which are limited in the promises they allow you to make. – Fye

Perhaps the best way to understand what it means is to start (or try) using types as propositions and programs as proofs. It is best to learn a language with dependent types, such as Agda (it is written in Haskell and is similar to Haskell). There are various articles and courses on that language. Learn You an Agda is incomplete, but it tries to simplify things, just like the LYAHFGG book.

Here is an example of a simple proof:

{-# OPTIONS --without-K #-} -- we are consistent

module Equality where

-- Peano arithmetic.
-- 
--   ℕ-formation:     ℕ is a set.
-- 
--   ℕ-introduction:  o ∈ ℕ,
--                    a ∈ ℕ | (1 + a) ∈ ℕ.
-- 
data ℕ : Set where
  o : ℕ
  1+ : ℕ → ℕ

-- Axiom for _+_.
-- 
--   Form of ℕ-elimination.
-- 
infixl 6 _+_
_+_ : ℕ → ℕ → ℕ
o + m = m
1+ n + m = 1+ (n + m)

-- The identity type for ℕ.
-- 
infix 4 _≡_
data _≡_ (m : ℕ) : ℕ → Set where
  refl : m ≡ m

-- Useful property.
-- 
cong : {m n : ℕ} → m ≡ n → 1+ m ≡ 1+ n
cong refl = refl

-- Proof _of_ mathematical induction:
-- 
--   P 0, ∀ x. P x → P (1 + x) | ∀ x. P x.
-- 
ind : (P : ℕ → Set) → P o → (∀ n → P n → P (1+ n)) → ∀ n → P n
ind P P₀ _ o = P₀
ind P P₀ next (1+ n) = next n (ind P P₀ next n)

-- Associativity of addition using mathematical induction.
-- 
+-associative : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-associative m n p = ind P P₀ is m
  where
    P : ℕ → Set
    P i = (i + n) + p ≡ i + (n + p)
    P₀ : P o
    P₀ = refl
    is : ∀ i → P i → P (1+ i)
    is i Pi = cong Pi

-- Associativity of addition using (dependent) pattern matching.
-- 
+-associative′ : (m n p : ℕ) → (m + n) + p ≡ m + (n + p)
+-associative′ o _ _ = refl
+-associative′ (1+ m) n p = cong (+-associative′ m n p)

There you can see the proposition (m + n) + p ≡ m + (n + p) as a type, and its proof as a function. There are more advanced techniques for such proofs (e.g., preorder reasoning; combinators in Agda are like tactics in Coq).

What else can be proven:

  • head ∘ init ≡ head for vectors, here.

  • That your compiler produces a program whose execution gives the same value as the one obtained by interpreting the same (host) program, here, for Coq. This book is also a good read on the topic of language modeling and program verification.

  • Anything else that can be proven in constructive mathematics, since Martin-Löf's type theory is comparable in expressive power to ZFC. In fact, the Curry-Howard isomorphism can be extended to physics and topology, and to algebraic topology.

Gabionade answered 18/4, 2012 at 18:8 Comment(2)
This is an entirely different question, of course, but one time I did try to comprehend one of those advanced languages. (I forget whether it was Agda, Coq, Epigram or what.) The documentation casually tosses around technical terms whose meaning is utterly opaque to me, and I rapidly became lost... – Roof
@Roof CH is an isomorphism: it illuminates a relationship between concepts from two different domains. You can have an idea about one domain (simply-typed lambda calculus or Per Martin-Löf's dependent type theory), but in order to construct an isomorphism you must also be aware of the other domain (a system of natural deduction or intuitionistic logic, respectively). – Gabionade

The only sensible answer I can come up with is this: If you have a function X -> Y -> Z, then the type signature says "assuming that it's possible to construct a value of type X, and another of type Y, then it is possible to construct a value of type Z". And the function body describes exactly how you would do this. That seems to make sense, but it's not very interesting. So clearly there must be more to it than this...

Well, yes, there is quite a lot more, because it has a lot of implications and opens up a lot of questions.

First of all, your discussion of the CHI is framed exclusively in terms of implication/function types (->). You don't talk about it, but you have probably also seen how conjunction and disjunction correspond to product and sum types respectively. But what about the other logical operators: negation, universal quantification and existential quantification? How do we translate logical proofs involving these into programs? It turns out that it's roughly like this (a small Haskell sketch of the negation case follows the list):

  • Negation corresponds to first-class continuations. Don't ask me to explain this one.
  • Universal quantification over propositional (not individual) variables corresponds to parametric polymorphism. So for example, the polymorphic function id really has the type forall a. a -> a
  • Existential quantification over propositional variables corresponds to a handful of things that have to do with data or implementation hiding: abstract data types, module systems and dynamic dispatch. GHC's existential types are related to this.
  • Universal and existential quantification over individual variables leads to dependent type systems.
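
The negation case is the easiest to glimpse in plain Haskell, where ¬p can be encoded as a function into the empty type (this is the "continuation" reading specialised to result type Void; the names Not, deMorgan and dni are my own):

import Data.Void (Void)

type Not p = p -> Void

-- One de Morgan direction: ¬(p ∨ q) implies ¬p ∧ ¬q.
deMorgan :: Not (Either p q) -> (Not p, Not q)
deMorgan k = (k . Left, k . Right)

-- Double-negation introduction: p implies ¬¬p.
-- (The converse, ¬¬p -> p, is classical and has no program here.)
dni :: p -> Not (Not p)
dni x k = k x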

Other than that, it also means that all sorts of proofs about logics instantly translate into proofs about programming languages. For example, the decidability of intuitionistic propositional logic implies termination of all programs in simply-typed lambda calculus.

Now, what the hell does Int -> Int mean?? o_O

It's a type, or alternatively a proposition. In a declaration like f :: Int -> Int; f = (+1), the name f denotes a "program" (in a specific sense that admits both functions and constants as "programs"), or alternatively a proof. The semantics of the language must either provide f as a primitive rule of inference, or demonstrate how f is a proof that can be constructed from such rules and premises.

These rules are often specified in terms of equational axioms that define the base members of a type and rules that allow you to prove what other programs inhabit that type. For example, switching from Int to Nat (natural numbers from 0 forward), we could have these rules:

  • Axiom: Zero :: Nat (Zero is a primitive proof of Nat)
  • Rule: x :: Nat ==> Succ x :: Nat
  • Rule: x :: Nat, y :: Nat ==> x + y :: Nat
  • Rule: x + Zero == x
  • Rule: x + Succ y == Succ (x + y)

These rules are enough to prove many theorems about addition of natural numbers. These proofs will also be programs.
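
As a minimal sketch (my own rendering), those rules become a Haskell datatype plus a function whose defining clauses are the two addition equations:

data Nat = Zero | Succ Nat

plus :: Nat -> Nat -> Nat
plus x Zero     = x                -- x + Zero   == x
plus x (Succ y) = Succ (plus x y)  -- x + Succ y == Succ (x + y)

Evaluating, say, plus (Succ Zero) (Succ Zero) to Succ (Succ Zero) is then literally the construction of a proof that 1 + 1 inhabits Nat.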

Westernmost answered 18/4, 2012 at 18:47 Comment(1)
I didn't mention it, but yeah, I figured that a record of types is like a logical AND, and something like Either is a logical OR. – Roof
