*(This was originally posted to reddit, but since the response to it spawned this blog, I figured I’d repost it here. It could use some editing, but c’est la vie…)*

So, I’ve wanted to write a post with a nice, simple but non-trivial logic result for a while now, but I’ve only now found a good reddit-friendly topic: *In homotopy type theory, the negation of any type is a proposition.*

Don’t let that cryptic sentence scare you! The point of the next ~8000 characters is to explain what that means, and why it’s true. First, a (very) rough sketch of type theory (skip below the line if you already know some type theory, or if you are optimistic.)

In type theory, we have *terms* and *types*. Terms *inhabit* types. To start, think of terms as "elements" and types as "sets", but there are some crucial differences:

- A term always belongs to some type: there is no universe to arbitrarily draw terms from; giving a term means, *a priori*, giving its type as well. We write `a : A` to say "the term `a` has type `A`."
- Types are not extensional: rather than being determined by their members, a type is determined by *introduction rules*, which say how to *construct* its terms, and *elimination rules*, which say how to *use* its terms. (There are other rules, but let's not worry about those for now.)
- Reasoning is "internal" to the system (more below): the meta-system is actually a rather weak system of judgments, so unlike set theory, we don't have the full power of logic at our back; instead we build our logic *inside* of our system.
- Equality is really weird. This is related to the bit about extensionality above, but all you really need to know for now is that equality in type theory is a "weak" notion, which often isn't strong enough to prove "nice" things. Oh, and also: equality between terms is a **type**. That is, given two terms `x,y:A`, we have a type `x=y` (or `x=y:A`, if we need to be clear about which type we're working in).
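To make this concrete, here is how a typing judgment and an equality type look in Lean 4. (Lean is not a HoTT system, but the judgments look the same; the name `three` is purely illustrative.)

```lean
-- A typing judgment: the term `three` has type `Nat`.
def three : Nat := 3

-- Equality between two terms of the same type is itself a type,
-- and `rfl` inhabits it when both sides compute to the same value.
example : three = 3 := rfl
```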

Today, we're interested in *function* types `A->B` and *dependent function* types `Π(x:A) B(x)`. Given two types, we have a type of functions between them. I'll assume this makes sense. A *dependent* function does not have **an** output type, but rather *for each* `x:A` a (possibly) **different** output type `B(x)`. Whereas for a function `f:A->B` we have `f(x): B` for all `x:A`, for a *dependent* function `d: Π(x:A) B(x)` we have `d(x) : B(x)`. Don't worry if this doesn't make sense: we'll only use the interpretation of Π as "for all" (discussed below). Also, importantly: the existence of a type doesn't mean it is inhabited. So just because we have a *type* `A->B` doesn't mean we have a term `f:A->B`.
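As a sketch in Lean 4 notation (the names `double` and `largest` are mine), here is an ordinary function type next to a dependent one, where `Fin (n + 1)` is Lean's type of natural numbers below `n + 1`:

```lean
-- An ordinary function type: every input gets an output in the *same* type.
def double : Nat → Nat := fun n => n + n

-- A dependent function type Π(n : Nat), B(n): the output type varies
-- with the input. Here B(n) = Fin (n + 1).
def largest : (n : Nat) → Fin (n + 1) :=
  fun n => ⟨n, Nat.lt_succ_self n⟩
```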

We are also interested in the empty type `0`. This has no introduction rules, and an elimination rule which says "given any type `C`, there is a function `f:0->C`." [Actually, it says: for any type family `C(x)` indexed by `0`, we have a dependent function `f:Π(x:0) C(x)`.]
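Here is a sketch of the empty type and its eliminator in Lean 4 (I use the name `Empty'` to avoid clashing with Lean's built-in `Empty`):

```lean
-- A type with no introduction rules: no constructors, so no terms.
inductive Empty' : Type

-- The elimination rule: from a (nonexistent) term of Empty', produce a
-- term of any type C. `nomatch` discharges all zero constructor cases.
def fromEmpty {C : Type} : Empty' → C :=
  fun e => nomatch e
```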

The "judgments" mentioned above take one of two forms: *typing* judgments (`a : A` above), and *equality* judgments (which we will ignore, because this is the "wrong" notion of equality). This means that if we build logic inside of the system, we need a proof to correspond to a judgment `p:P`, and from that comes an idea: let us treat *propositions* as *types*, and *proofs* as *terms*. Then a proposition P is just some (particular) type, and proving it means constructing a term (called a "witness" or "proof") `p:P`.

This is called *propositions as types*: propositions are types, and connectives are type-forming operations. For example, `A->B` is interpreted as "A implies B", `Π(x:A) B(x)` is interpreted as "for all x in A, B(x)", and `0` is interpreted as "false". Under this interpretation, the elimination rule for `0` is simply explosion.
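A minimal Lean 4 illustration of this dictionary, using Lean's built-in `Empty` in the role of `0` (the names are mine):

```lean
-- Implication is a function type: a proof of A → B transforms
-- proofs of A into proofs of B.
def modusPonens {A B : Type} : (A → B) → A → B :=
  fun f a => f a

-- "False" is the empty type, and its eliminator is explosion:
-- from a proof of false, conclude anything.
def explosion {C : Type} : Empty → C :=
  Empty.elim
```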

This logic can be somewhat odd: the axiom of choice (naively translated) is a tautology, for example. Also, importantly: excluded middle is inconsistent with univalence, the thing that makes HoTT interesting. The problem seems to be that "proving" (giving a witness of) a generic type `P` can contain a lot more information than just "`P` is inhabited" (notably, *which* inhabitant). So in HoTT, we do something clever: we define a type to be a proposition when we can prove any two inhabitants are equal. That is: `A` is a proposition if the type `Π(x,y:A) x=y` is inhabited. (This type is read "for all x,y in A, x=y".) You can think of this as "there is (at most) one term of type `A`", but this isn't *exactly* right. Better is "up to homotopy, there is (at most) one term of type `A`." Fortunately, the logic of propositions tends to behave much closer to the "expected" logic than the logic of all types.
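In Lean 4 this definition is one line (the name `IsProp` is mine; keep in mind that in a HoTT system the equality type is richer than Lean's):

```lean
-- A type is a (mere) proposition when any two of its inhabitants are equal.
def IsProp (A : Type) : Prop :=
  ∀ x y : A, x = y

-- Example: Unit is a proposition — there is at most one term.
example : IsProp Unit :=
  fun x y => rfl
```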

One last thing before we can make sense of the statement: often in logic, the negation of a proposition P is defined to be "P implies false". That is, given P, we can reach an absurdity. This idea translates to type theory as expected: given a type `A`, we define the *negation* of `A`, written `¬A`, to be the type `A->0`.
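As a one-line Lean 4 sketch (named `Neg'` to avoid clashing with Lean's built-in `Neg` class):

```lean
-- Negation of a type A is the function type A → Empty: a term of
-- Neg' A turns any inhabitant of A into an absurdity.
def Neg' (A : Type) : Type :=
  A → Empty

-- Example: the empty type is not inhabited, i.e. Neg' Empty holds.
def negEmpty : Neg' Empty :=
  fun e => e
```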

So, let's repeat the statement: given any type `A`, the type `¬A` (that is, `(A->0)`) is a proposition (that is, we can prove that all elements of this type are equal).

Before we prove it, let’s talk about why this is interesting: In a sentence: double negation shows up a lot. :)

Since type theory is naturally constructive, the proposition "not not A" is usually weaker than the proposition "A" (this goes for non-propositions as well). It turns out that very often, instead of proving "A implies B", we can prove the stronger "not(not(A)) implies B", which is good because negative types have useful properties, and because it means that very often we can replace a difficult-to-verify claim (A is inhabited) by an easier-to-verify (but classically equivalent) claim (A isn't not inhabited). This is all related to double-negation translation, which is used to do classical logic in constructive settings.

As a consequence of the above result, we get that the *double* negation of any type is a proposition, so double-negation translation is a "classical propositional truncation" (as compared to the "propositional truncation" used in HoTT), which is interesting and probably useful. [*Added to this post:* in fact, if we assume the law of excluded middle, we can define the propositional truncation to simply be double negation.] Lastly, the intuition I gave above is that type theory has "too much information" to be a well-behaved logic, and here we see that by translating to classical logic by double negation, we are *forgetting information*, reinforcing that point. (In fact, simply by negating a type we are forgetting some information.)
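The easy half of the double-negation story, that `A` implies `¬¬A`, is a one-liner in Lean 4 (names mine, with Lean's `Empty` in the role of `0`):

```lean
-- Any type maps into its double negation: given a : A and a
-- hypothetical refutation k : A → Empty, applying k to a is absurd.
def doubleNegIntro {A : Type} : A → ((A → Empty) → Empty) :=
  fun a k => k a
```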

Ok, how do we prove it? Besides the notion of proposition, we need one thing particular to homotopy type theory (which is actually reasonably common among non-homotopy type theorists as well): the axiom of *function extensionality*. Remember I said equality is a weak notion in type theory? Well, it turns out that for two functions `f,g: A->B`, even if we can prove that for all `x:A` we have `f(x)=g(x)`, we *can't* prove that `f=g`. So we must assume this as an axiom. Translating all of this *into* type theory, we have:

**Function extensionality**: Given two types `A` and `B`, and functions `f,g: A->B`, there is a term `funext: (Π(x:A) f(x)=g(x)) -> f=g`. That is, we have this function `funext` (which has some hidden parameters), which takes a proof that f and g are "pointwise equal", and turns it into a proof that `f=g`.
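In Lean 4, `funext` happens to be a theorem rather than an assumed axiom, but it has exactly this shape:

```lean
-- Pointwise-equal functions are equal, by function extensionality.
example (f g : Nat → Nat) (h : ∀ x, f x = g x) : f = g :=
  funext h
```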

Now, the proof is easy:

**Claim**: For any type `A`, we have that `¬A` is a proposition.

**Proof**: We need to show that `Π(x,y:¬A) x=y` is inhabited. To this end, fix `x,y: A -> 0`; we will use function extensionality to construct a term `p:x=y`. Let `z:A`. We wish to see that `x(z)=y(z)` in the type `0`, but this holds vacuously (or rather, by elimination on `0`, but think "vacuously"). Since `z` was arbitrary, we have a term `h(x,y):Π(z:A) x(z)=y(z)`. Then `funext(h(x,y)) : x=y`. Again, `x` and `y` were arbitrary, so we have a term `f:Π(x,y:¬A) x=y`, defined by `f(x,y) := funext(h(x,y))`. That is, we have "for all `x,y:¬A`, x=y". That is, `¬A` is a proposition.
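The whole argument fits in one line of Lean 4 (using Lean's `Empty` for `0`; Lean's equality is not HoTT's identity type, but the proof term is the same):

```lean
-- For any type A, any two refutations x, y : A → Empty are equal:
-- for each z : A, eliminating on x z : Empty proves x z = y z
-- "vacuously", and funext turns that into x = y.
theorem negIsProp (A : Type) : ∀ x y : A → Empty, x = y :=
  fun x y => funext (fun z => (x z).elim)
```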

Hurray! Hopefully you find this interesting/insightful. Let me know if you have any questions. Also, I said a few bold-faced lies to simplify things, but hopefully I didn’t say anything harmful.