I had a snappy line at the end of my discussion of sums and products, so I figured it was as good a place as any to stop. After this post, I’ll need one more post to clean up a few things, and then my mini tutorial on type theory will be over. Today, we’re talking about equality, which is probably the hardest part of Martin-Löf type theory.

**Equality in logic**

When you first learn first-order logic, you learn how to generate a first-order language which has:

- Variables
- “Logical symbols”: $($, $)$, $\land$, $\lor$, $\neg$, $\to$, $\forall$, $\exists$, and crucially *equality*, $=$.
- Relation symbols and function symbols. (Also constant symbols, but these are “nullary” functions.)

For a long time it struck me as somewhat strange that equality was included here; after all, isn’t equality just another relation? What makes equality so special? But as I’ve become more familiar with logic and models, and the subtle interplay between object language and metalanguage, I’ve slowly come to understand why equality is often treated as a logical symbol. The very short version is that working with equality *almost always* happens at the level of our logic (that is, at the “meta” level of first-order logic). In particular, we want a few things to be true:

- We can substitute equal objects for equal objects: if $\varphi(a)$ holds, and $a = b$, then $\varphi(b)$ holds. This (the Leibniz Law) seems silly and obvious, but it is something we need to prove, and it’s something we need to prove at the level of the logic (since it deals with formulas); if we use equality as a relation symbol, we need to prove this separately for every theory.
- We want equality to be “the least congruence” on a model. But not all congruences are even *definable* in a given theory, so it’s not clear that such a statement can reasonably be made anywhere besides the logical level.

The point of all this is to say that if we’re trying to give an account of (mathematical) logic, we really *do* need to include equalities. It was also to point out two important properties of equality:

- *Minimality*
- *Leibniz Law*

We will use the first of these properties (minimality) as the basis of our definition, but some features of type theory will allow us to state something more general and without making any appeal at all to congruences.

**Minimality**

One of the curious properties of type theory is that the logic happens at the *object* level. We encode logical propositions as types, as objects in our system; and proofs as terms, again objects in our system. So, since we will give our account of equality at the logical level, it is also going to be an object: a *type*. That is, for any two terms $a$ and $b$, we will have a type $a = b$.

But hold on: unlike in set theory, we can’t pick objects out of nowhere: to have an object means to have an object of some type. But it hardly makes sense to compare objects of different types, so we have the formation rule for equality (or *identity*, as type theorists usually call it):

*Identity formation*: If $a : A$ and $b : A$, then $a = b$ is a type. Note that I haven’t said this type is inhabited, only that it exists—thinking in terms of propositions, we’re saying that it makes sense to propose $a = b$ (although it may be false). Sometimes it’s helpful to denote the type at which this equality lives. In that case, I’ll write a subscript, $a =_A b$.

Now we need an introduction rule. There’s an obvious candidate: anything must be equal to itself. That is, equality is *reflexive*. This seems like the bare minimum we could ask of equality, so we have

*Identity introduction*: for any $a : A$, there is a term $\mathsf{refl} : a = a$. When we need to be clear, I’ll put a subscript, $\mathsf{refl}_a$.
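For concreteness, these two rules can be written down in a proof assistant. Here is a minimal sketch in Lean 4 (the name `MyId` is mine, to avoid clashing with Lean’s built-ins; Lean’s own `Eq` is defined the same way): the `inductive` header is the formation rule, and the lone constructor `refl` is the introduction rule.

```lean
-- Formation: given a : A and b : A, `MyId a b` is a type.
-- Introduction: the single constructor says each `a` is equal to itself.
inductive MyId {A : Type} : A → A → Type where
  | refl (a : A) : MyId a a

-- The type may exist without being inhabited: we can *propose* `MyId 0 1`
-- even though we will never be able to prove it.
example : MyId 3 3 := MyId.refl 3
```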

Next is the elimination rule, which is a little more subtle. Here we have to return to that notion of minimality: we will say that identity is the “least reflexive relation”. If you are a type theorist, you can already guess what the elimination rule will be from that statement. But if you’re reading this, I’m going to guess you’re not a type theorist. As is common in math, we have to express “minimality” in an indirect way. As a first approximation, consider a binary relation $R$ on some type $A$. We can view this as a function $R : A \to A \to \mathsf{Type}$, since it takes two elements of $A$ and returns a proposition. Since we can talk about equality, there are two ways to say that a relation is reflexive, which I’ll call “strong” and “weak” (this is my own terminology, so you won’t find anything by looking it up; you’ll probably find something about lattices):

- *weak reflexivity*: For any $a : A$, $R(a, a)$. That is, every term is related to itself. In typical type theory notation, we’d write this as $\prod_{a : A} R(a, a)$.
- *strong reflexivity*: For any $a, b : A$, we have $(a = b) \to R(a, b)$. That is, equal terms are related. Again in standard notation, $\prod_{a, b : A} (a = b) \to R(a, b)$.

The second is “strong” because it implies the other: if $s$ is a proof that $R$ is strongly reflexive, then $\lambda a.\, s(a, a, \mathsf{refl}_a)$ (the function taking $a$ to the proof that $R(a, a)$, since we know $a = a$) is a proof that $R$ is weakly reflexive. Our first attempt at an elimination rule:

*Identity elimination* (almost): if $R$ is a weakly reflexive relation on $A$, then $R$ is strongly reflexive. Or, more in the spirit of the other elimination rules I’ve given:

Let $R : A \to A \to \mathsf{Type}$, and $r : \prod_{a : A} R(a, a)$. Then there is $J(r) : \prod_{a, b : A} (a = b) \to R(a, b)$.

We’ll skip the computation rule for now; I’ll come back to it when I give the full form of the elimination rule. In the meantime, this is actually enough to do a surprising amount.
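As a sketch of what this rule looks like concretely, here is the (non-dependent) eliminator in Lean 4. The names `MyId` and `idElim` are hypothetical; pattern matching on the single constructor is exactly what the elimination rule licenses.

```lean
-- The identity type again, mirroring the post's rules.
inductive MyId {A : Type} : A → A → Type where
  | refl (a : A) : MyId a a

-- Identity elimination (the "almost" form): any weakly reflexive
-- relation R is strongly reflexive.
def idElim {A : Type} (R : A → A → Type)
    (r : (a : A) → R a a) :
    (a b : A) → MyId a b → R a b
  | _, _, .refl a => r a
```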

**Symmetry, transitivity, and a little more**

We can actually use this form of the elimination rule to show symmetry and transitivity.

*Equality is symmetric.*

*Proof*: The statement of symmetry is $\prod_{a, b : A} (a = b) \to (b = a)$. Observe that this precisely says that the “flipped identity type” $R(a, b) :\equiv (b = a)$ is “strongly reflexive”. So, to apply the elimination rule we only need to show that it is weakly reflexive. That is, $\prod_{a : A} (a = a)$. But this is exactly what $\mathsf{refl}$ gives us.

*Equality is transitive.*

*Proof*: This is similar: we want to show $\prod_{a, b, c : A} (a = b) \to (b = c) \to (a = c)$. Reordering the arguments, we want to show $\prod_{a, b : A} (a = b) \to \prod_{c : A} (b = c) \to (a = c)$. So, if we let $R(a, b) :\equiv \prod_{c : A} (b = c) \to (a = c)$, we want to show that $R$ is strongly reflexive. But by identity elimination, we only need to show it’s weakly reflexive. That is, we need a function of type $\prod_{a : A} \prod_{c : A} (a = c) \to (a = c)$. From here we could eliminate again, but we might as well take the pointwise identity function $\lambda a.\, \lambda c.\, \mathsf{id}$ (where $\mathsf{id}(p) :\equiv p$). And so we have that $R$ is weakly reflexive, and by identity elimination, we are done. *(aside: we need to be careful when reordering arguments like we did here. We need to make sure that the later arguments don’t depend on earlier ones. Since the type $a = b$ doesn’t depend on $c$, this is safe.)*
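Both proofs can be transcribed almost literally into Lean 4. This is a sketch with hypothetical names (`MyId`, `idElim`, `idSymm`, `idTrans`); each proof is just `idElim` applied to the relation named in the text and its witness of weak reflexivity.

```lean
inductive MyId {A : Type} : A → A → Type where
  | refl (a : A) : MyId a a

def idElim {A : Type} (R : A → A → Type)
    (r : (a : A) → R a a) :
    (a b : A) → MyId a b → R a b
  | _, _, .refl a => r a

-- Symmetry: the flipped relation R x y := MyId y x is weakly reflexive,
-- so by elimination it is strongly reflexive.
def idSymm {A : Type} : (a b : A) → MyId a b → MyId b a :=
  idElim (fun x y => MyId y x) (fun x => .refl x)

-- Transitivity: R x y := ∀ c, MyId y c → MyId x c is weakly reflexive,
-- witnessed by the pointwise identity function.
def idTrans {A : Type} :
    (a b : A) → MyId a b → (c : A) → MyId b c → MyId a c :=
  idElim (fun x y => (c : A) → MyId y c → MyId x c) (fun _x _c => id)
```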

In fact, we can do more than this. Remember earlier I mentioned that a crucial property of equality is the *Leibniz law*: If $a = b$ and $P(a)$, then $P(b)$. We can actually prove this!

*Let $P : A \to \mathsf{Type}$ and $a, b : A$. Then there is a function $(a = b) \to P(a) \to P(b)$.*

*Proof*. Take $R(a, b) :\equiv P(a) \to P(b)$. This is weakly reflexive, since $\mathsf{id} : P(a) \to P(a)$, and so by identity elimination is strongly reflexive. I.e., for all $a, b : A$, $(a = b) \to (P(a) \to P(b))$.

A nice exercise is to rewrite the proofs of symmetry and transitivity using the Leibniz law (instead of using identity elimination directly).

We can do better still: we can actually show that for any function $f : A \to B$, if $a = b$, then $f(a) = f(b)$. Specialized to a type family $P : A \to \mathsf{Type}$, this says that not only do we have a *function* from $P(a)$ to $P(b)$, but that $P(a) = P(b)$. The proof is again by identity elimination: we only need to show that for any $a$, $f(a) = f(a)$, but we have $\mathsf{refl}_{f(a)}$.

Let’s back up a bit: when we write out all of these theorems formally in type-theoretic notation, we see that in each case we have a *function*:

- We have inversion $\prod_{a, b : A} (a = b) \to (b = a)$, and we write the inverse of $p$ as $p^{-1}$,
- We have concatenation $\prod_{a, b, c : A} (a = b) \to (b = c) \to (a = c)$, and we write the concatenation of $p$ and $q$ as $p \cdot q$,
- We have a function which “transports elements of $P(a)$ along an equality” ($p : a = b$) to elements of $P(b)$. We write this function $\mathsf{transport}^P(p, -)$ (here $P : A \to \mathsf{Type}$ and $p : a = b$.)
- We can “apply functions to equalities”, which we write $\mathsf{ap}_f$ (or sometimes just $f$), so $\mathsf{ap}_f : (a = b) \to (f(a) = f(b))$.

If I had given you the computation rule, you could show that these have the properties we expect inverses, concatenation, and so on to have. *(technical stuff: were we to prove these properties, we’d have proven that types form groupoids, with equalities as morphisms, and functions as functors. We almost have that type families are fibrations, but we’re not quite there yet.)*
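As a sketch, transport and $\mathsf{ap}$ look like this in Lean 4 (hypothetical names again); both are direct applications of the eliminator to the relations used in the proofs above.

```lean
inductive MyId {A : Type} : A → A → Type where
  | refl (a : A) : MyId a a

def idElim {A : Type} (R : A → A → Type)
    (r : (a : A) → R a a) :
    (a b : A) → MyId a b → R a b
  | _, _, .refl a => r a

-- Leibniz Law / transport: an equality lets us move elements of P a to P b.
def transport {A : Type} (P : A → Type) (a b : A)
    (p : MyId a b) : P a → P b :=
  idElim (fun x y => P x → P y) (fun _x => id) a b p

-- ap: functions apply to equalities.
def ap {A B : Type} (f : A → B) (a b : A)
    (p : MyId a b) : MyId (f a) (f b) :=
  idElim (fun x y => MyId (f x) (f y)) (fun x => .refl (f x)) a b p
```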

**Dependent elimination**

I said the elimination principle I gave wasn’t quite the full principle. So, what *can’t* we do?

Well, suppose we want to generalize $\mathsf{ap}$ to work on *dependent functions*. When we try to write out this statement, we run into a problem. Let $f : \prod_{x : A} P(x)$. We *want* to say $\prod_{a, b : A} (a = b) \to (f(a) = f(b))$. But $f(a) : P(a)$ and $f(b) : P(b)$. *A priori* these are different types. But, we can solve this by transporting: since we have $p : a = b$, we can transport $f(a)$ along $p$ to get some element of $P(b)$. *This* element is what should be equal to $f(b)$. So the full statement becomes

*Given a type $A$, a family $P : A \to \mathsf{Type}$, and a dependent function $f : \prod_{x : A} P(x)$, for any $a, b : A$, there is a function $\mathsf{apd}_f : \prod_{p : a = b} \big( \mathsf{transport}^P(p, f(a)) = f(b) \big)$.*

Trying to prove this we run into a problem: our function is a *dependent* function. But we’ve only given an elimination rule for *non-dependent* functions. So, we need a dependent eliminator, just like we did for types last time. Morally, we eliminate exactly as we did before, but the type encoding “reflexivity” also depends now on the equality we give it. So, we have:

*Identity elimination*: Let $R : \prod_{a, b : A} (a = b) \to \mathsf{Type}$ and suppose we have $r : \prod_{a : A} R(a, a, \mathsf{refl}_a)$ (“$R$ is weakly reflexive”). Then we have $J(r) : \prod_{a, b : A} \prod_{p : a = b} R(a, b, p)$ such that

*Identity computation*: $J(r)(a, a, \mathsf{refl}_a) \equiv r(a)$.

Now, we can construct $\mathsf{apd}_f$ (almost) exactly how we constructed $\mathsf{ap}_f$. We need only construct an element of $\prod_{a : A} \big( \mathsf{transport}^P(\mathsf{refl}_a, f(a)) = f(a) \big)$. But a quick check of the definitions (using our new computation rule) shows us that $\mathsf{transport}^P(\mathsf{refl}_a, f(a)) \equiv f(a)$ definitionally. So we can use $\mathsf{refl}_{f(a)}$.
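Putting this section together as a Lean 4 sketch (hypothetical names as before): the dependent eliminator, transport, and `apd`, whose base case type-checks precisely because of the computation rule.

```lean
inductive MyId {A : Type} : A → A → Type where
  | refl (a : A) : MyId a a

-- Dependent identity elimination: the motive R may mention the proof p.
def idRec {A : Type} (R : (a b : A) → MyId a b → Type)
    (r : (a : A) → R a a (.refl a)) :
    (a b : A) → (p : MyId a b) → R a b p
  | _, _, .refl a => r a

-- Transport, as before.
def transport {A : Type} (P : A → Type) :
    (a b : A) → MyId a b → P a → P b
  | _, _, .refl _ => id

-- apd: in the refl case, `transport P a a (.refl a) (f a)` reduces to
-- `f a` definitionally (the computation rule), so `.refl (f a)` checks.
def apd {A : Type} {P : A → Type} (f : (x : A) → P x) :
    (a b : A) → (p : MyId a b) → MyId (transport P a b p (f a)) (f b) :=
  idRec (fun a b p => MyId (transport P a b p (f a)) (f b))
        (fun a => .refl (f a))
```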

From here there’s a lot we can do. In fact, with the dependent sums and products we introduced last time, we now have all of the “usual” types of dependent type theory. But I think we’ve seen quite enough for one post.