This is the fourth entry in my series on type theory. We will introduce a lot of notation that is reminiscent of set theory, to make everyone feel more at home, and then we’ll discuss the axiom of choice.

Subtypes

Our next topic is “subsets” within types. I mentioned in the beginning that we can define “subsets” within types via predicates. This usually works great, but sometimes the “subset” needs to be more type-like and then a predicate won’t do.

As an example, say we have the following predicate on the natural numbers

with the type  (where  represents all truth values). Then in order to claim that  is an even number, we write . If you were really into using the membership symbol , then you could also write this as “” and as long as you make clear what is meant by that (namely, ), this is well-defined.

However, we can’t do something like “” with our predicate, because you absolutely need a type there in the quantifier. So what do we do when we want to quantify over all even numbers?

No problem, we’ll just use the same trick we used to define the subuniverse . With a sum type, we can define a type which basically corresponds to the subset defined by the predicate: 

What do the elements of  look like? They are pairs where the first element is a natural number  and the second element is a witness element  for the fact that  is even: . There can be no  in  where  is not even because then there wouldn’t be a witness. Likewise, as long as you can prove  for some  (with witness ), then  is clearly an element of the type. So, the elements of the type  aren’t directly the elements of the subset of even numbers, but there is a very obvious mapping between the two (namely just throwing away the witness). In fact, we will often pretend that the elements of  are just the elements of the subset defined by the predicate. And now we can quantify over the even numbers: .
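This construction can be sketched in Lean 4, where the subtype notation `{ n // p n }` packages exactly such pairs of an element and a witness (the names `IsEven` and `EvenNat` are made up for illustration):

```lean
-- A predicate on the natural numbers: "n is even".
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

-- The subtype of even numbers: pairs of a number and a witness.
def EvenNat : Type := { n : Nat // IsEven n }

-- We can now quantify over all even numbers; projecting out the
-- witness recovers the proposition for the underlying number.
example : ∀ e : EvenNat, IsEven e.val :=
  fun e => e.property
```

Projecting with `.val` is the "throwing away the witness" map mentioned above.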

Types that have been constructed from some base type in the way we constructed  here from  (i.e., with  and a predicate) are called subtypes of the base type. Thus,  is a subtype of . If the base type is a universe , we’ll say subuniverse instead of subtype, but the mechanism is exactly the same.

As this is a common construct, there is a special notation for it, but we only use it if the expression after the  symbol is a predicate, i.e., a function that returns a truth value/subsingleton, :

This is of course simply set-builder notation. With this, we could have defined the subtype  as 

Note that in set theory, the axiom of specification is needed to be able to construct subsets from predicates, but in type theory, we can just do it with sum types, as long as we encode logic the way we did. (Another good reason for propositions-as-types.)

Set operations

At this point in the HoTT book, the authors define set operators, like union  or intersection , on these subtypes, because we want to do all kinds of mathematics and those operators are often very useful. For example, here is how they define inclusion:

So, a subtype is included in another subtype if the underlying predicate of the first subtype implies the predicate of the second subtype. This definition works because subtypes are backed by predicate functions and so we can define these set operations in terms of operations on predicates.

However, defining these set operations as acting on subtypes seems to me like trying to be a bit too clever. One problem is the type signature of the operators. We could write it like this:

but then how to define  – the type of all subtypes on ? I think the definition would be this

but this does not look like a nice definition at all. This seems like a hint to me that the subuniverse of subtypes is not a natural concept, and we shouldn’t define functions on this abomination.

The second problem I see is that defining these operations on subtypes risks blurring the distinction between types and objects with set-like semantics. For example, you might accidentally use  on a type which wasn’t a subtype, because it might sometimes be hard to tell the difference.

For these reasons, I would define set-operators (and relations) like “” on good old predicate functions:

In words: in the domain , property  is a sub-property of  if, for all objects in the domain,  is true whenever  is true. This definition is concise and easy to read.
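As a minimal Lean 4 sketch of this definition (the name `PSubset` is hypothetical; predicates are modeled as functions into `Prop`):

```lean
-- "P is a sub-property of Q on the domain A":
-- Q holds wherever P holds.
def PSubset {A : Type} (P Q : A → Prop) : Prop :=
  ∀ x : A, P x → Q x

-- Example: every number equal to zero is at most one.
example : PSubset (fun n : Nat => n = 0) (fun n => n ≤ 1) :=
  fun _ h => by omega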

The goal here is to make set-like mathematics (which uses relations and operators like ) less cumbersome when done on top of type theory. Working with these predicates in type theory should feel as natural as working with sets in set theory.

So, let’s define more set operators and relations. We already mentioned “” above:

And as a reminder, we can also write “” (power set type) instead of “” to make the type signature even more concise.

The following operators return a new predicate, so we have to use anonymous functions to define them (I skipped the type signatures and type subscripts this time):

As is usual with these operators, we’re writing them in infix notation.
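A sketch of such pointwise operators in Lean 4 (the names `pUnion`/`pInter` and the subscripted infix symbols are made up to avoid clashing with built-in notation):

```lean
-- Union and intersection of predicates return new predicates,
-- defined pointwise with anonymous functions.
def pUnion {A : Type} (P Q : A → Prop) : A → Prop :=
  fun x => P x ∨ Q x

def pInter {A : Type} (P Q : A → Prop) : A → Prop :=
  fun x => P x ∧ Q x

-- Infix notation, mirroring the usual set operators.
infixl:65 " ∪ₚ " => pUnion
infixl:70 " ∩ₚ " => pInter
```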

But, as nice as working with predicate functions is, if you want to quantify over elements, you still need a (sub-)type! You have two choices here:  or . Both are formally correct, but we will prefer the latter.

(If I could suggest another notation, it would be this: , which reduces the notation to only the essential symbols; such that we can write , but I’ll try to keep the notation pretty standard here, so I won’t be using this.)

If  is a predicate on predicates on  (“a set of subsets”), we can also define arbitrary, non-binary intersection and union on it.[1] For once, I will be pulling out all the stops to make this look like set theory (even though it’s type theory in disguise!):[2]

and don’t panic – the square brackets  mean exactly the same as round brackets . It was just getting a bit hard to parse with only round ones.
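In Lean 4 the arbitrary union and intersection can be sketched like this (names hypothetical); note that if `F` holds of no predicate at all, `bigInter F` is true everywhere, matching the caveat about "empty" collections:

```lean
-- F is a predicate on predicates: "a set of subsets of A".
-- Arbitrary union and intersection are again predicates on A.
def bigUnion {A : Type} (F : (A → Prop) → Prop) : A → Prop :=
  fun x => ∃ P, F P ∧ P x

def bigInter {A : Type} (F : (A → Prop) → Prop) : A → Prop :=
  fun x => ∀ P, F P → P x
```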

There are other beloved set theory notations that can be “rescued” by giving them new semantics in type theory. For example, if you have  then you could write  to refer to the subtype . But I will stop here now.

To summarize again the difference between subtypes and predicates: if you’re working in a fixed base set (like  or ) and you want to use set-like operations, use predicates; if you are in a context where a type is needed or you only expect to do “type-level operations” (like quantifying over elements), use subtypes. In the following, we will see many examples where subtypes are very useful.

Construction of other numbers

Mathematicians don’t only want to work with boring old natural numbers, so with the power set type and subtypes in hand, we can also construct other numbers, like the integers  and the rational numbers . They can be defined as equivalence classes on binary product types like . I’ll be very brief in this section; it’s mostly just to show that it’s possible.

So, let’s first define equivalence classes. If  is some base type and  is a relation on  with type , then the equivalence classes of  belong to the following type (which is also called the quotient of  by ):

(I wrote “” here to mean “”.) As you can see, this is a subtype of the power set of . Each element of  is one equivalence class, which is a “subset” of  whose members are related according to .

For the integers , we define a relation on the product  with the following definition: 

 such that 

where we can choose the canonical forms  and  for the equivalence classes, corresponding to  and  respectively.
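The integer construction can be sketched with Lean 4's built-in quotients (the names `intRel` and `MyInt` are made up; Lean's `Quot` plays the role of the quotient type):

```lean
-- (a, b) and (c, d) represent the same integer a - b = c - d;
-- stated without subtraction: a + d = c + b.
def intRel (p q : Nat × Nat) : Prop :=
  p.1 + q.2 = q.1 + p.2

-- A hypothetical integer type as the quotient of ℕ × ℕ by intRel.
def MyInt : Type := Quot intRel

-- (3, 5) and (0, 2) both represent -2, so they are identified:
example : Quot.mk intRel (3, 5) = Quot.mk intRel (0, 2) :=
  Quot.sound rfl
```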

To get from the integers to the rational numbers, we take  and use this relation: 

The  is there to prevent division by zero; we could have also just used the natural numbers without 0 instead. With this we have 

A pair  corresponds to the fraction . The canonical form of a rational number is a fraction that can’t be reduced any further (“fraction in lowest terms”) – every rational number has a unique representation as an irreducible fraction with a positive denominator.

Constructing the real numbers is a bit more involved but it basically works like it usually does; for example via Dedekind cuts, which we can represent by two predicates on . So, let  and  be two predicates (aka “subsets”) on . Then if  is a predicate that tells us whether the pair  is a Dedekind cut (you can look up the definition on Wikipedia), then the (Dedekind) reals are given by the following: 

The usual operations and relations for the real numbers (e.g.,  and ) can then be defined on this type.

We can also define restrictions like , the positive real numbers, by defining a subtype with a predicate like . The very common “” from calculus can then be written as , but I suppose you could also write .

The family of finite types

To round out this section on numbers, I’ll quickly comment on a type family that I’ve already briefly mentioned before: the family of finite types. As a reminder, a family of types is a type-valued function from some index set  to some types: . A very simple type family  with the index set  would be, for example, .

The family of finite types is indexed by  and can be defined like this: 

The essential point is that  has exactly  elements. And if  means that types  and  are equivalent/isomorphic (see the next post for a rigorous definition of that), then of course we’ll have[3]

The type family  is very useful when you want to iterate over a finite index. For example, instead of 

 you can just write 

The family of finite types is generally very handy for indexing things; for example other type families: . However, finite types can also be used to index vectors! Now, usually, vectors are realized as tuples, like  (or ), but there are some advantages to realizing vectors as “functions” on finite types, . Vector entries are then accessed by function application, , where . This is how the dex programming language formalizes arrays, for example. (Though these “array functions” are memoized for performance.)

The advantage is that it allows you to write functions that work with generic vector sizes while still being type-safe (or rather, shape-safe). For example, a concatenation function , which concatenates two vectors of arbitrary length, would have the following type:

where  and  are arbitrary index types and  is the binary sum type. If we now pass in two vectors,  and , then the result has the type , which isn’t quite , but the two types are isomorphic. So, we can read off from the types that the result is a vector of length 6.
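A Lean 4 sketch of vectors as functions on finite types (the names `Vec` and `vconcat` are made up; the example uses lengths 2 and 3 rather than the lengths in the text):

```lean
-- Vectors realized as functions out of a finite index type.
def Vec (I : Type) (A : Type) : Type := I → A

-- Concatenation: the result is indexed by the sum type I ⊕ J.
def vconcat {I J A : Type} (v : Vec I A) (w : Vec J A) :
    Vec (I ⊕ J) A :=
  fun ij => match ij with
    | .inl i => v i
    | .inr j => w j

-- A length-2 and a length-3 vector; the concatenation is indexed
-- by Fin 2 ⊕ Fin 3, which is isomorphic to Fin 5.
def v2 : Vec (Fin 2) Nat := fun i => i.val
def v3 : Vec (Fin 3) Nat := fun i => 10 * i.val
#check (vconcat v2 v3 : Vec (Fin 2 ⊕ Fin 3) Nat)
```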

More examples of subtypes

When we introduced sum types, we used the type of all groups as an example. The type we gave was 

but we can actually be more precise than that. As this definition is written, an element of  is just a type with some binary operation and some element, but those do not necessarily satisfy the group axioms! For example, in a group, the binary operation needs to be associative and the special element needs to be a neutral element (with respect to the binary operation), but the type doesn’t guarantee that! To solve this, we can filter for those elements that satisfy the group axioms with set-builder notation:

 where 

(As usual, the binary operation is written in infix notation.) Technically, an element of this new  type will not be the triplet , but the 4-tuple  where  is the witness for , but, again, we’ll usually ignore this.
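In Lean 4 such a tuple-with-axioms is conveniently written as a structure, which is just an iterated dependent pair type under the hood (the name `GroupStr` and the field names are made up; the inverse axiom is stated existentially to keep the sketch short):

```lean
-- A group as nested dependent pairs: a carrier type, a binary
-- operation, a unit element, and witnesses for the group axioms.
structure GroupStr where
  carrier : Type
  op      : carrier → carrier → carrier
  unit    : carrier
  assoc   : ∀ a b c, op (op a b) c = op a (op b c)
  unit_l  : ∀ a, op unit a = a
  unit_r  : ∀ a, op a unit = a
  inv_ex  : ∀ a, ∃ b, op a b = unit
```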

If you now want to make statements about an arbitrary group, you can just say, “let ” and then derive clever things about .

As another example of subtypes, consider the type of continuous functions between  and . The plain function type is , but if we have a predicate  which tells us whether function  is continuous, then we can narrow down the type:

 is then a type family indexed by  and . Of course, if the definition of “continuous” is clear from the context, then you can just informally write

Axiom of choice

As mentioned before, the law of excluded middle (LEM) is actually implied by the (type-theoretic version of the) axiom of choice. So, let’s talk about it!

In set theory, one common way to state the axiom is this:

For any set  of nonempty sets, there exists a choice function  that is defined on  and maps each set  to an element of that set .

As always, a set is a predicate in our theory. So, for a set of sets, we need multiple predicates – a family of predicates.[4] We can index those predicates with an index type . If the predicates are on the type , then the indexed predicates are given by

a function  with two parameters (we used currying): an index  (to index the predicate) and an element . This may look more like a relation than a predicate, but the first element is really only there for indexing; we could also write it as “” (or even “”) instead of  (but  is more standard, so we’ll use that).

So,  plays the role of the set of sets –  in the above definition. Now, how to ensure the sets are nonempty? We will need to assume ; for all indices , there exists an  such that the predicate holds.

Finally, what is the thing we want to assert? The existence of a function  which takes an index  and returns an element of  for which the predicate holds: 

Translated: “if all the sets in the family  are non-empty, then a function  exists, which, for all sets in the family, picks an element of the set.”

However, we can make this slightly more general by dropping the requirement that all the sets come from the same base type . So, one set could be from  and another from , but there should still be a choice function that picks an element from each of these. In order to allow this, we have to replace the fixed type  with a type-valued function  that may return different types based on the value of . Then we get:[5]

where all the ’s are subsingletons. We can see that  needs to belong to a dependent function type in this more general setup.

And that’s the axiom of choice transcribed to type theory! As with the axiom of choice in set theory, for finite index types  this is actually a theorem that can be proved. (Finite types are those types where an equivalence exists between the type and a member of the family of finite types, , for some . We’ll define “equivalence” later on.) The axiom is only needed for infinite .
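The "untruncated" version mentioned in footnote 5 really is just a theorem; here is a Lean 4 sketch (the name `choiceUntruncated` is made up; `Σ'` is Lean's dependent pair allowing a propositional second component):

```lean
-- The untruncated axiom of choice: a family of dependent pairs
-- can be split into a choice function and a proof that it
-- chooses correctly.
def choiceUntruncated {I : Type} {B : I → Type}
    {P : (i : I) → B i → Prop}
    (h : ∀ i, Σ' x : B i, P i x) :
    Σ' f : (i : I) → B i, ∀ i, P i (f i) :=
  ⟨fun i => (h i).1, fun i => (h i).2⟩
```

The proof is content-free: the choice function just reads off the first components. The axiom only has force once the inner pairs are hidden behind a truncation.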

So, let’s prove the axiom of choice for the case where  is the unit type  (i.e., it’s very much finite), because we can learn something from that. In this case, we don’t really need to index  and so we can just write 

Now, with “classical” reasoning, this is easy to prove: if there exists an element in  that is in , then let  be such an element; next, we’ll just define the constant function ; surely we’ll then have .

But let’s check how this works with the underlying types:

It kind of looks like we are given the pair  where  and  is the witness for , and then it seems like we can again construct  as the constant function  and use  as the witness for .

However, we don’t actually get the pair ; we only get the double negation of it:  and we can’t apply DNE to this because the inner type might not be a subsingleton. So what can we do?

To be clear, the “classical” proof above is correct. But in order to understand why it is correct in the underlying type system, we’ll need to understand more about the operation we wrote as . We implemented this with double negation, but there are actually other ways to do this, and so this operation has a more general name: propositional truncation. It is called that because it “truncates” a type to a subsingleton, and subsingletons are what we use for propositions. We will later get to know other truncations as well.

Propositional truncation in detail

The core properties of propositional truncation are these:

  1. If some type  is inhabited, then so is . Furthermore,  is a subsingleton.
  2. If  is a subsingleton, then  is inhabited if  is inhabited.
  3. If  is a subsingleton and we have a function , then there exists a function  such that  for all  (where  is the singular element of ).

The third property means that if the inhabitedness of  can prove the proposition , then the inhabitedness of  is already sufficient to prove it. This is very powerful. It means that if you know some  is inhabited, and you want to prove the proposition  (which is a proper subsingleton), then you may just assume you have an element of  and if that’s enough to prove , then  also follows from just . This is a way to actually prove things from .
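Lean 4's built-in `Nonempty` behaves like a propositional truncation and illustrates these properties directly (this is one possible model, not the double-negation encoding used in the text):

```lean
-- Property 1: an element of A yields an element of ‖A‖
-- (and `Nonempty A` is a proposition, hence a subsingleton).
example {A : Type} (a : A) : Nonempty A := ⟨a⟩

-- Property 3: if a proposition B follows from an element of A,
-- then it already follows from ‖A‖ alone.
example {A : Type} {B : Prop} (h : Nonempty A) (f : A → B) : B :=
  h.elim f
```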

So, let’s convince ourselves that double negation (with LEM/DNE as an axiom) actually satisfies these properties. We already showed properties 1 and 2 when we introduced the notation , so we only need to show property 3. This will be the last real proof in this series.

So, say  is a subsingleton and we have an . Our goal is to construct . As a reminder:

We cannot apply DNE to this because  might not be a subsingleton. In order to construct , we will first construct , because we can easily get  from this via property 2 (i.e., just apply DNE to ). Expanding the propositional truncation, the type of  is: 

If you look carefully at that type and also consider the type of , you can come up with the following:

 where “” is the function composition of  and , defined like this:

We then get  by composing  with , the double-negation-elimination function (for the type ): 

What’s left to show is that . But this is trivial because both  and  are in  and  is a subsingleton where all elements are equal to each other. So, this proves property 3 for the truncation based on double negation.

Axiom of choice for a single set

Now that we have a better understanding of propositional truncation, let’s look again at our very finite (single set only) axiom of choice: 

According to property 3 of propositional truncation, we may actually prove this instead: 

This in turn is implied by 

because by property 1 of propositional truncation,  always implies .

So, we may use our naïve proof from above after all: From  we can construct an element of the codomain like this: . It’s then left to show that  is actually an element of . By the rules of function application,  is replaced by , so , which concludes the proof.
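The single-set case can be sketched in Lean 4, using `∃` (which is already a proposition) for the truncated hypothesis and `Nonempty` for the truncated conclusion; the constant function construction is exactly the naïve proof above:

```lean
-- Single-set axiom of choice over the one-element index type Unit:
-- from the truncated existence we build a truncated choice function.
example {A : Type} {P : A → Prop} (h : ∃ x, P x) :
    Nonempty (Σ' f : Unit → A, ∀ u : Unit, P (f u)) :=
  h.elim fun x hx => ⟨⟨fun _ => x, fun _ => hx⟩⟩
```

The outer `h.elim` is precisely property 3 at work: the conclusion is a proposition, so we may pretend to have the pair `(x, hx)` even though we only have its truncation.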

In order to prove the axiom of choice for other finite sets, we can use induction over .

I’ll leave it to you to think about why this proof fails for infinite index types.

Proving LEM/DNE from AC

The proof showing that the axiom of choice implies LEM/DNE is quite complicated, so we’ll only discuss the requirements here. In addition to the axiom of choice, the proof also requires propositional extensionality and function extensionality, which we’ll discuss in the next article.

One tricky aspect is that we of course can’t assume LEM/DNE for the proof (because that’s what we’re trying to prove), so we can’t use double negation to implement the propositional truncation that is necessary to formulate the axiom of choice. One possible solution to this problem is to assume a propositional truncation operation with the 3 necessary properties as an additional, black-box part of our type theory.

Another solution is to use the following for propositional truncation: 

however, this only works if the set  exists – the set that indexes all truth values/subsingletons. We mentioned before that  is guaranteed to exist under LEM+propositional extensionality, but, again, we can’t assume LEM, so we would instead need to explicitly assume the existence of  as an axiom for this encoding to work.

As we will later encounter other truncations though, I think it makes most sense to simply assume propositional truncation as an additional element of type theory.

Finally, is there some intuition for why AC (axiom of choice) would imply LEM? Well, you could actually view LEM as a sort of mini-AC. AC allows you to pick an element as long as you can prove that there are elements there (i.e., that the subsets are all non-empty). And similarly, LEM/DNE allows you to pick an element of  as long as you can prove that  is not inhabited. In a way, both allow you to conjure up an element out of nothing, as long as you have sufficient reason to believe that an element exists.


In the next and final installment of this series, we will, at last, find out what the deal is with extensionality. We will also discuss the celebrated univalence axiom.

  1. ^

    We can also define these for an indexed family of predicates, :

    and similarly for intersection.

  2. ^

    Note that if  is “empty” (i.e., false everywhere), then the intersection of it will be all of , which might not be what you want. So make sure  isn’t empty!

  3. ^

    Why use  when you can just use ? The problem with the “” notation is that it stops working for higher numbers, like . I think “” would be very confusing as a type.

  4. ^

    Why not a predicate of predicates? Because we later want to allow the predicates to live in different base types, which you can only express with a family. And anyway, you can turn a predicate of predicates  easily into a family of predicates; just use the corresponding subtype, , as the index type. The family is then given by the function  where  is the witness for .

  5. ^

    You could call this the “truncated version” of the axiom of choice in type theory because “” gets translated to . The “untruncated version” with plain sum types  is a theorem in type theory.
