There have been many encodings of [indexed] inductive-recursive types, but as I developed Spire I came to desire an encoding that satisfied all of the following criteria: chiefly, that the description (Desc) datatype enforce structure that is as similar to a higher-level constructor telescope declaration as possible. From the descriptions literature, this means extending the propositional encoding [McBride] rather than the (albeit more general) computational encoding [Dagand]. From the IR literature, this means staying away from the more semantic encodings described by Ghani & Hancock, as well as avoiding subtle differences from the original encoding by Dybjer and Setzer, as reviewed by Malatesta et al.

I’m going to start with descriptions for non-indexed IR types, and move to indexed IR types in the subsequent section. Additionally, I will mainly focus on the definition of descriptions, and on encoding types with descriptions. The harder part is defining the fixpoint for encoded types and the corresponding induction principle, which I briefly gloss over at the end of the post. All of the code presented in this post is linked to in the conclusion.
A common use of IR is to model a dependently typed language. Below is the definition of an example language. The datatype definition (Lang) is mutually defined with an interpretation function whose codomain is Set. Because the codomain is Set (rather than some small type like ℕ), this makes it a large inductive-recursive definition.
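To fix ideas, the kind of mutual definition described here can be sketched as follows (the Tree case and the universe-lifting noise are omitted, and names are only suggestive of the real code):

```agda
-- assuming: open import Data.Bool, Data.Empty, Data.Unit, Data.Product
mutual
  data Lang : Set where
    Two Zero One : Lang
    Pair Fun : (A : Lang) (B : ⟦ A ⟧ → Lang) → Lang

  -- the interpretation function, defined mutually with Lang;
  -- its codomain is Set, making this large induction-recursion
  ⟦_⟧ : Lang → Set
  ⟦ Two ⟧      = Bool
  ⟦ Zero ⟧     = ⊥
  ⟦ One ⟧      = ⊤
  ⟦ Pair A B ⟧ = Σ ⟦ A ⟧ λ a → ⟦ B a ⟧
  ⟦ Fun A B ⟧  = (a : ⟦ A ⟧) → ⟦ B a ⟧
```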
I will encode Lang using the following description datatype (Desc). Desc is parameterized by a datatype O, which is the type of the codomain of the IR interpretation function. It is easiest to think of the constructors of Desc as the pieces used to form the telescope of a constructor type declaration, such as Two and Pair in Lang.
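Concretely, a Dybjer–Setzer-style Desc matching this prose can be sketched like so (universe levels simplified; the post's real definition is universe polymorphic):

```agda
data Desc (O : Set₁) : Set₁ where
  End : (o : O) → Desc O                           -- end; o is the IR result
  Rec : (D : O → Desc O) → Desc O                  -- recursive argument; the rest
                                                   -- sees its interpretation
  Ref : (A : Set) (D : (A → O) → Desc O) → Desc O  -- recursive function argument
                                                   -- with domain A
  Arg : (A : Set) (D : A → Desc O) → Desc O        -- ordinary argument of type A
```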
End ends a constructor declaration, and specifies what the mutually-defined IR interpretation function returns for the constructor. Rec encodes a request for a recursive first-order argument, and the remainder of the telescope may depend on the interpretation function applied to this recursive argument. Ref encodes a request for a recursive function argument. To remain strictly positive, Ref asks for the type of the domain of the function argument, and the remainder of the telescope may depend on a function from a value of that domain to the interpretation function applied to the result of the recursive function that Ref encodes. Arg records an ordinary argument by requesting the type of the argument, and the remainder of the telescope may depend on a value of that type.
Below is the encoding of the Lang datatype, and its interpretation function, as a description.
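For instance, the Pair branch of that encoding has roughly this shape (a hypothetical standalone fragment, ignoring the Arg of constructor tags and the Lift/lift insertions):

```agda
PairD : Desc Set
PairD =
  Rec λ A →      -- recursive argument; A is its interpretation
  Ref A λ B →    -- recursive function argument whose domain is A
  End (Σ A B)    -- Pair is interpreted as the expected Σ type
```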
The Two constructor takes no arguments, and its interpretation function returns the Bool type; the Zero and One cases are similar.

The Pair constructor first takes a recursive argument, so the rest of the telescope can depend on its interpretation. Then, it takes a recursive function argument whose domain is the interpretation of the first argument. Once again, the remainder of the telescope may depend on the function from the requested domain to the interpretation function result. Finally, the constructor declaration ends by saying that the interpretation of Pair as a whole is the sigma type (Σ) that we expect. The Fun and Tree constructors are encoded in the same way that Pair is. Also, I sprinkle Lift and lift in the right places to make the universe levels work out.
Another standard example of a small inductive-recursive definition is an interpreter for an arithmetic language, as presented by Malatesta et al. The idea is to express mathematical sums and products, where the domain of the sum or product specifies the bound on the iteration. See the paper for an in-depth explanation, but below is the high-level code for the Arith type.
Note that it is a small IR type because the codomain of the interpretation function is a small type (ℕ). Below is the Desc encoding, which is very similar to what Malatesta et al. give.
Both the Lang and Arith types had interpretations whose codomain was some constant value (Set and ℕ respectively). More generally, an inductive-recursive type may be indexed, and the codomain of the interpretation function may also be a type family. For example, you can imagine a type like Type : ℕ → Set where the index type is I = ℕ, and an interpretation function eval : (n : ℕ) → Type n → Fin n where the interpretation type is O = Fin.
Below is the description type Desc, modified to be able to encode indexed inductive-recursive definitions. It now takes an index type (I : Set ℓ) as an additional parameter, and the interpretation type is now dependent (O : I → Set ℓ).
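A sketch of its shape, with universe levels simplified to Set:

```agda
data Desc (I : Set) (O : I → Set) : Set₁ where
  End : (i : I) (o : O i) → Desc I O
  Rec : (i : I) (D : O i → Desc I O) → Desc I O
  Ref : (A : Set) (i : A → I)
        (D : ((a : A) → O (i a)) → Desc I O) → Desc I O
  Arg : (A : Set) (D : A → Desc I O) → Desc I O
```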
The new Desc is a mixture of the old one (which is a slightly modified Dybjer–Setzer IR type) and the indexed descriptions of McBride. Recall that descriptions encode constructor argument telescopes. At the end of a telescope (End), we must now specify what index the type is at, as well as the (dependent) value that the interpretation function returns for the constructor being defined. A recursive (Rec) argument requires the index for the requested recursive type, and the remainder of the telescope of arguments may use the (dependent) result of the interpretation function. A recursive function argument (Ref) requires the type of the domain of the recursive argument, an index value (I) assuming the requested argument (A), and the rest of the constructor argument telescope may depend on a function from the requested argument to the (dependent) interpretation function call. Finally, requesting an ordinary argument (Arg) is just as before.
So far we have only looked at descriptions, but they are relatively
useless and easy to get wrong if you haven’t defined the corresponding
fixpoint type (which interprets a description of a type as an actual
type). First let’s look at El
, which interprets a description as a
functor between indexed families of sets.
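Its shape can be sketched as follows (a reconstruction from the surrounding prose: the End case constrains the index with an equality proof, and Y supplies the interpretation of recursive positions):

```agda
-- assuming: open import Data.Product, Relation.Binary.PropositionalEquality
El : {I : Set} {O : I → Set} (D : Desc I O)
     (X : I → Set) (Y : (i : I) → X i → O i) → I → Set
El (End j o)   X Y i = j ≡ i
El (Rec j D)   X Y i = Σ (X j) λ x → El (D (Y j x)) X Y i
El (Ref A j D) X Y i = Σ ((a : A) → X (j a)) λ f →
                       El (D (λ a → Y (j a) (f a))) X Y i
El (Arg A D)   X Y i = Σ A λ a → El (D a) X Y i
```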
El gets an additional argument (Y), representing the interpretation function of the described datatype. To define El, we must use Y wherever we need to get the description from the codomain (B) of a recursive first-order (Rec) or higher-order (Ref) argument.
More interestingly, we can now define the fixpoint datatype μ
.
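A sketch of μ together with the foldO/foldsO trick discussed below (names follow the prose; the mutually defined, specialized foldsO is what satisfies Agda's termination checker):

```agda
mutual
  data μ {I : Set} {O : I → Set} (D : Desc I O) : I → Set where
    init : ∀ {i} → El D (μ D) (foldO D) i → μ D i

  foldO : ∀ {I O} (D : Desc I O) (i : I) → μ D i → O i
  foldO D i (init xs) = foldsO D D i xs

  -- a specialized, inlined version of the generic ⟦_⟧₁
  foldsO : ∀ {I O} (D E : Desc I O) (i : I)
         → El E (μ D) (foldO D) i → O i
  foldsO D (End j o)   i refl     = o
  foldsO D (Rec j E)   i (x , xs) = foldsO D (E (foldO D j x)) i xs
  foldsO D (Ref A j E) i (f , xs) =
    foldsO D (E (λ a → foldO D (j a) (f a))) i xs
  foldsO D (Arg A E)   i (a , xs) = foldsO D (E a) i xs
```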
I tried to define this a couple of years ago but got stuck for some reason. I tried once again a couple of days ago and arrived at this definition. In the IR literature, El is sometimes referred to as ⟦_⟧₀, and a more general version of foldsO is referred to as ⟦_⟧₁. However, to get foldO to pass Agda’s termination check, we must inline a mutually defined specialized version of ⟦_⟧₁, namely foldsO. This is basically the same trick used to get the elimination principle ind to terminate by inlining hyps. I remember trying to reuse this termination technique a couple of years ago and failing, but anyhow, there it is.
Finally, for the sake of completeness, below are the adapted definitions of Hyps (a collection of inductive hypotheses), ind (the primitive induction principle for described types), and hyps (a specialized mapping function to collect inductive hypotheses).
Well, there it is: indexed inductive-recursive descriptions that satisfy all of my goals stated at the beginning of the post! The beginning of the post was background on non-indexed IR encoding. However, the useful bit of code to reuse is in the previous section, which includes the universe-polymorphic, indexed, inductive-recursive, constructor-esque descriptions, as well as their primitive introduction and elimination rules.
The code used throughout the post is linked below:
If you want some homework, try inventing your own indexed inductive-recursive type, and encode it both in Agda and as a description. You can also borrow a type to encode from Dybjer & Setzer’s paper on the topic.
My desired features for a bare-bones version 1 of Spire include:
Rather than implementing this all at once, certain features are gradually being added. However, it is still a good idea to have future features in mind when implementing something, and the Agda model helps with that. The process goes like this:
Elaboration of a high-level term can involve many steps that are individually easy to follow but produce a complex final term, and it is worth considering alternative core type theory constructs to produce simpler final terms. These sorts of before-and-after pictures, and most concepts in this post, can be found in Pierre Dagand’s thesis.
All of the code from this post can be found in Spire. Additionally, each code snippet contains a link to the specific file in the top right corner.
When first implementing the Description technology, it will be convenient to have a sufficiently complex example to typecheck. The following standard sequence of types and functions suits this goal.
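For reference, the standard sequence is the usual one; a sketch in plain Agda:

```agda
data ℕ : Set where
  zero : ℕ
  suc  : ℕ → ℕ

add : ℕ → ℕ → ℕ
add zero    n = n
add (suc m) n = suc (add m n)

mult : ℕ → ℕ → ℕ
mult zero    n = zero
mult (suc m) n = add n (mult m n)

data Vec (A : Set) : ℕ → Set where
  nil  : Vec A zero
  cons : ∀ {n} → A → Vec A n → Vec A (suc n)

append : ∀ {A m n} → Vec A m → Vec A n → Vec A (add m n)
append nil         ys = ys
append (cons x xs) ys = cons x (append xs ys)

-- each definition builds on the previous ones, ending with concat
concat : ∀ {A m n} → Vec (Vec A m) n → Vec A (mult n m)
concat nil           = nil
concat (cons xs xss) = append xs (concat xss)
```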
This sequence of functions has the nice property that each function builds upon the previous ones (either in its type, value, or both), ending in the definition of concat. Furthermore, concat has a moderately complicated dependent type, but only eliminates type families applied to a sequence of variables. Eliminating type families applied to expressions built from constructors requires more clever motive synthesis (via Eliminating Dependent Pattern Matching by Goguen et al.) that I would like to ignore for this first pass.
Translating these basic functions into eliminators is straightforward. Because we only eliminate type families applied to a sequence of variables, the branch functions supplied to the eliminator look like pattern matching, and the whole definition is rather compact.
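For instance, add in eliminator style looks roughly like this (a sketch; elimVec and the remaining definitions follow the same pattern):

```agda
elimℕ : (P : ℕ → Set)
      → P zero
      → ((n : ℕ) → P n → P (suc n))
      → (n : ℕ) → P n
elimℕ P pz ps zero    = pz
elimℕ P pz ps (suc n) = ps n (elimℕ P pz ps n)

-- the branch functions read just like the pattern-matching clauses
add : ℕ → ℕ → ℕ
add = elimℕ (λ _ → ℕ → ℕ) (λ n → n) (λ m ih n → suc (ih n))
```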
Now we will consider Descriptions as they appear in Dagand’s thesis, which are the core type theory analogue of surface language datatype definitions.
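Their shape can be sketched as follows (constructor names and universe levels differ slightly from the thesis; note that μ takes a function from the index to a description):

```agda
data Desc (I : Set) : Set₁ where
  `⊤ : Desc I                               -- trivial (end of description)
  `X : (i : I) → Desc I                     -- recursive position at index i
  `Σ : (A : Set) (D : A → Desc I) → Desc I  -- argument; the rest may depend on it
  `Π : (A : Set) (D : A → Desc I) → Desc I  -- infinitary (function) argument

⟦_⟧ : ∀ {I} → Desc I → (I → Set) → Set
⟦ `⊤ ⟧     X = ⊤
⟦ `X i ⟧   X = X i
⟦ `Σ A D ⟧ X = Σ A λ a → ⟦ D a ⟧ X
⟦ `Π A D ⟧ X = (a : A) → ⟦ D a ⟧ X

data μ {I : Set} (D : I → Desc I) (i : I) : Set where
  con : ⟦ D i ⟧ (μ D) → μ D i
```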
A well-known isomorphism exists between sums and dependent products whose domain is some finite collection. To encode a type such as ℕ, we can use a Σ whose domain is an index into an enumeration of the constructor names zero and suc.
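In the Dagand-style grammar described above, that looks roughly like this (a sketch; the post uses string-based tags rather than a bespoke tag datatype):

```agda
data ℕT : Set where `zero `suc : ℕT

ℕD : ⊤ → Desc ⊤
ℕD tt = `Σ ℕT λ { `zero → `⊤ ; `suc → `X tt }

ℕ : Set
ℕ = μ ℕD tt

zero : ℕ
zero = con (`zero , tt)

suc : ℕ → ℕ
suc n = con (`suc , n)
```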
If you’ve been reading carefully, you noticed that μ did not take a Description as an argument, but rather a function from the index to a description. Certain type families can be defined computationally (as functions from their index), as in Inductive Families Need Not Store Their Indices by Brady et al. Eliminating functions defined in this style leads to particularly nice reduction behaviour, buying you free equations thanks to definitional equality. ℕ was not indexed, but below is an example of defining Vec as a computational description.
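A sketch of the computational definition (assuming an ordinary ℕ datatype for the index, for readability):

```agda
VecD : (A : Set) → ℕ → Desc ℕ
VecD A zero    = `⊤               -- only nil lives at index zero
VecD A (suc n) = `Σ A λ _ → `X n  -- cons: a head, and a tail at index n

Vec : (A : Set) → ℕ → Set
Vec A n = μ (VecD A) n
```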
Rather than using a standard eliminator on datatypes defined using descriptions, the special ind elimination rule is used. An eliminator has separate “branches” for each constructor of a datatype, along with proofs of the motive being satisfied at recursive positions in the constructor. Instead, ind has a single branch (called pcon below) that bundles up all branches of a typical eliminator, along with an All argument for all recursive motive proofs.
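A sketch of ind over the Dagand-style computational descriptions in this section, with All collecting the recursive motive proofs (names are approximate; hyps is the inlined helper mentioned earlier):

```agda
All : ∀ {I} (D : Desc I) {X : I → Set}
    → ((i : I) → X i → Set) → ⟦ D ⟧ X → Set
All `⊤       M tt       = ⊤
All (`X i)   M x        = M i x
All (`Σ A D) M (a , xs) = All (D a) M xs
All (`Π A D) M f        = (a : A) → All (D a) M (f a)

mutual
  -- a single branch, pcon, bundles all constructor cases;
  -- All packages the recursive proofs of the motive M
  ind : ∀ {I} (D : I → Desc I) (M : (i : I) → μ D i → Set)
      → ((i : I) (xs : ⟦ D i ⟧ (μ D)) → All (D i) M xs → M i (con xs))
      → (i : I) (x : μ D i) → M i x
  ind D M pcon i (con xs) = pcon i xs (hyps D (D i) M pcon xs)

  -- inlined collection of inductive hypotheses (for termination)
  hyps : ∀ {I} (D : I → Desc I) (E : Desc I)
         (M : (i : I) → μ D i → Set)
       → ((i : I) (xs : ⟦ D i ⟧ (μ D)) → All (D i) M xs → M i (con xs))
       → (xs : ⟦ E ⟧ (μ D)) → All E M xs
  hyps D `⊤       M pcon tt       = tt
  hyps D (`X i)   M pcon x        = ind D M pcon i x
  hyps D (`Σ A E) M pcon (a , xs) = hyps D (E a) M pcon xs
  hyps D (`Π A E) M pcon f        = λ a → hyps D (E a) M pcon (f a)
```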
Using this eliminator we can define our running example of function definitions. Here we use ind rather than pattern matching. The anonymous function argument represents sugared “{}” syntax from Dagand’s thesis, Example 3.19. Additionally, the arguments bound in each constructor pattern match clause are desugared into projections on the right-hand side. We will see what the final desugared terms look like later in this post.
Definitions using computational descriptions (like the ones above) are
nice because you can pattern match on the index (e.g. ℕ
) and the
description for the type family (e.g. Vec
) definitionally unfolds.
However, things get a bit clunkier once we wish to support named
constructor arguments. Notice that we defined ℕ with an enumeration for the constructor names, but we did not do the same for Vec.
Example 7.46 in Dagand’s thesis shows how to elaborate Vec into a description that has named constructor arguments. This involves first wrapping the description in an elim constructor to identify Vec as a type defined by computation over its index. Then, the zero and suc branches return zero and suc constructor tags respectively. In this case, the constructors index into singleton enumerations, i.e. elim into [elim], zero into [zero], and suc into [suc]. If we were defining a type that had multiple constructors at the same index, then the enumeration for that index branch would not be a singleton, but it would still only be a sub-enumeration of the total enumeration of constructors that we have in mind for the type.
In contrast, the ℕ description’s tag constructors both belong to a more natural enumeration, i.e. zero and suc index into [zero, suc]. Hence, although functions like append and concat defined above over the computational Vec description look nice, once you add these singleton tags and desugar everything, you get lots of eliminations over singleton enumerations that are IMO no longer as elegant.
Additionally, type families defined by computation over the index are
only a subclass of all possible type families. The remaining types
(and actually, all type families) can be alternatively defined by
constraining the index with a propositional equality proof. See Dagand
Example 7.45
for how to define Vec
this way. This type of definition
keeps the more natural enumeration of constructor tags. I will call
types defined this way “propositional descriptions”.
Although computational descriptions give you an additional way to define types, in practice, once you add named constructors and perform elaboration of patterns to eliminators, I don’t feel like they buy you enough for the additional complexity. I am content with supporting Agda-style propositionally defined datatypes exclusively. Given this decision, we can change the grammar of descriptions to more closely resemble surface language Agda-style datatype declarations. I saw something like this alternative Desc definition in the code accompanying a blog post by Guillaume Allais.
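Its shape can be sketched like this, with the interpretation internalizing the index-equality constraint at End (a reconstruction; Allais's actual definitions differ in detail):

```agda
-- assuming: open import Data.Product, Relation.Binary.PropositionalEquality
data Desc (I : Set) : Set₁ where
  End    : (i : I) → Desc I                             -- "constructor" ends at index i
  Rec    : (i : I) (D : Desc I) → Desc I                -- first-order recursive argument
  RecFun : (A : Set) (i : A → I) (D : Desc I) → Desc I  -- higher-order recursive argument
  Arg    : (A : Set) (D : A → Desc I) → Desc I          -- ordinary argument

El : ∀ {I} → Desc I → (I → Set) → I → Set
El (End j)        X i = j ≡ i   -- the internalized Σ (x ≡ y) λ _ → ⊤ pattern
El (Rec j D)      X i = X j × El D X i
El (RecFun A j D) X i = ((a : A) → X (j a)) × El D X i
El (Arg A D)      X i = Σ A λ a → El (D a) X i

-- μ now takes a description directly, not a function from the index
data μ {I : Set} (D : Desc I) (i : I) : Set where
  con : El D (μ D) i → μ D i
```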
This description grammar forces descriptions to look like what we are used to seeing in datatype declarations. For example, Rec/Arg/RecFun, corresponding to the previous X/Σ/Π constructors, take an extra description argument at the end. Then End, formerly ⊤, ends the “constructor” with an index value. The interpretation function uses this index value to ask for a propositional equality proof, making sure that the index of the constructor you produce matches the index of the type you specified. This can be achieved in the previous Desc grammar by ending a description with Σ (x ≡ y) λ _ → ⊤, but here that pattern is internalized. One pleasant consequence can be seen by looking at the μ datatype. It no longer requires a function from the index to a description, and now merely requires a description. Because we no longer support computationally described datatypes (instead describing them all propositionally), our descriptions can be first-order rather than higher-order. The more first-order your descriptions are, the more fully generic programming you can do over them.
The ℕ
datatype is declared pretty much the same as before. However,
Vec
is now given with its constructor names, and the index of a
particular constructor is given at the end of the sequence of
constructor arguments. Compare this to the Agda data declaration at
the top of the post and notice the similar structure.
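In this grammar, ℕ and Vec can be sketched as follows (constructor tags as small datatypes rather than strings, and an ordinary ℕ for the index):

```agda
data ℕT : Set where `zero `suc : ℕT
data VecT : Set where `nil `cons : VecT

ℕD : Desc ⊤
ℕD = Arg ℕT λ { `zero → End tt ; `suc → Rec tt (End tt) }

VecD : (A : Set) → Desc ℕ
VecD A = Arg VecT λ
  { `nil  → End zero
  ; `cons → Arg ℕ λ n → Arg A λ _ →
            Rec n (End (suc n)) }  -- the index is given at the end
```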
Our function definitions for add
and mult
are pretty much
unchanged, but append
and concat
have one significant difference.
The clauses for both Vec constructors do a dependent pattern match on a propositional equality proof. However, this is once again a rather
simple dependent match that can just be elaborated to uses of
substitution. Specifically, this elaboration is the solution
step of
Lemma 16
in
Eliminating Dependent Pattern Matching.
I will now show the desugared final forms of the propositional description code given so far. It is very important to study these terms carefully, as they are the terms of our canonical type theory and will appear as types everywhere throughout our language (because types are fully evaluated terms).
The first bit of sugar we will get rid of has to do with the pattern
matching we have been performing on finite enumerations of tags
(representing constructor names). The enumeration Enum
will be a
list of strings. Tag
is an index into Enum
(like Fin
into ℕ
).
Cases
is a finite product of values in a type family indexed by each
Tag
(like Vec
with type family values indexed by ℕ
). Finally,
case
eliminates a tag by returning the value in Cases
at that
position/name. I’ve renamed these constructs; their original names in Dagand are EnumU, EnumT, π, and switch, and they can be found in Definition 2.49 and Definition 2.52.
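A sketch of these four constructs, using a list of strings for Enum as described:

```agda
-- assuming: open import Data.List, Data.String, Data.Product, Data.Unit
Enum : Set
Enum = List String

-- a Tag is an index into an Enum, like Fin into ℕ
data Tag : Enum → Set where
  here  : ∀ {l E} → Tag (l ∷ E)
  there : ∀ {l E} → Tag E → Tag (l ∷ E)

-- a finite product of values, one per tag
Cases : (E : Enum) (P : Tag E → Set) → Set
Cases []      P = ⊤
Cases (l ∷ E) P = P here × Cases E (λ t → P (there t))

-- eliminate a tag by projecting out the value at its position/name
case : (E : Enum) (P : Tag E → Set) (cs : Cases E P) (t : Tag E) → P t
case (l ∷ E) P cs here      = proj₁ cs
case (l ∷ E) P cs (there t) = case E (λ t → P (there t)) (proj₂ cs) t
```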
In the sugared version of descriptions for datatypes we match on a tag
and return a description for it. In the desugared version, we instead
eliminate a tag with the special case
elimination rule.
Now brace yourself for the rather wordy desugared version of our
series of function definitions. The general pattern for these is that
we first use ind
on the datatype being eliminated (like before),
then we use case
to give a branch for each constructor. Because
case
eliminates a tag out of the domain of a dependent pair,
we must use the
convoy pattern
to have the codomain properly unfold. As mentioned before, “matching” on
our propositional equality proof is done by applying subst
. Finally,
each argument to a constructor is referenced by its projection out of
the tuple of arguments you actually get out of the constructor.
Alternatively, rather than defining these functions with ind
,
case
, and subst
directly, we can define the eliminators using
ind
and reuse our former definitions that use eliminators.
We have already seen how large the desugared code gets in the previous
section. Unfortunately, if you evaluate this code to canonical form it
gets much bigger! For example, the
canonical term
for concat
defined
using ind
is 2,195 lines long! This is a huge term, considering
the
surface language source
for defining the types and values of add
, mult
, append
, and
concat
is 14 lines long (the line count above only accounts for
the value of concat by itself).
Some negative consequences of large canonical terms include:
Some of this explosion in size comes from using eliminators instead of
dependent pattern matching, where the motive must be supplied
explicitly. Some more comes from the fact that a μ
representing a
type is applied to its description, which may be large, so anywhere
the type appears the whole description is duplicated.
I haven’t thought that much about solutions to this large term problem yet, but I can imagine a few. I don’t want to perform implicit argument search and unification in the canonical type theory because it complicates it too much. However, the current canonical grammar is already broken up into values and spines. This allows for some bidirectional argument synthesis for values, but not for elimination rules. Breaking up the grammar further would allow for synthesis of arguments to elimination rules too, and the canonical type checker would remain relatively simple. Here is a file that adds some implicit arguments to the definitions presented thus far that I believe could be synthesized. I didn’t try that hard, so there is more room for making things more implicit, but that at least takes the line count down to 1,411.
An interesting thing to notice is that the way elimination proceeds
when writing definitions with ind
, case
, and subst
is rather
uniform. All definitions of datatypes can already be characterized as
codes of a universe called tagDesc
in Dagand Definition 4.23
.
Dagand programs over this universe to perform generic programming. One form
of that would be a specialized indcase
definition that automatically
applies ind
, then
case
, then maybe subst
too. Ideally, I would like a generic
elim
function that computes the exact type signature expected from
standard eliminators from each description. This basically involves
additionally doing some currying/uncurrying to pack/unpack values out of the tuple
produced by the interpretation function for descriptions. As we have
seen, programming definitions using the interface exposed by
eliminators leads to
pretty short code.
The motive is still there and descriptions are still duplicated in
every occurrence of μ
, but we need to win this war by winning many
battles. However, even if you programmed this generic elim
function
within the language, the canonical term it produces would be just as
big. It may be necessary to add elim
as a canonical term primitive
instead, and we may not even need the more general ind
or case
if
our language never produces code in the more general form where ind
or case
would be necessary.
Another technique might be to avoid expanding definitions where possible. For example, if you have a unique hash representing the description for a type, then testing the equality of terms using the same hash value would work. However, I would need to be careful to evaluate that hash to a concrete term during type checking, a form of lazy evaluation. Memoization of previously typechecked terms, and of equality comparisons, would also help a lot because there are a lot of duplicated terms.
Using a tag that indexes into an enumeration as the domain of a
dependent pair type is isomorphic to using a Sum
type. This
alternative approach is taken by Bob Atkey in
Foveran. Well, the isomorphism
is almost there. A tag for an enumeration lets you have named sums,
used because we care about the name of our constructors. There are a
variety of ways to accomplish this with sums, like making a new
sum-like type, or always making the first type of a sum be a
labelled type
(see
The view from the left
) and ending the chain of sums in ⊥
on the right. This would still
allow you to perform generic programming by having a list of tuples of
strings plus descriptions act as the universe of codes
(just like tagDesc
), which gets interpreted as a description,
which gets interpreted as a sequence of sums of the form that I just
described. There are a number of pros and cons between the tag and sum
approach that I should study more closely, but I think a lot of the
duplication issues would come up in either case.
That wraps up this week’s blog post. I think a weekly schedule is good for this kind of development blog, as it gives me enough time to come up with material worth blogging about and enough time for interested readers to keep up.
My friend and colleague Nathan is also hacking on it nowadays. The goals of the project are as follows:
For the most part, this comes down to implementing selective pieces of Epigram. More specifically, we’re following a lot of what Pierre Dagand has wonderfully described in his thesis.
I experimented with a few different project structures, and my favorite ended up using a canonical type theory for the core along with hereditary substitution. I learned about hereditary substitution from a beautiful paper describing how to formalize termination of evaluation for the STLC by Keller and Altenkirch. The technique originally came out of work on the metatheory of LF/Twelf. Martens and Crary have a nice modern paper on how to use this to formalize LF in LF.
In a canonical type theory your terms are grammatically enforced to be in β-normal form. This has a number of desirable consequences, for instance:
Because so many algorithms in a dependently typed language take or return redex-free terms, having a canonical type theory makes things easier and less error-prone. Canonical terms are no longer closed under substitution, hence hereditary substitution evaluates as it substitutes to remove redexes.
I originally tried to implement canonical Spire in Agda, extending Keller and Altenkirch’s typed STLC terms to dependent types. In Twelf the canonical type theory even makes termination of hereditary substitution immediate. However, Twelf’s termination argument does not easily extend to dependent type theory with large eliminations (functions that return types). Even if you turn off the termination checker, getting the rest of the definitions to type check is difficult because of the multitude of mutual definitions. Here is a file that postulates the hereditary substitution function, but still defines all the other semantic functions that would be used when implementing hereditary substitution. Somewhere on my hard drive exists a stuck attempt at defining substitution where I ran into trouble with all the mutual definitions.
In any case, making the typed syntax above pass type checking would still leave the open problem of termination. Instead, Spire is implemented in Haskell with untyped syntax and a partial (monadic) hereditary substitution semantics. I’m working on trying to prove termination of this untyped semantics in Agda, which is easier due to not having so many mutual definitions, and many future blog posts will cover my slow progress on that front.
Ah yes, so the project structure of Spire. The idea is to have a surface syntax of high-level constructs and expressions that elaborates to the closed core type theory. This idea is the hallmark of Epigram. Examples of this include Conor McBride’s work on compiling dependent pattern matching to eliminators, and Pierre Dagand’s work on compiling data declarations to descriptions. Dagand’s thesis describes this process well. A minor difference is that we are elaborating to canonical terms rather than a core theory that includes expressions. The metatheorem for soundness of elaboration described by Dagand corresponds to type preservation: after elaboration of a well-typed surface term, you get a well-typed core term. In the eventually formalized Spire canonical type theory, this will be proven. In the current Haskell version, this is dynamically checked rather than proven.
Spire is currently split into 3 languages. The top language called
Surface
is what the user programs in. Elaboration from surface proceeds to
Expression.
Expressions are like surface terms, but use a two-part bidirectional syntax. Other syntactic elaborations (those that need not be type directed) will also be performed here in the future. Nathan has been working on the implicit arguments front of the project, and Spire currently supports wildcard arguments. Elaboration from Surface to Expression also removes wildcards and introduces metavariables into the context. In the future, Surface will contain other high-level constructs like data declarations and pattern matching syntax. Elaboration proceeds from expressions to
Canonical
terms. This performs type checking, introduces and solves unification
problems, and removes β-redexes. To keep the canonical terms smaller,
we only require them to be checkable rather than
inferrable/synthesizable. Canonical terms can be checked
bidirectionally, as they are already grammatically split into Values
and neutrals/spines. This works so long as every eliminator only
eliminates something inferred and the rest of the arguments are
checkable. A counterexample of this is the if
statement helper in
Expression. Therein the types of the two branches need to be inferred
rather than checked, but if
appears as a specialized elimBool
in
the canonical theory (once we have full implicit argument support,
if
can go away in expressions too, but this is a nice example of what
would break bidirectional canonical type checking). Finally,
there are embedding functions to go back up the chain of languages. Another
metatheorem
that appears as a dynamic check in Spire is: if you evaluate a
well-typed term to a canonical, then embed it back up to a surface
term and evaluate it again, you get back the same canonical term. This
becomes more important as the surface and canonical languages diverge
more. Embedding is used to pretty-print canonicals after evaluation,
for example in error messages. Although messing up embedding does not
affect consistency, as a practical matter it would confuse the user if
type errors contained the wrong terms due to embedding bugs!
The implicit argument work by Nathan (I’ll ask him if he wants to describe it) was inspired by Asperti et al.’s Matita work and somewhat by Edwin Brady’s Idris work. It relies on the higher-order unification algorithm and library by Gundry, and currently translates between Spire terms and Gundry terms.
With something like NbE in Lambda Pi you get to inherit binding structure and substitution from the metalanguage. In Spire binding must be implemented directly, but thanks to the Unbound library a lot of this can be automated. We also added a monadic extension that makes it possible to use Unbound for hereditary substitution, used like this in Spire.
My near-term next steps are going to be adding Descriptions and implementing Dagand’s data declaration elaboration work, as well as working on trying to formalize termination of Spire’s canonical theory. We’re leaving type-in-type in the Haskell code for now, as I’m comfortable enough with universe hierarchies to add them later (famous last words).
That’s it for now folks, see you tomorrow with the start of a nice new
term. Also, Nathan and I have started to idle in #spirelang
on
freenode
, so feel free to idle alongside us.