Research and projects
Below I summarize my most notable outputs, together with the recommended up-to-date version for each. This is not a complete list of my publications.
Metatheory of quotient inductive-inductive and higher inductive-inductive types
This is the topic of my PhD thesis, titled Type-Theoretic Signatures for Algebraic Theories and Inductive Types. This is the most up-to-date version of this line of research; my older papers on the same topic are mostly superseded by it. One paper is not superseded, because the thesis only summarizes it without improving or revising it:
- For Finitary Induction-Induction, Induction is Enough, with Ambroise Lafont and Ambrus Kaposi
These slides about my thesis might also be interesting.
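To give a flavor of induction-induction, here is the standard motivating example (a generic sketch, not taken verbatim from the thesis): contexts and types of a small type theory are defined simultaneously, with Ty indexed over Con, so neither can be defined before the other.

```latex
% Standard motivating example of an inductive-inductive signature:
% contexts and types, defined at the same time.
\begin{aligned}
&\mathsf{Con} : \mathsf{Set} \\
&\mathsf{Ty}  : \mathsf{Con} \to \mathsf{Set} \\[4pt]
&\bullet : \mathsf{Con} \\
&{\rhd} : (\Gamma : \mathsf{Con}) \to \mathsf{Ty}\,\Gamma \to \mathsf{Con} \\[4pt]
&\iota : (\Gamma : \mathsf{Con}) \to \mathsf{Ty}\,\Gamma \\
&\Pi : (\Gamma : \mathsf{Con})(A : \mathsf{Ty}\,\Gamma) \to \mathsf{Ty}\,(\Gamma \rhd A) \to \mathsf{Ty}\,\Gamma
\end{aligned}
```

The constructor ▷ of Con takes a Ty argument while Ty itself is indexed over Con; signatures of this shape are among those that the thesis treats in general.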
Efficient evaluation for cubical type theories
Existing cubical type theory implementations all suffer from efficiency problems. I have a WIP repository with a prototype implementation that has major new optimizations.
Staged compilation & metaprogramming
- ICFP 2024: Closure-Free Functional Programming in a Two-Level Type Theory. Paper. Code supplement.
- ICFP 2022: Staged Compilation With Two-Level Type Theory. Paper. Appendix. Implementation.
- Staged push/pull fusion in typed Template Haskell. This is mostly superseded by the ICFP 2024 paper.
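To illustrate the basic flavor of staging in typed Template Haskell, here is a tiny generic example (my own illustration, not code from the papers or the repository; it assumes GHC ≥ 9.0, where typed quotations have type `Code Q a`):

```haskell
{-# LANGUAGE TemplateHaskell #-}
module StagedPower where

import Language.Haskell.TH.Syntax (Code, Q)

-- The exponent is a compile-time (meta-level) Int, the base is
-- object-level code. The recursion runs entirely during code
-- generation, so the residual program contains only multiplications.
power :: Int -> Code Q Int -> Code Q Int
power 0 _ = [|| 1 ||]
power n x = [|| $$x * $$(power (n - 1) x) ||]

-- At a use site (in another module, because of the stage restriction):
--
--   pow5 :: Int -> Int
--   pow5 x = $$(power 5 [|| x ||])
--
-- which elaborates to  x * (x * (x * (x * (x * 1)))).
```

In two-level terms, the plain `Int` argument is meta-level data, while `Code Q Int` is object-level code that ends up in the compiled program.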
High-performance elaboration with dependent types
- smalltt is an implementation of elaboration for a minimal dependently typed language, which demonstrates a collection of performance optimization techniques.
- normalization-bench is a benchmark suite for conversion checking and normalization for pure lambda terms in a number of different languages and runtime systems, with an emphasis on machine-code-compiled HOAS (higher-order abstract syntax). It’s a bit outdated now (implementations and runtime options could be improved), but I think it’s still informative.
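In both cases the evaluation core is normalization-by-evaluation (NbE). Here is a minimal sketch of the HOAS style that normalization-bench emphasizes, where object-language binders are represented by host-language functions (my own illustration, not code from either repository):

```haskell
-- Minimal normalization-by-evaluation with a HOAS value domain.
data Tm
  = Var Int            -- de Bruijn index
  | Lam Tm
  | App Tm Tm
  deriving Show

data Val
  = VLam (Val -> Val)  -- HOAS: the binder is a Haskell closure
  | VVar Int           -- de Bruijn level, used during readback
  | VApp Val Val

type Env = [Val]

eval :: Env -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam t)   = VLam (\v -> eval (v : env) t)
eval env (App t u) = case eval env t of
  VLam f -> f (eval env u)
  v      -> VApp v (eval env u)

-- Read a value back into a normal-form term, turning levels into indices.
quote :: Int -> Val -> Tm
quote l (VLam f)   = Lam (quote (l + 1) (f (VVar l)))
quote l (VVar x)   = Var (l - x - 1)
quote l (VApp t u) = App (quote l t) (quote l u)

nf :: Tm -> Tm
nf = quote 0 . eval []

-- e.g.  nf (App (Lam (Var 0)) (Lam (Var 0)))  ==  Lam (Var 0)
```

The benchmark compares evaluators in roughly this style across languages and runtime systems, with particular attention to machine-code-compiled ones.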
Elaboration zoo
This is intended as a pedagogical demonstration of best-practice implementations of various elaboration features in dependently typed settings. It’s far from complete, and pull requests are welcome.
Inference and unification
I’ve done some research on inference for first-class polymorphism. This is related to what the older ML/GHC literature calls “impredicative” type inference (the sources below explain why “impredicative” is a misnomer). I published the following paper in 2020:
However, shortly after it was published, I found a solution that is much more compelling in practice:
My current stance is that the older approach is overkill: it could in principle be scaled to more advanced inference, but in practice barely anyone would notice the difference, and the new approach is much simpler. As far as I’m aware, “dynamic order elaboration” is the best practical solution for first-class polymorphism.
I have also developed an extension to Miller’s pattern unification algorithm. I haven’t published this formally yet; see this Stack Exchange answer for an explanation and links to further material.
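For background, in the classic pattern fragment a constraint `?m x₁ … xₙ ≟ rhs` is solvable when the `xᵢ` are distinct bound variables: the spine is inverted into a renaming, which is then applied to `rhs`, failing if `rhs` mentions a variable outside the spine or the meta being solved. Below is a heavily simplified, syntax-level sketch of this classic fragment (my own code, not the extension; it assumes terms are already in normal form and ignores pruning and eta):

```haskell
import qualified Data.IntMap.Strict as IM

-- Terms use de Bruijn levels; a metavariable is always applied to a
-- spine of variables (the flex-rigid, normal-form case).
data Tm
  = Var Int           -- de Bruijn level
  | Meta Int [Int]    -- metavariable applied to a spine of variables
  | Lam Tm
  | App Tm Tm
  deriving Show

-- Invert a spine of distinct variables into a partial renaming from
-- the constraint's scope into the meta's new lambda binders.
invert :: [Int] -> Maybe (IM.IntMap Int)
invert sp = go IM.empty (zip sp [0 ..])
  where
    go acc []            = Just acc
    go acc ((x, i) : xs)
      | IM.member x acc  = Nothing        -- non-linear spine: not a pattern
      | otherwise        = go (IM.insert x i acc) xs

-- Apply the renaming to the right-hand side. Fails on variables not in
-- the spine (scope error) and on occurrences of the meta being solved
-- (occurs check). `dom`/`cod` are the current sizes of the rhs scope
-- and of the solution's scope, needed when going under binders.
rename :: Int -> Int -> Int -> IM.IntMap Int -> Tm -> Maybe Tm
rename m = go
  where
    go dom cod r t = case t of
      Var x         -> Var <$> IM.lookup x r
      Meta m' sp
        | m' == m   -> Nothing
        | otherwise -> Meta m' <$> traverse (`IM.lookup` r) sp
      Lam b         -> Lam <$> go (dom + 1) (cod + 1) (IM.insert dom cod r) b
      App t' u      -> App <$> go dom cod r t' <*> go dom cod r u

-- Solve  ?m sp ≟ rhs  (the constraint lives in a scope of size `dom`):
-- the solution abstracts over one lambda per spine entry.
solve :: Int -> Int -> [Int] -> Tm -> Maybe Tm
solve dom m sp rhs = do
  ren  <- invert sp
  body <- rename m dom (length sp) ren rhs
  pure (iterate Lam body !! length sp)
```

Real implementations run this on values produced by evaluation and add pruning; the sketch only shows the inversion step.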
Universe features
This is a small but fun piece which shows that fancy predicative universe features (polymorphism, first-class levels, transfinite levels, level bounds) can be modeled with induction-recursion:
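For flavor, the basic shape of an inductive-recursive universe: a type of codes U is defined inductively, simultaneously with a decoding function El defined by recursion on the codes (a generic sketch, not the construction from the note linked above):

```latex
% A Tarski-style universe given by induction-recursion: U is inductive,
% El is defined by recursion on U, and the two refer to each other.
\begin{aligned}
&\mathsf{U} : \mathsf{Set}
  &&\qquad \mathsf{El} : \mathsf{U} \to \mathsf{Set} \\
&\mathsf{nat} : \mathsf{U}
  &&\qquad \mathsf{El}\;\mathsf{nat} = \mathbb{N} \\
&\pi : (a : \mathsf{U}) \to (\mathsf{El}\;a \to \mathsf{U}) \to \mathsf{U}
  &&\qquad \mathsf{El}\;(\pi\;a\;b) = (x : \mathsf{El}\;a) \to \mathsf{El}\;(b\;x)
\end{aligned}
```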