`opencompl/lean-mlir`.

While working on the project, I did a bunch of useful things:
- I helped write the Lean metaprogramming book, an open-source book on using Lean's powerful metaprogramming facilities.
- I read through Mario Carneiro's thesis, Models of dependently typed programming languages, which explains Lean's metatheory and gives a proof of Lean's consistency. I quite liked this thesis, since it provides a very readable set-theoretic semantics for Lean!

- The CMake manual, which actually makes CMake sensible. I quite like CMake now, since I actually understand its semantics.
- Linkers and Loaders, to learn a ton of the arcane details of how precisely loading works.
- The GNU binutils, which were a lifesaver in debugging weird linker visibility issues.

- I was reading Categorical Logic by Bart Jacobs, which describes how to reason about type theory using the machinery of fibered categories.
- I attended the Adjoint School, a summer school for applied category theorists, which was a blast. I learnt a lot from it, and read a bunch of papers through it!
- I loved the paper Opinion Dynamics on Discourse Sheaves, which describes how to set up a 'discourse sheaf': a sheaf structure on a graph which models private versus public opinions. The punchline is that harmonic functions lead to 'harmonious' discourse amongst participants in the model!
- Ohad pointed me to Probabilistic models via quasi-Borel spaces, which builds quasi-Borel spaces, a cartesian closed category in which one can interpret the simply typed lambda calculus. This gives a nice denotational footing to probabilistic programming.
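The harmonic-function punchline can be illustrated with a toy sketch. This is not the paper's sheaf Laplacian, just a plain graph version, and the graph and opinion values here are made up for illustration: on a path graph, each agent repeatedly averages their neighbours' opinions, and the fixed point is a discrete harmonic function.

```python
# Toy 'harmonic = harmonious' dynamics on a path graph: each interior
# agent replaces their opinion with the average of their neighbours'.
# The endpoints are stubborn agents whose opinions never change.
def relax(opinions, sweeps=10000):
    xs = list(opinions)
    for _ in range(sweeps):
        for i in range(1, len(xs) - 1):
            xs[i] = (xs[i - 1] + xs[i + 1]) / 2
    return xs

# Stubborn agents at the ends hold 0.0 and 1.0; everyone in between
# settles into the linear interpolation [0.0, 0.25, 0.5, 0.75, 1.0].
print(relax([0.0, 0.5, 0.9, 0.2, 1.0]))
```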

- The Compactness theorem, which reflects the finitary nature of the proof system we use, and how this finiteness impacts the logic itself.
- The Löwenheim-Skolem theorem, which shows that first-order logic cannot control the size of its infinite models.
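Stated compactly (these are the standard formulations, my paraphrase):

```latex
% Compactness: satisfiability is detected by finite fragments, since a
% derivation of a contradiction can use only finitely many sentences.
\Sigma \text{ has a model} \iff
  \text{every finite } \Sigma_0 \subseteq \Sigma \text{ has a model.}

% Löwenheim-Skolem: if a countable first-order theory has an infinite
% model, it has models of every infinite cardinality; in particular,
% first-order logic cannot pin down the cardinality of the reals.
```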

- I began by reading Jech's Axiom of Choice, which was far too terse to grok, so I switched to reading the lecture notes below.
- Independence of CH: an intuitive explanation was a readable account of the machinery of forcing! I wanted an account of forcing from the point of view of topoi, for which I started reading the next book.
- Sheaves in geometry and logic is a textbook on topos theory, which provides an account of forcing by building an object called the Cohen topos. I didn't manage to get through enough of the book to really understand what the chapter on the Cohen topos was doing, but I did get the vague idea: we build a topos, and then use the internal logic of the topos to mimic the model we are building. The machinery of topos theory lets us easily control this internal logic, thereby adding the axioms we want to ZF.

`frex`, which stands for "free extensions". The TL;DR is that they wish to study how to write down simplifiers / computer algebra systems in a principled fashion. Their paper Partially static data as free extensions of algebras is a super readable account of their ideas. I quite enjoyed re-implementing the basic version in Haskell. I wish to implement their more recent, more complex dependently-typed version of the theory in Lean.
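The 'partially static' idea can be sketched in a few lines. This is my own toy Python rendering of the free-monoid case, not the paper's OCaml/Idris code, and the names `concat`, `s`, `d` are made up: a term of the free extension is a normal form interleaving known (static) chunks with unknown (dynamic) variables, and smart concatenation fuses adjacent static chunks at staging time.

```python
# Partially static strings over the free monoid: a term is a list of
# ("s", literal) and ("d", varname) segments. Concatenation normalizes
# by fusing adjacent static segments, so static work happens only once.
def concat(a, b):
    out = list(a)
    for seg in b:
        if out and out[-1][0] == "s" and seg[0] == "s":
            out[-1] = ("s", out[-1][1] + seg[1])  # fuse static chunks
        else:
            out.append(seg)
    return out

s = lambda lit: [("s", lit)]   # embed a static (known) string
d = lambda name: [("d", name)]  # embed a dynamic (unknown) variable

# ("ab" ++ x) ++ ("c" ++ "d") normalizes with "c" and "d" fused:
t = concat(concat(s("ab"), d("x")), concat(s("c"), s("d")))
print(t)  # [('s', 'ab'), ('d', 'x'), ('s', 'cd')]
```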
- On Universes in Type Theory describes the difference between Russell-style and Tarski-style universes.
- Case trees are a data structure used in Coq and Idris to manage dependent pattern matching.
- Primitive recursors in Lean.
- Congruence closure in intensional type theories describes how to extend the naive congruence closure algorithm in the presence of definitional equalities between types.
- The difference between match+fix (as introduced by Thierry Coquand in 'Pattern Matching with Dependent Types'), `termination_by`, and primitive recursion.
- The idea of full abstraction, which asks when operational and denotational semantics agree, first studied by Gordon Plotkin for PCF.
- Andrej Bauer's notes on realizability, which very cleanly describe the theory of realizability models, where one studies mathematical objects equipped with computational structure. This naturally segues into discussions of models of computation and so forth.
- I am not a number, I am a free variable describes "locally nameless", a technique to manage names when implementing proof assistants.
- Higher-order unification is necessary when implementing the elaborator for a proof assistant. Unfortunately, the full problem is undecidable.
- Luckily for us, Miller found a fragment called 'pattern unification', where unification is indeed decidable. The key idea is a 'linearity' constraint: a metavariable may only be applied to distinct bound variables, which makes even higher-order patterns decidable.
- The Beluga Proof Assistant is a proof assistant whose logic allows one to reify contexts. This means that one can write shallow embeddings of programming languages, have all the nice power of the proof assistant, while still reasoning about binders! I found the way in which Beluga makes such invasive changes to the metatheory in order to allow reasoning about binders to be very enlightening.
- The HOL Light proof assistant and Isabelle/HOL are both based on higher-order logic, an alternative foundation for proof assistants. I feel it's important for me to know the various ideas folks have tried in building proof assistants, and I was glad to have been pointed to Isabelle and HOL. I want to spend some time this year (2023) learning Isabelle and HOL well enough that I can prove something like strong normalization of STLC in them.
- Fitch style modal logics are a systematic way to build type theories with modalities in them. These are typically used to create type theories that can reason about resources, such as concurrent access or security properties. The paper provides a unified account of how to build such type theories, and how to prove the usual slew of results about them.
- Minimal implementation of separation logic: Separation Logic for Sequential Programs explains how to write a small separation logic framework embedded in a dependently typed programming language. I translated the original from Coq to Lean, and the whole thing clocks in at around 2000 LoC, which is not too shabby for bootstrapping a full 21st-century theory of reasoning about parallelism!
- Telescopic Mappings in Typed Lambda Calculus builds the theory of telescopes, the basic notation used when describing binders in dependent type theory. I had no idea that this had to be developed; I shudder to think how ugly notation was before this paper! I can't help but feel that this paper did for dependent type theory what the Einstein summation convention did for tensor calculus: provide compact notation for the uninteresting bits, to let us talk about the interesting bits well.
- When talking to Leonardo, I learnt that the hardest part of implementing a homotopical theorem prover is the elaboration of pattern matching. Pattern matching without K explains how to do this, and also made clear to me at which step UIP is used during pattern matching: when refining the indices of a type.
- The garden of forking paths describes how to use clairvoyant call-by-value to reason about lazy programs in a convenient fashion.
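Miller's fragment is small enough to sketch. Below is a toy solver using my own simplified term representation (nothing like Lean's actual elaborator): a constraint `?M x1 ... xn = t`, where the `xi` are distinct bound variables, is solved by `M := fun x1 ... xn => t`, provided every free variable of `t` is among the `xi` and `M` itself does not occur in `t`.

```python
# Terms: ("var", name), ("meta", name), ("app", f, arg), ("lam", name, body)
def free_vars(t):
    tag = t[0]
    if tag == "var":
        return {t[1]}
    if tag == "meta":
        return set()
    if tag == "app":
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}  # lam binds its variable

def occurs(m, t):
    tag = t[0]
    if tag == "meta":
        return t[1] == m
    if tag in ("app", "lam"):
        return any(occurs(m, s) for s in t[1:] if isinstance(s, tuple))
    return False

def solve_pattern(meta, args, rhs):
    """Solve ?meta args = rhs; return the lambda for meta, or None."""
    if len(set(args)) != len(args):      # Miller's linearity condition
        return None
    if not free_vars(rhs) <= set(args):  # rhs must not mention other vars
        return None
    if occurs(meta, rhs):                # occurs check
        return None
    sol = rhs
    for x in reversed(args):             # wrap rhs in lambdas for each arg
        sol = ("lam", x, sol)
    return sol

# ?M x y = (x y) has the unique solution M := fun x => fun y => x y:
print(solve_pattern("M", ["x", "y"], ("app", ("var", "x"), ("var", "y"))))
# ?M x x is outside the fragment (repeated variable), so no solution:
print(solve_pattern("M", ["x", "x"], ("var", "x")))
```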

- A practical model for computing with matrix groups describes algorithms that form the crux of computational matrix group theory.
- A data structure for a uniform approach to computations with finite groups provides a data structure that unifies algorithms for the two ways of representing groups computationally: (1) as subgroups of the symmetric group, given by a strong generating set, and (2) as matrices. These two approaches are radically different under the hood, but the paper provides a unified API to deal with both.
- Computing in the monster describes how to perform computations in the monster group, a group that's so large that naively trying to write down elements would take two gigabytes of memory.
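The most basic building block underneath strong generating sets is the orbit computation, which is small enough to sketch. This is my own toy version (permutations as Python tuples mapping `i` to `p[i]`), not code from the papers, and it stops well short of the full Schreier-Sims machinery:

```python
# Compute the orbit of a point under a set of permutation generators:
# the first step of stabilizer-chain / strong-generating-set methods.
def orbit(point, gens):
    seen = {point}
    frontier = [point]
    while frontier:
        x = frontier.pop()
        for g in gens:
            y = g[x]                # apply the permutation g to x
            if y not in seen:
                seen.add(y)
                frontier.append(y)
    return seen

# Generators of a subgroup of S_6: a 4-cycle on {0,1,2,3} and the
# transposition (4 5). The orbit of 0 is {0,1,2,3}; that of 4 is {4,5}.
four_cycle = (1, 2, 3, 0, 4, 5)
swap45 = (0, 1, 2, 3, 5, 4)
print(orbit(0, [four_cycle, swap45]))
```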

- The DPLL algorithm for solving SAT.
- The CDCL strategy for refining SAT search with clause learning.
- The Nelson-Oppen algorithm for combining convex theories.
- The first-order resolution rule, which exploits refutation-completeness to build a first-order theorem prover.
- The superposition calculus for fast equational theorem proving, based on an extension of resolution.
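DPLL itself fits in a few lines. A minimal sketch, assuming clauses are represented as sets of integer literals (negative for negation), with unit propagation plus naive splitting and none of the modern machinery (watched literals, clause learning):

```python
# DPLL: unit-propagate to a fixed point, then split on a literal.
def dpll(clauses):
    clauses = [set(c) for c in clauses]
    while True:
        # Unit propagation: a one-literal clause forces that literal.
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        new = []
        for c in clauses:
            if unit in c:
                continue             # clause satisfied, drop it
            if -unit in c:
                c = c - {-unit}      # literal falsified, shrink clause
                if not c:
                    return False     # empty clause: conflict
            new.append(c)
        clauses = new
    if not clauses:
        return True                  # every clause satisfied
    lit = next(iter(clauses[0]))     # split: try lit, then its negation
    return dpll(clauses + [{lit}]) or dpll(clauses + [{-lit}])

# (x1 or x2) and (not x1 or x2) and (not x2 or x3) is satisfiable:
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))   # True
# x1 and (not x1) is not:
print(dpll([{1}, {-1}]))                  # False
```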

- I mostly learnt Lisp by reading Practical Common Lisp and hanging out on `##lisp` on Libera IRC.
- I absolutely *loved* the REPL-driven development enabled by `emacs` + `slime`; this has definitely set a gold standard for how programming language interaction ought to feel.
- I was floored by some of the Common Lisp projects I saw, such as cepl, the code-eval-play loop for rapid shader development! The author's videos are super cool, and I highly recommend them to anyone who wants to get a flavour of LISP.
- I'd like to work through Let over Lambda , a book that explains all sorts of macro shenanigans!

- All Clear by Connie Willis paints a great picture of the Blitz during WW2. It felt surreal, even visceral, to have visited London and Edinburgh and Glasgow and all the other places that are name-dropped in the book.
- The Annihilation series, which has the creepiest vibes of any book I've ever read.
- Accelerando , which had a really neat take on a resolution of the Fermi Paradox, and just an overall fun tone.
- The Book of All Skies by Greg Egan, which as usual explores a neat universe, this time with some kind of monodromy.
- An honorary mention to Perfect State by Brandon Sanderson, a cute novella with a really neat twist at the end that I didn't see coming.

`:)`. With that said, I'd like to set concrete goals for 2023:
- Learn enough QFT to know what the hell renormalization is.
- Learn enough QED to be able to explain what a Feynman path integral is.
- Get good enough at juggling to be able to juggle three balls consistently.
- Write my own proof kernel for Lean.
- Implement and write the paper about elaboration of mutual inductives for Lean, and start thinking about coinductives.
- Continue working on Lean's LLVM backend, and make Lean the fastest functional programming language on the block.
- Learn to cook a proper three course meal consistently.
- Get good enough at djembe to play kuku consistently.
- Get good enough at the guitar to strum Snow and Can't Stop well.
- Build a routine for shuffle dancing, so that I can dance consistently to a song.
- Learn to rock climb well enough that I can do V4's consistently.