---
title: The Amulet Programming Language
date: January 18, 2018
---

As you might have noticed, I like designing and implementing
programming languages. This is another of those projects. Amulet is a
strictly evaluated, statically typed, impure, roughly functional
programming language with support for parametric data types and rank-1
polymorphism _à la_ Hindley-Milner (but [no
let-generalization](#letgen)), along with row-polymorphic records.
While syntactically inspired by the ML family, it's a disservice to
those languages to group Amulet with them, mostly because of the
(present) lack of modules.

Planned features (that I haven't even started working on, as of
writing this post) include generalized algebraic data types, modules
and modular implicits, a reworked type inference engine based on
_OutsideIn(X)_[^4] to support the other features, and, perhaps most
importantly, a back-end that's not a placeholder (i.e. something that
generates either C or LLVM and can be compiled to a standalone
executable).

The compiler is still very much a work in progress, and is actively
being improved in several ways: rewriting the parser to address
efficiency concerns (see [Lexing and Parsing](#parser)), improving the
quality of generated code by introducing more intermediate
representations, and introducing several optimisations on the one
intermediate language we _do_ have.

## The Technical Bits

In this section, I'm going to describe the implementation of the
compiler as it exists at the time of writing - warts and all.
Unfortunately, we have a bit too much code for all of it to fit in
this blag post, so I'm only going to include the horribly broken bits
here, and leave the rest out. Of course, the compiler is open source,
and is available on my [GitHub][2].

### Lexing and Parsing {#parser}

To call what we have a _lexer_ is a bit of an overstatement: the
`Parser.Lexer` module, which underpins the actual parser, contains
only a handful of imports and some definitions for use with
[Parsec's][3] [`Text.Parsec.Token`][4] module; everything else is
boilerplate, namely, declaring, at the top level, the functions
generated by `makeTokenParser`.

Our parser is then built on top of this infrastructure (and the other
combinators provided by Parsec) in a monadic style. Despite our having
chosen to use strict `Text`s, many of the Parsec combinators return
`Char`s, and using the `Alternative`{.haskell} type class's ability to
repeat actions makes linked lists of these - the dreaded `String`
type. Due to this, and other inefficiencies, the parser is
ridiculously bad at memory management.

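For instance (a representative fragment, not taken from the compiler;
`Parser` is Parsec's strict-`Text` alias), repeating a
`Char`-returning combinator yields a `String`:

```haskell
import Text.Parsec (many1)
import Text.Parsec.Char (letter)
import Text.Parsec.Text (Parser)

-- many1 repeats the Char parser via the usual Applicative machinery,
-- so the result is a String - a linked list of Chars - even though
-- the input stream is a strict Text.
identifier :: Parser String
identifier = many1 letter
```
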
However, it does have some cute hacks. For example, the pattern parser
has to account for being used in the parsing of both `match`{.ml} and
`fun`{.ml} - in the former, destructuring patterns may appear without
parentheses, but in the latter, they _must_ be properly parenthesised:
since `fun`{.ml} may have multiple patterns, it would be ambiguous
whether `fun Foo x -> ...`{.ml} destructures a `Foo` or takes two
arguments.

Instead of duplicating the pattern parser - one for `match`{.ml}es and
one for function arguments - we _parametrised_ the parser over whether
or not it needs parentheses by adding a rank-2 polymorphic
continuation argument.

```haskell
patternP :: (forall a. Parser a -> Parser a) -> Parser Pattern'
patternP cont = wildcard <|> {- some bits omitted -} try destructure where
  destructure = withPos . cont $ do
    ps <- constrName
    Destructure ps <$> optionMaybe (patternP id)
```

When we're parsing a pattern `match`{.ml}-style, the continuation
given is `id`, and when we're parsing an argument, the continuation is
`parens`.

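Hypothetically, the two call sites would then differ only in the
continuation they pass (these two definitions are illustrative, not
the parser's actual code):

```haskell
-- In a match arm, destructuring patterns may appear bare...
matchPattern :: Parser Pattern'
matchPattern = patternP id

-- ...while as a fun argument they must be parenthesised.
argPattern :: Parser Pattern'
argPattern = patternP parens
```
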
Because of the aforementioned efficiency concerns, however, we've
decided to scrap the Parsec-based parser and move to an
Alex/Happy-based solution, which is not only going to be more
maintainable and more easily hackable in the future, but will also be
more efficient overall. Of course, for a toy compiler such as this
one, efficiency doesn't matter that much, but using _one and a half
gigabytes_ to compile a 20-line file is really bad.

### Renaming {#renamer}

To simplify scope handling in both the type checker and optimiser,
after parsing, each variable is tagged with a globally unique integer
that is enough to compare variables. This also lets us use more
efficient data structures later in the compiler, such as `VarSet`,
which stores only the integer identifier of a variable in a big-endian
Patricia tree[^1].

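A minimal sketch of what such a set might look like (assuming, as the
footnote says, that `Data.IntSet`{.haskell} does the heavy lifting;
the compiler's actual definitions may well differ):

```haskell
import qualified Data.IntSet as IntSet

-- A VarSet stores only each variable's unique integer, so membership
-- tests are cheap lookups in a big-endian Patricia tree.
newtype VarSet = VarSet IntSet.IntSet

insert :: Int -> VarSet -> VarSet
insert v (VarSet s) = VarSet (IntSet.insert v s)

member :: Int -> VarSet -> Bool
member v (VarSet s) = IntSet.member v s
```
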
Our approach, described in _[Secrets of the Glasgow Haskell Compiler
inliner][5]_ as "the Sledgehammer", consists of duplicating _every_
bound variable to avoid name-capture problems. However, while the
first of the disadvantages listed there surely does apply, by doing
all of the _renaming_ in one go, we mostly avoid the second. Of
course, since then, the Haskell ecosystem has evolved significantly,
and the plumbing required is a lot less intrusive.

In our compiler, we use MTL-style classes instead of concrete monad
transformer stacks. We also run every phase after parsing in a single
`GenT`{.haskell} monad, which provides a fresh supply of integers for
names. "Plumbing" the fresh name supply, then, only involves adding a
`MonadGen Int m`{.haskell} constraint to the context of functions that
need it.

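For instance, a pass that needs fresh integers only has to state the
constraint (a hypothetical illustration, assuming the `monad-gen`
package's `Control.Monad.Gen`{.haskell} interface):

```haskell
import Control.Monad.Gen (MonadGen, gen)

-- Any phase that needs a fresh name just asks for the next integer;
-- the single GenT runner at the top of the pipeline supplies them.
freshId :: MonadGen Int m => m Int
freshId = gen
```
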
Since the string component of parsed names is not thrown away, we also
have to make up strings themselves. This is where another cute hack
comes in: we generate, lazily, an infinite stream of names that goes
`["a" .. "z", "aa" .. "az", "ba" .. "bz", ..]`, then use the
`MonadGen`{.haskell} counter as an index into that stream.

```haskell
import Control.Monad (replicateM)
import Data.Text (Text)
import qualified Data.Text as T

alpha :: [Text]
alpha = map T.pack $ [1..] >>= flip replicateM ['a'..'z']
```

### Desugaring

The desugarer is a very simple piece of code which, through use of
_Scrap Your Boilerplate_-style generic programming, traverses the
syntax tree and rewrites nodes representing syntax sugar to their more
explicit versions.

Currently, the desugarer only expands _sections_: that is, expressions
of the form `(+ e)` become `fun x -> x + e` (where `x` is a fresh
name), expressions like `(e +)` become `fun x -> e + x`, and
expressions like `.foo` become `fun x -> x.foo`.

This is the only component of the compiler that I can reasonably
include, in its entirety, in this post.

```haskell
-- 'fresh' invents a new variable (drawn from the name stream
-- described above), returning it both as a pattern and as a
-- reference expression; 'an' is the source-span annotation.
desugarProgram = everywhereM (mkM defaults) where
  defaults :: Expr Parsed -> m (Expr Parsed)
  defaults (BothSection op an) = do
    (ap, ar) <- fresh an
    (bp, br) <- fresh an
    pure (Fun ap (Fun bp (BinOp ar op br an) an) an)
  defaults (LeftSection op vl an) = do
    (cap, ref) <- fresh an
    pure (Fun cap (BinOp ref op vl an) an)
  defaults (RightSection op vl an) = do
    (cap, ref) <- fresh an
    pure (Fun cap (BinOp vl op ref an) an)
  defaults (AccessSection key an) = do
    (cap, ref) <- fresh an
    pure (Fun cap (Access ref key an) an)
  defaults x = pure x
```

### Type Checking

By far the most complicated stage of the compiler pipeline, our
inference algorithm is modelled after Algorithm W (extended with kinds
and kind inference), with constraint generation and solving being two
separate steps.

We first traverse the syntax tree, in order, making up constraints and
fresh type variables as needed, then invoke a unification algorithm to
produce a substitution, then apply that over both the generated type
(a skeleton of the actual result) and the syntax tree (which is
explicitly annotated with types everywhere).

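Schematically (with made-up names - the checker's real definitions are
more involved), the split looks like this:

```haskell
-- Inference only *emits* equality constraints; solving then folds
-- the unifier (an excerpt of which appears below) over them.
data Constraint = Equal (Type Typed) (Type Typed)

solve :: [Constraint] -> SolveM ()
solve = mapM_ (\(Equal a b) -> unify a b)
```
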
The type inference code also generates and inserts explicit type
applications when instancing polymorphic types, since we internally
lower Amulet into a System F core language with explicit type
abstraction and application. We have `TypeApp` nodes in the syntax
tree that never get parsed or renamed, and are generated by the type
checker before lowering happens.

Our constraint solver is quite rudimentary, but it does the job
nicely. We operate in a `State`{.haskell} monad carrying the current
substitution. When we unify a variable with another type, the pair is
added to the current substitution. Everything else is just zipping the
types together. When we try to unify, say, a function type with a
constructor, that's an error. If a variable that has already been
added to the current substitution is encountered again, the new type
is unified with the previously recorded one.

```haskell
unify :: Type Typed -> Type Typed -> SolveM ()
-- 'bind' records the variable/type pair in the current substitution
unify (TyVar a) b = bind a b
unify a (TyVar b) = bind b a
unify (TyArr a b) (TyArr a' b') = unify a a' *> unify b b'
unify (TyApp a b) (TyApp a' b') = unify a a' *> unify b b'
unify ta@(TyCon a) tb@(TyCon b)
  | a == b = pure ()
  | otherwise = throwError (NotEqual ta tb)
```

This is only an excerpt, because we have very complicated types.

#### Polymorphic Records

One of Amulet's selling points (if one could call it that) is its
support for row-polymorphic records. We have two kinds of first-class
record types: _closed_ record types (the type of literals) and _open_
record types (the type inferred by record patterns and field getters).
Open record types have the shape `{ 'p | x_1 : t_1 ... x_n : t_n }`{.ml},
while closed records lack the row type variable `'p`{.ml}.

Unification of records has three cases, and in all three it is checked
that the fields present in both records have unifiable types.

- When unifying an open record with a closed one, the open record's
  row variable is additionally instantiated to contain the closed
  record's extra fields (sketched below).
- When unifying two closed records, they must have exactly the same
  shape and unifiable types for common fields.
- When unifying two open record types, a fresh type variable is
  created to use as the "hole", and the fields are tacked together.

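Here is a hedged sketch of the first case; every name specific to it
is made up for illustration (`MissingField`, `closedRecord`), while
`unify`, `bind` and `throwError` are as in the solver excerpt above:

```haskell
import Data.Foldable (for_)

-- Unify an open row (hole + fields) against a closed one.
unifyOpenClosed :: Var Typed            -- the open row's hole, 'p
                -> [(Text, Type Typed)] -- the open row's fields
                -> [(Text, Type Typed)] -- the closed row's fields
                -> SolveM ()
unifyOpenClosed hole open closed = do
  -- fields present in both rows must have unifiable types
  for_ open $ \(name, ty) ->
    case lookup name closed of
      Just ty' -> unify ty ty'
      Nothing  -> throwError (MissingField name)
  -- the hole then swallows the closed row's extra fields
  bind hole (closedRecord [f | f@(n, _) <- closed, n `notElem` map fst open])
```
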
As an example, `{ x = 1 }` has type `{ x : int }`{.ml}, the function
`fun x -> x.foo` has type `{ 'p | foo : 'a } -> 'a`{.ml}, and
`(fun r -> r.x) { y = 2 }` is a type error[^2].

#### No Let Generalisation {#letgen}

Vytiniotis, Peyton Jones and Schrijvers argue[^5] that HM-style
`let`{.ml} generalisation interacts badly with complex type system
extensions such as GADTs and type families, and should therefore be
omitted from such systems. In a deviation from the paper, GHC 7.2
reintroduces `let`{.ml} generalisation for local definitions that meet
some criteria[^3].

> Here's the rule. With `-XMonoLocalBinds` (the default), a binding
> without a type signature is **generalised only if all its free variables
> are closed.**
>
> A binding is **closed** if and only if
>
> - It has a type signature, and the type signature has no free variables; or
> - It has no type signature, and all its free variables are closed, and it
>   is unaffected by the monomorphism restriction. And hence it is fully
>   generalised.

We, however, have chosen to follow that paper to a tee. Despite not
(yet!) having any of those fancy type system features that interact
poorly with let generalisation, we do not generalise _any_ local
bindings: for example, `let id = fun x -> x in (id 1, id "a")`{.ml} is
a type error, since the local `id`{.ml} stays monomorphic and cannot
be used at both `int`{.ml} and `string`{.ml}.

### Lowering

After type checking is done (and, conveniently, type applications have
been left in the correct places for us by the type checker), Amulet
code is converted into an explicitly-typed intermediate
representation, in direct style, which is used for (local) program
optimisation. The AST is simplified considerably: from 19 constructors
to 9.

Type inference is no longer needed: the representation of core is
packed with all the information we need to check that programs are
type-correct. This includes types in every binder (lambda
abstractions, `let`{.ml}s, pattern bindings in `match`{.ml}),
big-lambda abstractions around polymorphic values (a $\lambda$ binds a
value, while a $\Lambda$ binds a type), along with the already
mentioned type applications.

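A plausible shape for such a core language - these constructors are
illustrative guesses matching the description above, not the
compiler's real definitions - might be:

```haskell
-- Nine constructors, a type in every binder, and a big lambda
-- (TyLam) separate from the value lambda (Lam).
data Literal = LInt Integer | LStr Text | LUnit

data Term
  = Ref    (Var Typed)
  | Lam    (Var Typed) (Type Typed) Term  -- λ: binds a value
  | TyLam  (Var Typed) Kind Term          -- Λ: binds a type
  | App    Term Term
  | TyApp  Term (Type Typed)              -- explicit type application
  | Let    [(Var Typed, Type Typed, Term)] Term
  | Match  Term [(Pattern Typed, Term)]
  | Lit    Literal
  | Record [(Text, Term)]
```
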
Here, code also gets the error branches for non-exhaustive
`match`{.ml} expressions, and, as a general rule, gets a lot uglier.

```ocaml
let main _ = (fun r -> r.x) { x = 2 }

(* Is elaborated into *)

let main : ∀ 'e. 'e -> int =
  Λe : *. λk : 'e. match k {
    (p : 'e) : 'e -> (λl : { 'g | x : int }. match l {
      (r : { 'g | x : int }) : { 'g | x : int } -> match r {
        { (n : { 'g | x : int }) | x = (m : int) } : { 'g | x : int } -> m
      };
      (o : { 'g | x : int }) : { 'g | x : int } ->
        error @int "<test>[1:15 .. 1:27]"
    }) ({ {} | x : int = 2 });
    (q : 'e) : 'e -> error @int "<test>[1:14 .. 1:38]"
  }
```

### Optimisation

The code we initially get from lowering is ugly and inefficient.
Along with being full of the abstractions functional programs have by
nature, it is full of redundant matches, created by, e.g., the fact
that functions can not do pattern matching directly, and that field
access gets reduced to pattern matching. The optimiser's job is to
make it prettier, and more efficient.

The optimiser works by applying a series of local transformations, in
order, to individual sub-terms, and repeats the whole pipeline 25
times. The idea behind applying them several times is that, when a
simplification pass kicks in, more simplification opportunities might
arise for the other passes.

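The driver, then, need be nothing fancier than the following sketch
(illustrative, reusing the hypothetical `Term` from the previous
section):

```haskell
-- Run every local pass over the term, and iterate the whole pipeline
-- a fixed 25 times so that passes can expose work for one another.
optimise :: [Term -> Term] -> Term -> Term
optimise passes term = iterate runAll term !! 25
  where runAll t = foldl (\t' pass -> pass t') t passes
```
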
#### `dropBranches`, `foldExpr`, `dropUselessLets`

These trivial passes remove similarly trivial pieces of code that only
add noise to the program. `dropBranches` will do its best to remove
redundant arms from a `match`{.ml} expression, such as those that
appear after an irrefutable pattern. `foldExpr` reduces uses of
operators where both sides are known, e.g. `2 + 2` (replaced by the
literal `4`) or `"foo " ^ "bar"` (replaced by the literal `"foo
bar"`). `dropUselessLets` removes `let`{.ml}s that bind unused
variables whose right-hand sides are pure expressions.

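For example, the integer case of `foldExpr` might look something like
this (hypothetical constructors again, with `intPlus` standing in for
however the compiler recognises the primitive):

```haskell
-- Reduce applications of (+) where both operands are known integer
-- literals; leave everything else untouched.
foldExpr :: Term -> Term
foldExpr (App (App (Ref op) (Lit (LInt a))) (Lit (LInt b)))
  | op == intPlus = Lit (LInt (a + b))
foldExpr t = t
```
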
#### `trivialPropag`, `constrPropag`

The Amulet optimiser makes inlining decisions in two (well, three)
separate phases: one is called _propagation_, in which a `let`{.ml}
decides to propagate its bound values into the expression, and the
other is the more traditional _inlining_, where variables get their
values from the context.

Propagation is by far the easier of the two: the compiler can see both
the definitions and all of the use sites, and could in theory decide
whether propagating is beneficial or not. Right now, we propagate all
literals (and records made up solely of other trivial expressions),
and do a round of propagation that is best described as a rule.

```ocaml
let { v = C e } in ... v ...
(* becomes *)
let { v' = e } in ... C v' ...
```

This _constructor propagation_ allows the `match`{.ml} optimisations
to kick in more often, and is semantics-preserving.

#### `match`{.ml}-of-known-constructor

This pass identifies `match`{.ml} expressions where we can statically
determine the expression being analysed and, therefore, decide which
branch is going to be taken.

```ocaml
match C x with
| C e -> ... e ...
...
(* becomes *)
... x ...
```

#### `match`{.ml}-of-bottom

It is always safe to turn a `match`{.ml} whose scrutinee is a
diverging expression into just that diverging expression, which often
shrinks the code considerably.

```ocaml
match (error @int "message") with ...
(* becomes *)
error @int "message"
```

As a special case, when one of the arms is itself a diverging
expression, we use the type mentioned in that application to `error`
to fix up the type of the value being scrutinised.

```ocaml
match (error @foo "message") with
| _ -> error @bar "message 2"
...
(* becomes *)
error @bar "message"
```

#### `match`{.ml}-of-`match`{.ml}

This transformation turns `match`{.ml} expressions where the
expression being dissected is itself another `match`{.ml}
"inside-out": we push the branches of the _outer_ `match`{.ml} "into"
the _inner_ `match`{.ml} (what used to be the expression being
scrutinised). In doing so, sometimes, new opportunities for
match-of-known-constructor arise, and the code ends up simpler.

```ocaml
match (match x with
       | A -> B
       | C -> D) with
| B -> e
| D -> f
(* becomes *)
match x with
| A -> match B with
       | B -> e
       | D -> f
| C -> match D with
       | B -> e
       | D -> f
```

A clear area of improvement here is extracting the outer branches into
local `let`{.ml}-bound lambda abstractions to avoid an explosion in
code size.

#### `inlineVariable`, `betaReduce`

In this pass, a use of a variable is replaced with the definition of
that variable, if it meets the following conditions:

- The variable is bound to a lambda abstraction; and
- The lambda abstraction's body is not too _expensive_. Computing the
  cost of a term boils down to computing the depth of the tree
  representing that term, with some extra cost added for specific
  kinds of expression.

In doing this, however, we end up with pathological terms of the form
`(fun x -> e) y`{.ml}. The `betaReduce` pass turns this into `let x =
y in e`{.ml}. We generate `let`{.ml} bindings instead of substituting
the variable with the argument in order to maintain the evaluation
order and observable effects of the original code. Often, propagation
then kicks in and gives rise to new simplification opportunities.

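In the same hypothetical core as above, the rewrite itself is tiny:

```haskell
-- ((λx : t. e) y) becomes (let x : t = y in e), preserving the
-- evaluation order of the argument instead of substituting it.
betaReduce :: Term -> Term
betaReduce (App (Lam x ty body) arg) = Let [(x, ty, arg)] body
betaReduce t = t
```
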
## Epilogue

I was planning to write a section with a formalisation of the
language's semantics and type system, but it turns out I'm no
mathematician, no matter how hard I pretend. Maybe in the future.

Our code generator is wholly uninteresting, and, most of all, a
placeholder: this is why it is not described in detail (that is, at
all) in this post. I plan to write a follow-up when we actually finish
the native code generator.

As previously mentioned, the compiler _is_ open source: the code is
[here][2]. I recommend using the [Nix package manager][9] to acquire
the Haskell dependencies, but Cabal should work too. Current work on
rewriting the parser is happening in the `feature/alex-happy` branch.

[^1]: This sounds fancy, but in practice, it boils down to using
    `Data.IntSet`{.haskell} instead of `Data.Set`{.haskell}.

[^2]: As shown [here][6]. Yes, the error messages need improvement.

[^3]: As explained in [this blog post][8].

[^4]: Dimitrios Vytiniotis, Simon Peyton Jones, Tom Schrijvers, and
    Martin Sulzmann. 2011. [OutsideIn(X): Modular Type Inference With
    Local Assumptions][1]. _Note that, although the paper has been
    published in the Journal of Functional Programming, the version
    linked to here is a preprint._

[^5]: Dimitrios Vytiniotis, Simon Peyton Jones, and Tom Schrijvers.
    2010. [Let Should not be Generalised][7].

[1]: <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/jfp-outsidein.pdf>
[2]: <https://github.com/zardyh/amulet/tree/66a4143af32c3e261af51b74f975fc48c0155dc8>
[3]: <https://hackage.haskell.org/package/parsec-3.1.11>
[4]: <https://hackage.haskell.org/package/parsec-3.1.11/docs/Text-Parsec-Token.html>
[5]: <https://www.microsoft.com/en-us/research/wp-content/uploads/2002/07/inline.pdf>
[6]: </snip/sel.b0e94.txt>
[7]: <https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/tldi10-vytiniotis.pdf>
[8]: <https://ghc.haskell.org/trac/ghc/blog/LetGeneralisationInGhc7>
[9]: <https://nixos.org/nix/>