[isabelle-dev] Experimental support for arm64-linux

Florian Märkl isabelle-dev at florianmaerkl.de
Wed Nov 4 18:29:43 CET 2020


I am planning to write my Master's thesis at TUM in the coming summer
semester, and I wonder whether implementing this backend for Poly/ML
would be a fitting topic.

I recently bought myself a PineBook Pro, a small aarch64 laptop that is
perhaps slightly faster than a Raspberry Pi 4, and I have actually been
using it as my main laptop for a few months now.
I use it mostly for programming in C and Haskell, but I have also played
around with Isabelle a bit and can confirm that the only blocker there
seems to be that it is just unbearably slow, so I do have some personal
motivation to fix this.

Regarding GHC's LLVM backend, I can also confirm that it indeed works
very well; the main issue is that LLVM is generally quite big and
resource-hungry.
In fact, while compiling GHC itself and many other Haskell libraries
(which I had to do all by myself because Arch Linux ARM, the
distribution I use, does not provide any Haskell packages), I ran out of
memory multiple times, had to reduce concurrency, and eventually set up
a Raspberry Pi 4 with 8 GB of RAM and a lot of swap space as an external
build server. For reference, a full build of GHC takes about one night.
The blog post cited below [2] also mentions that it would still be
desirable to have a native ARM backend in GHC.

So I think compilation cost is an important aspect, especially
considering that Isabelle compiles ML code on the fly during interactive
use. However, I don't know how much ML code is actually compiled during
a typical Isabelle session, so I can't say how strongly this would
affect the experience in the end.

On the other hand, there are of course the obvious benefits of LLVM,
including its huge set of well-tested optimization passes. Even if these
optimizations do not fit ML as well as they do classic imperative
languages, in the worst case LLVM would degrade to a simple code
generator for many architectures, and language-specific optimizations
could still be done before translating to LLVM IR.
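To illustrate, even in that worst case the backend would only have to
emit target-independent IR. The following is a hand-written sketch of
what a hypothetical translation of the ML function
fun add (a, b) = a + b might look like, deliberately ignoring the
tagged/boxed value representation and GC interaction that a real
Poly/ML backend would have to handle:

```llvm
; Hypothetical IR for: fun add (a, b) = a + b
; (untagged machine integers; a real Poly/ML backend would be more involved)
define i64 @add(i64 %a, i64 %b) {
entry:
  %sum = add i64 %a, %b
  ret i64 %sum
}
```

The same IR can then be lowered by LLVM to amd64, arm64, or any other
supported target, which is exactly the portability argument made above.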

It would be nice if someone from Prof. Nipkow's or a related chair could
give me some feedback on whether this could fit a Master's thesis.


On 11/4/20 3:52 PM, Lukas Stevens wrote:
> Hi,
> I was revisiting this thread about the arm64 backend for Poly/ML.
> Unfortunately, I don't have any ideas about the funding, but I do have
> a technical remark. When implementing a new backend, I think one
> should consider using LLVM for the following reasons:
> - It supports amd64 and arm64 (and more).
> - It comes with useful tooling including a debugger and a profiler.
> On the other hand, I am not very well versed in the costs and
> benefits of using LLVM as a backend for a functional language. Haskell
> has an LLVM backend [1] that seems to work quite well [2], which
> confirms that it can work.
> Kind regards,
> Lukas
> [1]
> https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/backends/llvm
> [2]
> https://andreaspk.github.io/posts/2019-08-25-Opinion%20piece%20on%20GHC%20backends.html
> _______________________________________________
> isabelle-dev mailing list
> isabelle-dev at in.tum.de
> https://mailman46.in.tum.de/mailman/listinfo/isabelle-dev