I ended last time with a lisp that had the bare minimum of features and had reached an acceptable speed. Now it’s time to make Mumbler a more useful language with a couple of new features: arbitrary precision integers and—what no lisp should be without—tail call optimization.
I don’t want to undo all the work it took to make Mumbler fast, so I’m going to show how Truffle can help add these features while keeping the language fast.
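To give a sense of how a feature like arbitrary precision integers can stay fast under Truffle, here is a minimal sketch of the overflow-promotion pattern Truffle’s specialization DSL supports. The class and node names (`MumblerNode`, `AddNode`) are placeholders rather than Mumbler’s actual code, and a full implementation would also need a Truffle type system declaration so a node rewritten to `BigInteger` can still accept `long` values.

```java
import java.math.BigInteger;

import com.oracle.truffle.api.dsl.NodeChild;
import com.oracle.truffle.api.dsl.NodeChildren;
import com.oracle.truffle.api.dsl.Specialization;
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.Node;

// Placeholder base class for expression nodes in this sketch.
abstract class MumblerNode extends Node {
    public abstract Object execute(VirtualFrame frame);
}

// Addition node: Truffle's DSL generates the concrete subclass (AddNodeGen)
// that picks a specialization based on the runtime types it has seen.
@NodeChildren({@NodeChild("left"), @NodeChild("right")})
abstract class AddNode extends MumblerNode {

    // Fast path: stay on primitive longs. If the addition overflows,
    // Math.addExact throws ArithmeticException and Truffle rewrites this
    // node to use the BigInteger specialization from then on.
    @Specialization(rewriteOn = ArithmeticException.class)
    protected long addLong(long left, long right) {
        return Math.addExact(left, right);
    }

    // Slow path: arbitrary precision arithmetic.
    @Specialization
    protected BigInteger addBigInteger(BigInteger left, BigInteger right) {
        return left.add(right);
    }
}
```

The point is that the `long` fast path is what Graal compiles; the `BigInteger` specialization only shows up after an overflow has actually been observed at runtime.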
After the last post, we have a working interpreter in Truffle (yay!), but the results weren’t very exciting. Running our fibonacci benchmark with TruffleMumbler took a sluggish 6.3 seconds. Perhaps we can do better.
With help from a couple of Truffle veterans, I was able to speed up my interpreter. A couple of key improvements, plus warming up the VM, brought the execution time down to 0.1 seconds. A 63x speedup!
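For context on what “warming up the VM” means here: the benchmark is run a number of times first so Graal has a chance to compile the hot paths, and only then is a run timed. The harness below is just an illustration of that shape; the class name and the plain-Java fib workload are stand-ins, not the actual Mumbler benchmark.

```java
// Warm-up-then-measure harness, roughly the shape used to time the
// fibonacci benchmark. The workload here is plain Java; in the real
// benchmark it would be the Mumbler program running on the interpreter.
public final class FibBenchmark {

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {   // warm-up: give the JIT time to compile
            fib(30);
        }
        long start = System.nanoTime();
        long result = fib(30);           // timed run on warmed-up code
        double seconds = (System.nanoTime() - start) / 1e9;
        System.out.printf("fib(30) = %d in %.3f s%n", result, seconds);
    }

    private static long fib(long n) {
        return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }
}
```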
Let’s go through the changes I made to get such an improvement.
I was hoping to get this next installment out earlier. Sorry for the delay. Now, after Thanksgiving, a vacation, and a bout of the flu, I’m ready to get back into it.
Last time I created a simple interpreter for a lisp language I called Mumbler. This time around let’s actually use Truffle and Graal to run our interpreter. We’ll start off with the minimal amount of Truffle we need to get our interpreter to compile and run. If the bare-bones interpreter isn’t fast enough, we’ll investigate more Truffle hooks to speed things up.
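To give a rough idea of what that minimal amount of Truffle looks like, the sketch below wraps a single expression node in a `RootNode` and runs it through a Truffle call target. All names are placeholders, and details such as the `RootNode` constructor and how call targets are created have changed between Truffle versions, so treat this as the general shape rather than exact code.

```java
import com.oracle.truffle.api.CallTarget;
import com.oracle.truffle.api.Truffle;
import com.oracle.truffle.api.frame.VirtualFrame;
import com.oracle.truffle.api.nodes.Node;
import com.oracle.truffle.api.nodes.RootNode;

// Placeholder expression node: a literal that always evaluates to itself.
class LiteralNode extends Node {
    private final Object value;
    LiteralNode(Object value) { this.value = value; }
    Object execute(VirtualFrame frame) { return value; }
}

// The root node is Truffle's entry point into a function or program;
// here it simply evaluates its single body expression.
class ProgramRootNode extends RootNode {
    @Child private LiteralNode body;

    ProgramRootNode(LiteralNode body) {
        super(null);   // no TruffleLanguage here; constructor args vary by Truffle version
        this.body = body;
    }

    @Override
    public Object execute(VirtualFrame frame) {
        return body.execute(frame);
    }
}

public class MinimalTruffle {
    public static void main(String[] args) {
        CallTarget target = Truffle.getRuntime()
                .createCallTarget(new ProgramRootNode(new LiteralNode(42)));
        System.out.println(target.call());   // prints 42
    }
}
```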
How hard is it to write a simple, fast interpreter? Let's find out.
It’s fun writing little language interpreters in Python. You can get a fully functional interpreter in about an hour, but of course my toy interpreters are just that: toys. Writing a lisp interpreter on top of an already slow language like Python will not win any speed competitions. You may get away with writing small Domain Specific Languages (DSLs) as interpreters, but you can forget about any general-purpose programming language. The performance hit makes it untenable unless you write your interpreter in some lower-level language like C, and who wants to do that? If you want to target a higher-level virtual machine like the JVM, you’re left with writing a compiler that takes your code and produces JVM bytecode. How about writing a compiler that targets Javascript? Another not-so-fun alternative.
Thankfully, a new solution is here. You can write your interpreter on a VM that is designed to optimize it with all that wonderful JIT compilation magic. Oracle Labs has released its own VM that aims to make writing language interpreters both easy and fast, and it can also leverage the huge ecosystem of the Java Virtual Machine (JVM). This modified JVM contains a new Just-In-Time (JIT) compiler, called Graal, that can speed up interpreters like my little lisp to near-Java speeds. To take advantage of Graal’s JIT-y goodness you use the Truffle library to annotate your interpreter and give Graal hints about invariants and type information. In return for this integration effort you get significant speedups without having to resort to writing a bytecode compiler, plus you have the full power of Java at your disposal.
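As a small taste of what those hints look like, the snippet below uses two of them: `@CompilationFinal`, which lets Graal treat a field as a constant, and `CompilerDirectives.transferToInterpreterAndInvalidate()`, which marks the slow path that should never end up in compiled code. The surrounding class is invented for illustration and is not part of Mumbler.

```java
import com.oracle.truffle.api.CompilerDirectives;
import com.oracle.truffle.api.CompilerDirectives.CompilationFinal;

// A helper that caches a value the first time it is read. Once cached,
// Graal may treat the @CompilationFinal field as a constant and fold it
// straight into the compiled code.
final class CachedLookup {

    @CompilationFinal private Object cachedValue;
    private final java.util.function.Supplier<Object> slowLookup;

    CachedLookup(java.util.function.Supplier<Object> slowLookup) {
        this.slowLookup = slowLookup;
    }

    Object get() {
        if (cachedValue == null) {
            // Slow path: deoptimize, go back to the interpreter, and
            // invalidate any compiled code that assumed the old state.
            CompilerDirectives.transferToInterpreterAndInvalidate();
            cachedValue = slowLookup.get();
        }
        return cachedValue;
    }
}
```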