Over the past few months (years?) I've posted on my blog about the various toy interpreters I've written.
I've used a couple of scripting languages/engines in my professional career, but in public I think I've implemented:

- TCL
- FORTH (foth)
- BASIC
- Lisp (yal)
- Monkey
- evalfilter (basically the same as the bytecode VM of Monkey)
Each of these works in similar ways, and each of these filled a minor niche, or helped me learn something new. But of course there's always a question:
- Which is fastest?
In the real world? It just doesn't matter. For me. But I was curious, so I hacked up a simple benchmark: calculating 12! (i.e. the factorial of 12).
The specific timings will vary based on the system which runs the test(s), but there's no threading involved so the relative performance is probably comparable.
Anyway the benchmark is simple, and I did it "fairly": I didn't try to optimize any particular test-implementation, I just wrote each one in a way that felt natural.
The results? Evalfilter wins, because it compiles the program into bytecode, which can be executed pretty quickly. But I was actually shocked ("I wrote a benchmark; The results will blow your mind!") at the second and third results:
```
BenchmarkEvalFilterFactorial-4    61542     17458 ns/op
BenchmarkFothFactorial-4          44751     26275 ns/op
BenchmarkBASICFactorial-4         36735     32090 ns/op
BenchmarkMonkeyFactorial-4        14446     85061 ns/op
BenchmarkYALFactorial-4            2607    456757 ns/op
BenchmarkTCLFactorial-4             292   4085301 ns/op
```
Here we see that FOTH, my FORTH implementation, comes second. I guess this is an efficient interpreter too, because it too is essentially "bytecode": looking up words in a dictionary really maps them to indexes of other words, and the stack operations are reasonably simple and fast.
Number three? BASIC? To be honest I expected better from the other implementations. BASIC (in my implementation) doesn't even use an AST; it just walks tokens. I figured the TCO implemented by my lisp would make it number three.
Anyway, the numbers mean nothing. Really. But still interesting.