I'm working on an intermediate language and a virtual machine to run a functional language with a couple of "problematic" properties:
- Lexical namespaces (closures)
- Dynamically growing call stack
- A slow integer type (bignums)
The intermediate language is stack-based, with a simple hash table for the current namespace. Just so you get an idea of what it looks like, here's the McCarthy91 function:
```
# McCarthy 91: M(n) = n - 10 if n > 100 else M(M(n + 11))
.sub M
    args
    sto n
    rcl n
    float 100
    gt
    .if
    .sub
        rcl n
        float 10
        sub
    .end
    .sub
        rcl n
        float 11
        add
        list 1
        rcl M
        call-fast
        list 1
        rcl M
        tail
    .end
    call-fast
.end
```
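For reference, here is the same function written directly in Go, just to make the workload concrete (this is not part of the VM):

```go
// McCarthy 91, as defined in the comment above, for comparison
// with the bytecode listing.
func mc91(n int) int {
	if n > 100 {
		return n - 10
	}
	return mc91(mc91(n + 11))
}
```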
The "big loop" is straightforward:
- fetch an instruction
- increment the instruction pointer (or program counter)
- evaluate the instruction
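A minimal sketch of what such a loop can look like; the `Instr` layout, the opcode constants and all names here are placeholders for illustration, not the actual VM:

```go
// Hypothetical shape of a fetch/decode/execute loop.
type Instr struct {
	Op   int
	Data interface{}
}

const (
	opPush = iota
	opHalt
	// ... one constant per real instruction ...
)

func run(code []Instr) {
	var stack []interface{}
	ip := 0
	for ip < len(code) {
		inst := code[ip] // fetch an instruction
		ip++             // increment the instruction pointer
		switch inst.Op { // evaluate the instruction
		case opPush:
			stack = append(stack, inst.Data)
		case opHalt:
			return
		}
	}
}
```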
Along with `sto`, `rcl` and a whole lot more, there are three instructions for function calls:
- `call` copies the namespace (a deep copy) and pushes the instruction pointer onto the call stack
- `call-fast` is the same, but only creates a shallow copy
- `tail` is basically a 'goto'
The implementation is really straightforward. To give you a better idea, here's just a random snippet from the middle of the "big loop" (updated, see below):

```go
} else if inst == 2 /* STO */ {
	// Store the top of the stack in the local slot `data`. If the
	// next instruction is an RCL, leave the value on the stack and
	// skip that RCL; otherwise pop it.
	local[data] = stack[len(stack)-1]
	if code[ip+1][0] != 3 {
		stack = stack[:len(stack)-1]
	} else {
		ip++
	}
} else if inst == 3 /* RCL */ {
	// Push the value of a local variable.
	stack = append(stack, local[data])
} else if inst == 12 /* .END */ {
	// Leave the block: drop the innermost namespace and jump back
	// to the instruction pointer saved on the call stack.
	outer = outer[:len(outer)-1]
	ip = calls[len(calls)-1]
	calls = calls[:len(calls)-1]
} else if inst == 20 /* CALL */ {
	// Save the return address, copy the local namespace, then jump
	// to the code address found on top of the stack.
	calls = append(calls, ip)
	cp := make(Local, len(local))
	copy(cp, local)
	outer = append(outer, &cp)
	x := stack[len(stack)-1]
	stack = stack[:len(stack)-1]
	ip = x.(int)
} else if inst == 21 /* TAIL */ {
	// Like CALL, but without saving a return address: a plain jump.
	x := stack[len(stack)-1]
	stack = stack[:len(stack)-1]
	ip = x.(int)
}
```
The problem is this: Calling McCarthy91 16 times with a value of -10000 takes, near as makes no difference, 3 seconds (after optimizing away the deep-copy, which adds nearly a second).
My question is: What are some common techniques for optimizing interpretation of this kind of language? Is there any low-hanging fruit?
I used slices for my lists (arguments, the various stacks, a slice of maps for the namespaces, ...), so I do this sort of thing all over the place: `call_stack[:len(call_stack) - 1]`. Right now, I really don't have a clue which pieces of code make this program slow. Any tips will be appreciated, though I'm primarily looking for general optimization strategies.
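For illustration, here is that reslicing idiom wrapped in a tiny helper type; `Value`, `Stack` and the initial capacity of 4096 are all made up for this sketch and not taken from the actual code:

```go
// Value stands in for whatever the interpreter keeps on its stacks.
type Value interface{}

// Stack wraps the "stack = stack[:len(stack)-1]" idiom. Preallocating
// some capacity (4096 is an arbitrary guess) means append rarely has
// to reallocate in the hot loop; the slice still grows when needed.
type Stack struct {
	data []Value
}

func NewStack() *Stack {
	return &Stack{data: make([]Value, 0, 4096)}
}

func (s *Stack) Push(v Value) {
	s.data = append(s.data, v)
}

func (s *Stack) Pop() Value {
	v := s.data[len(s.data)-1]
	s.data = s.data[:len(s.data)-1]
	return v
}
```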
Aside:
I can reduce execution time quite a bit by circumventing my calling conventions. The `list <n>` instruction fetches n arguments off the stack and pushes a list of them back onto the stack; the `args` instruction pops off that list and pushes each item back onto the stack. This is firstly to check that functions are called with the correct number of arguments, and secondly to be able to call functions with variable argument lists (i.e. `(defun f x:xs)`). Removing that, and also adding an instruction `sto* <x>`, which replaces `sto <x>; rcl <x>`, I can get it down to 2 seconds. Still not brilliant, and I have to have this `list`/`args` business anyway. :)
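The handler for such a fused instruction can be tiny; the function and parameter names below are invented, but the effect matches the description (store the top of the stack without popping it):

```go
// stoStar implements "sto* <slot>": store the top of the stack into a
// local slot and leave the value on the stack, which is what the pair
// "sto <slot>; rcl <slot>" amounts to.
func stoStar(local []interface{}, slot int, stack []interface{}) {
	local[slot] = stack[len(stack)-1]
}
```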
Another aside (this is a long question I know, sorry):
Profiling the program with pprof told me very little (I'm new to Go, in case that's not obvious) :-). These are the top 3 items as reported by pprof:
```
16  6.1%   6.1%  16  6.1%  sweep               pkg/runtime/mgc0.c:745
 9  3.4%   9.5%   9  3.4%  fmt.(*fmt).fmt_qc   pkg/fmt/format.go:323
 4  1.5%  13.0%   4  1.5%  fmt.(*fmt).integer  pkg/fmt/format.go:248
```
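In case it helps others reading along, a CPU profile like the one above is typically collected with the standard `runtime/pprof` package; the file name here is arbitrary:

```go
package main

import (
	"os"
	"runtime/pprof"
)

func main() {
	// Write a CPU profile that can later be inspected with
	// "go tool pprof <binary> vm.prof".
	f, err := os.Create("vm.prof") // arbitrary file name
	if err != nil {
		panic(err)
	}
	defer f.Close()
	if err := pprof.StartCPUProfile(f); err != nil {
		panic(err)
	}
	defer pprof.StopCPUProfile()

	// ... run the interpreter here ...
}
```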
These are the changes I've made so far:
- I removed the hash table. Instead I'm now passing just pointers to arrays, and I copy the local scope (efficiently) only when I have to.
- I replaced the instruction names with integer opcodes. Before, I was wasting quite a bit of time comparing strings.
- The `call-fast` instruction is gone (the speedup wasn't measurable anymore after the other changes).
- Instead of having "int", "float" and "str" instructions, I just have one `eval`, and I evaluate the constants at compile time (compilation of the bytecode, that is). `eval` then just pushes a reference to them (see the sketch after this list).
- After changing the semantics of `.if`, I could get rid of these pseudo-functions. It's now `.if`, `.else` and `.endif`, with implicit gotos and block semantics similar to `.sub`. (some example code)
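As a rough sketch of the constant-folding change (the type and field names below are invented; the post doesn't show the compiler's data structures), the bytecode compiler can evaluate each literal once into a constant pool, and `eval` then only pushes a reference to it:

```go
// Invented names for illustration only.
type Const interface{} // a pre-evaluated int, float, bignum or string

type Program struct {
	consts []Const  // literals evaluated once, at bytecode-compile time
	code   [][2]int // [opcode, operand] pairs; the operand indexes consts for EVAL
}

const opEval = 10 // arbitrary opcode number

// evalStep is what the EVAL case in the big loop boils down to:
// push a reference to the already-evaluated constant.
func evalStep(p *Program, stack []interface{}, operand int) []interface{} {
	return append(stack, p.consts[operand])
}
```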
After implementing the lexer, parser, and bytecode compiler, the speed went down a little bit, but not terribly so. Calculating MC(-10000) 16 times makes it evaluate 4.2 million bytecode instructions in 1.2 seconds (roughly 3.5 million instructions per second). Here's a sample of the code it generates (from this).