Simulate the behaviour of the stack machine, mark where Symbol values
are created, and track where they are in the stack as we emit instructions.
This will allow us to be smarter about setting and getting locals.
However, that's not actually implemented yet! At the moment,
we're still generating the same code as before but in a fancier way.
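Roughly, the bookkeeping part looks like the sketch below. Symbol, Instruction
and Emitter are made-up names standing in for whatever the real backend uses;
only the idea of mirroring the VM stack as we emit instructions matters.

    // A minimal sketch of the bookkeeping. Symbol, Instruction and
    // Emitter are hypothetical names, not the real backend's types.
    use std::collections::HashMap;

    #[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
    struct Symbol(u32);

    #[derive(Debug)]
    enum Instruction {
        LocalGet(u32),
        LocalSet(u32),
        Call(u32),
    }

    struct Emitter {
        code: Vec<Instruction>,
        // Mirror of the VM stack at this point in the program:
        // Some(symbol) when we know which Symbol produced the slot.
        stack: Vec<Option<Symbol>>,
        // Which local each Symbol has been saved to, if any.
        locals: HashMap<Symbol, u32>,
    }

    impl Emitter {
        fn emit_local_set(&mut self, sym: Symbol, index: u32) {
            self.code.push(Instruction::LocalSet(index));
            self.locals.insert(sym, index);
            // local.set pops one value off the VM stack.
            self.stack.pop();
        }

        fn emit_local_get(&mut self, sym: Symbol) {
            let index = self.locals[&sym];
            self.code.push(Instruction::LocalGet(index));
            // local.get pushes the value back; record which Symbol it is.
            self.stack.push(Some(sym));
        }
    }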
What's happening:
Let's say we have a function call that takes two arguments, [A, B],
and that we are lucky and just happen to have a VM stack of [A, B].
That's great: we _should_ not have to do anything except emit 'Call'.
BUT what we are doing is "load A to the top" and then "load B to the top".
This sounds good, but it's actually stupid.
We are saying we want A at the top of the stack when we don't!!
We want it right where it is, just beneath the top!
So we are emitting code to bring it from 2nd position to the top.
How do we do that? By inserting a local.set instruction to save it,
and a local.get instruction to load it to the top!
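Concretely, for the two-argument call above, the emitted code looks something
like this (schematic Wasm text; $a, $b and $f are made-up names):

    ;; what we emit today: spill both operands to locals and reload
    ;; them, even though the stack is already [A, B]
    local.set $b    ;; pop B into a scratch local
    local.set $a    ;; pop A into a scratch local
    local.get $a    ;; "load A to the top"
    local.get $b    ;; "load B to the top"
    call $f

    ;; what we should emit: the stack is already [A, B], so just call
    call $f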
What should be happening:
Check the entire VM stack at once for all the symbols we are trying to load.
If they happen to be there (which is likely, given the IR structure), do nothing.
If it's not quite perfect... then emit lots of local.set and local.get. Whatever.
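A rough sketch of that check, reusing the hypothetical Emitter from the sketch
above; the fallback path is elided and assumes the operands were already
spilled to locals.

    impl Emitter {
        // Compare the operands the call wants against the top of the
        // simulated stack, and only shuffle through locals when they
        // are not already in place.
        fn prepare_arguments(&mut self, wanted: &[Symbol]) {
            let n = wanted.len();
            let in_place = self.stack.len() >= n
                && self.stack[self.stack.len() - n..]
                    .iter()
                    .zip(wanted)
                    .all(|(slot, sym)| *slot == Some(*sym));

            if in_place {
                // Already on the stack in call order: emit nothing.
                return;
            }

            // Fall back to the dumb path: reload every operand from its
            // local (assumes each Symbol was already saved with
            // local.set; details elided).
            for sym in wanted {
                self.emit_local_get(*sym);
            }
        }
    }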
What would be the way to solve this properly?
Build the Wasm instructions as a tree, then serialise it depth-first.
The tree encodes all of the information about what needs to be on the
stack at what point in the program. If a tree node has two children,
it consumes two items from the stack. If you do it this way, all
of the instructions get executed just at the right time.
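For example, a sketch of such a tree and its serialisation, again with
invented names (and reusing the Instruction type from the earlier sketch):

    // Each node's children are the values it consumes from the stack.
    enum Expr {
        LocalGet(u32),
        Call { function: u32, arguments: Vec<Expr> },
    }

    // Post-order (depth-first) walk: every operand is pushed
    // immediately before the instruction that consumes it, so no
    // local.set/local.get shuffling is needed.
    fn serialise(expr: &Expr, out: &mut Vec<Instruction>) {
        match expr {
            Expr::LocalGet(index) => out.push(Instruction::LocalGet(*index)),
            Expr::Call { function, arguments } => {
                for argument in arguments {
                    serialise(argument, out);
                }
                out.push(Instruction::Call(*function));
            }
        }
    }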