I'm wondering whether this is possible with C++/CLI.
I'm trying to .NET-enable an interpreter for a Lisp-flavoured
language. The interpreter is written in NASM, which I have
managed to port to MASM. However, the VS2005 debugger
is giving me a really hard time: it doesn't seem to cope well
with mixing MASM and C++/CLI, so I am considering
rewriting the interpreter kernel in C++/CLI.
The control flow of the language doesn't really map onto
.NET method calls, so I will need to implement it some other way.
I'd prefer to do it by representing the execution state as a chain
of C++ objects operated on by virtual methods, so
that each state transition is a virtual method
call, which in turn calls the next, and so on, in a tail-recursive manner.
This is how the assembler program works, except that
there the objects are stacked on the processor stack rather than
chained, and it works nicely.
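Roughly, the shape I have in mind is something like this (the class and method names are made up, just to illustrate the idea):

    // Each execution state is an object; a state transition is a
    // virtual call that ends by invoking the next state in tail position.
    ref class State abstract {
    public:
        State^ next;
        virtual void Run() abstract;
    };

    ref class SomeTransition : public State {
    public:
        virtual void Run() override {
            // ... do the work for this step, possibly rewiring 'next' ...
            next->Run();   // tail call: must become a jump, not grow the stack
        }
    };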
To use this design in a C++ implementation, I need tail-call-optimized
virtual calls. When performing a complex computation,
the interpreter will make millions of chained calls before
it finally returns to the event handler that started the
computation, so without tail call elimination the stack would be
exhausted long before the result comes back.
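So the entry point, using the hypothetical State class from the sketch above, would be little more than:

    // The event handler just kicks off the chain; control only returns
    // here once the whole computation has finished.
    void RunComputation(State^ start) {
        start->Run();   // potentially millions of chained Run() calls
    }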
I know IL has provisions for tail calls (the tail. prefix),
but the docs I've seen don't state clearly how reliable this
is; they just say that the JIT will sometimes do it.
I also don't know whether the C++/CLI compiler will emit
tail-optimized call instructions (the C# compiler doesn't,
but C# is unsuitable for this task anyway).
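For example, with a trivially tail-recursive managed function like the one below (just a made-up probe), I'd want to know whether the compiler emits the tail. prefix on the recursive call (e.g. when inspecting the IL with ildasm) and whether the JIT then actually keeps the stack flat:

    // Probe: does the compiler mark this recursive call with "tail.",
    // and does the JIT honour it?
    ref class Probe {
    public:
        static int CountDown(int n, int acc) {
            if (n == 0)
                return acc;
            return CountDown(n - 1, acc + 1);   // call in tail position
        }
    };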
When asked about this, C++ and C# programmers tend
to argue that relying on tail call optimization is bad
style. That may be true for most tasks, but a threaded
interpreter would really benefit from being able to rely on it.
So, does anyone know how the compiler and the JIT
handle this? Is there a well-defined set of conditions
under which tail call optimization is guaranteed to
kick in?