From: Andre' Marien
Subject: Inline expansion versus threaded code compiling Prolog
Date: 
Message-ID: <3772@ima.ima.isc.com>
We have only partial answers to the two questions below: can you help us,
either with your personal experience or with pointers to the literature?


1. Inline expansion versus threaded code

Inline expansion produces larger executables than threaded code, so one could
think that while executing a threaded-code program there is more locality and
thus less paging. This could lead - especially for LARGE programs - to
performance degradation for inline-expanded code, or at least to a degradation
that sets in sooner than with threaded code. Our experience is quite the
contrary: threaded code degrades more suddenly and faster than inline code.
Of course, threaded code has a fixed overhead in executing the thread
- jmp ··@+ - but perhaps it is more important that threaded code has to fetch
the operands for each instruction, which destroys the locality. Another point
is that the fixed code of a threaded-code implementation can be quite small
- 64Kb, for instance - so that this code is probably always in real memory;
and by tuning the placement of the pieces of code, one can arrange for certain
benchmarks to run completely in the instruction cache, which explains why
threaded code performs quite well on small programs.
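
To make the comparison concrete, here is a minimal sketch of the two styles
for a toy instruction set. This is an illustration only, not code from any of
the systems discussed: it needs GCC or Clang for the computed gotos, and the
opcode names and the two-instruction program are invented for the example.

#include <stdio.h>
#include <stdint.h>

typedef void *cell;                   /* a code address or an operand */

/* Threaded version: the "compiled" program is a vector of code addresses
   interleaved with operands.  Each instruction ends with an indirect jump
   through the instruction pointer (the jump through the next code cell
   mentioned above) and has to pick its operand out of the code vector,
   which is the extra memory traffic we mention. */
static long run_threaded(void)
{
    intptr_t acc = 0;
    cell prog[] = {                   /* add 3; add 4; halt */
        &&op_add, (cell)(intptr_t)3,
        &&op_add, (cell)(intptr_t)4,
        &&op_halt
    };
    cell *ip = prog;

    goto *ip[0];

op_add:
    acc += (intptr_t)ip[1];           /* operand fetch from the code vector */
    ip += 2;
    goto *ip[0];                      /* dispatch: execute the thread */

op_halt:
    return (long)acc;
}

/* Inline-expanded version: the same program emitted as straight-line code;
   larger code, but no dispatch jump and no operand fetch. */
static long run_inline(void)
{
    long acc = 0;
    acc += 3;
    acc += 4;
    return acc;
}

int main(void)
{
    printf("threaded: %ld   inline: %ld\n", run_threaded(), run_inline());
    return 0;
}

The threaded program is the small prog vector plus one shared copy of the
instruction bodies; the inline version duplicates the bodies at every use,
which is exactly the space/locality trade-off discussed above.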
We know that certain compilers for PCs use threaded code, because space is
still a problem if you have only a few megabytes and no virtual memory.
We wonder whether production-quality compilers using threaded code exist for
larger computers. It may be true that on low-end Suns threaded code narrows
the speed gap, but for the larger systems this would surprise us.

2. Mixing interpreted and compiled code

In a lot of Prolog systems, Prolog code is divided into static code and dynamic
code: static code cannot be changed at run time, dynamic code can. Very often,
dynamic code is interpreted - by various techniques - and static code is
compiled. There are several ways to implement calls from compiled code into
interpreted code, and vice versa. Obviously, most Prolog implementors have
solved this, perhaps to their satisfaction. We are curious to know how others
did this, and we would also like pointers to literature on the subject - in the
context of Prolog, Lisp or any other language.

If you answer us directly, we'll post a summary to the net later.

Thanks for your help

·······@cs.kuleuven.ac.be       (mcvax!prlb2!kulcs!bimbart)
·····@sunbim                    (mcvax!prlb2!sunbim!andre for europe)
                                (sun!sunuk!sunbim!andre for US)
[The standard way to mix compiled and interpreted code in Lisp is to have a
"call routine" procedure that both the interpreter and compiled code use to
do all their calls.  It looks up the routine in the symbol table (if there
isn't already a pointer to it handy) and either calls the code or goes into
the interpreter.  There are a variety of compile- and run-time hacks to let
compiled routines call each other without going through the lookup.  I suspect
the manuals for various Lisp systems could be informative here.  -John]
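
As a rough illustration of that call-routine scheme - the struct layout, the
names, and the toy interpreter below are invented for the sketch, not the
internals of any particular Lisp or Prolog system - the dispatcher could look
like this in C:

#include <stdio.h>

typedef long value;                       /* stand-in for tagged runtime data */

struct symbol {
    const char *name;
    value (*compiled)(value);             /* non-NULL once compiled code exists */
    const char *interpreted_body;         /* otherwise, source kept for the interpreter */
};

/* toy "interpreter": a real system would walk the stored clauses / s-expression */
static value eval_interpreted(const char *body, value arg)
{
    printf("interpreting %s\n", body);
    return arg + 1;
}

/* the single call routine used by the interpreter and by compiled code alike */
static value call(struct symbol *callee, value arg)
{
    if (callee->compiled)                 /* pointer handy: enter compiled code */
        return callee->compiled(arg);
    return eval_interpreted(callee->interpreted_body, arg);
}

/* a compiled routine; it would itself call other routines through call(),
   unless the compiler has short-circuited the lookup into a direct call */
static value twice(value x) { return 2 * x; }

int main(void)
{
    struct symbol s_twice = { "twice", twice, NULL };
    struct symbol s_succ  = { "succ",  NULL,  "(lambda (x) (+ x 1))" };

    printf("%ld\n", call(&s_twice, 21));  /* goes straight to compiled code */
    printf("%ld\n", call(&s_succ,  41));  /* falls into the interpreter */
    return 0;
}

Because a dynamic routine can be recompiled or retracted at run time, only its
symbol entry has to change; callers that always go through call() never notice.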
--
Send compilers articles to ima!compilers or, in a pinch, to ······@YALE.EDU
Plausible paths are { decvax | harvard | yale | bbn}!ima
Please send responses to the originator of the message -- I cannot forward
mail accidentally sent back to compilers.  Meta-mail to ima!compilers-request