I'm a little fuzzy on the compilers used in Allegro (and CMUCL). I
assume they don't use a JIT compiler; rather, code is compiled and
loaded at dev-time, or interpreted at run-time.
If that's true, then does Allegro do any runtime optimizations? Since
Lisp is a dynamic language, a Lisp compiler could do a much better job
if it did type-aware specialization at runtime, if it had a better
picture of the type hierarchy, if it could propagate constants around,
etc. Is anything like this done in Allegro now?
By JIT, I don't specifically mean a Java JIT. A JVM must compile
mid-level bytecodes into machine code as fast as possible, which
doesn't leave much time for thorough optimizations. Lisp could
generate optimized low-level register-machine bytecodes at dev-time,
leaving final optimization and machine-code generation for run-time.
Lots of similar ideas are floating around: finite-state codegen,
Omniware, Self, Cyclone, etc.
············@yahoo.com writes:
> I'm a little fuzzy on the compilers used in Allegro (and CMUCL). I
> assume they don't use a JIT compiler; rather, code is compiled and
> loaded at dev-time, or interpreted at run-time.
If you do not call COMPILE or COMPILE-FILE, you will be running
interpreted code. There is no automagic JIT. However, you can load
compiled code into the runtime, or, if the runtime contains the
compiler, you can compile code on the fly within the running image.
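For instance, a minimal sketch (the function name is just for
illustration):

  (defun square (x)    ; interpreted (or minimally compiled) until...
    (* x x))

  (compile 'square)    ; ...now natively compiled, in-core, no files

  ;; Or load previously compiled code; the fasl extension varies by
  ;; implementation (.fasl, .x86f, ...):
  (load "square")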
On Mon, Mar 01, 2004 at 09:37:15AM -0800, ············@yahoo.com wrote:
> I'm a little fuzzy on the compilers used in Allegro (and CMUCL). I
> assume they don't use a JIT compiler; rather, code is compiled and
> loaded at dev-time, or interpreted at run-time.
There is nothing stopping a conforming Common Lisp program from invoking
the compiler at run-time on any code and subsequently loading and
running it. Some implementations provide an interpreter in addition to
the compiler, but others (SBCL, Corman, OpenMCL) have no interpreter at
all: they compile code quickly and run the compiled version instead.
> If that's true, then does Allegro do any runtime optimizations? Since
> Lisp is a dynamic language, a Lisp compiler could do a much better job
> if it did type-aware specialization at runtime, if it had a better
> picture of the type hierarchy, if it could propagate constants around,
> etc. Is anything like this done in Allegro now?
I sure hope so. I can't say much about Allegro, but CMUCL does
intensive type-inferencing and makes use of the information to pick
efficient representations and storage classes. It does copy
propagation, constant folding, trace scheduling, register allocation,
etc.
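For a flavor of what that buys you, here is a sketch (hypothetical
function, standard declarations); DISASSEMBLE will show the
specialized, unboxed code CMUCL generates for it:

  (defun sum-floats (v)
    (declare (type (simple-array single-float (*)) v)
             (optimize (speed 3) (safety 0)))
    (let ((s 0.0))
      (declare (type single-float s))
      (dotimes (i (length v) s)
        (incf s (aref v i)))))

  (disassemble 'sum-floats)   ; inline float adds, no generic arithmetic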
> By JIT, I don't specifically mean a Java JIT. A JVM must compile
> mid-level bytecodes into machine code as fast as possible, which
> doesn't leave much time for thorough optimizations. Lisp could
> generate optimized low-level register-machine bytecodes at dev-time,
> leaving final optimization and machine-code generation for run-time.
> Lots of similar ideas are floating around: finite-state codegen,
> Omniware, Self, Cyclone, etc.
You can generally tailor the aggressiveness of the compiler in various
areas by specifying certain optimization qualities either globally or
for indicated blocks of code. I agree, though, that run-time
recompilation of code with adjustments has some interesting
possibilities. Dynamic inlining of methods could be very fruitful for
certain programs (as in Self). I don't know of any CL compiler
which does this.
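As for the optimization qualities, a sketch with standard declarations
(the function itself is hypothetical):

  ;; Global default for everything compiled after this point:
  (declaim (optimize (speed 1) (safety 3) (debug 2)))

  ;; Local override: aggressive settings for this one hot function.
  (defun hot-loop (n)
    (declare (optimize (speed 3) (safety 0))
             (type fixnum n))
    (let ((acc 0))
      (declare (type fixnum acc))
      (dotimes (i n acc)
        ;; The LOGAND keeps the sum a fixnum, so the compiler can use
        ;; modular machine arithmetic throughout.
        (setf acc (logand most-positive-fixnum (+ acc i))))))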
--
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
In article <······················@mapcar.org>,
Matthew Danish <·······@andrew.cmu.edu> wrote:
> > If that's true, then does Allegro do any runtime optimizations? Since
> > Lisp is a dynamic language, a Lisp compiler could do a much better job
> > if it did type-aware specialization at runtime, if it had a better
> > picture of the type hierarchy, if it could propagate constants around,
> > etc. Is anything like this done in Allegro now?
>
> I sure hope so. I can't say much about Allegro, but CMUCL does
> intensive type-inferencing and makes use of the information to pick
> efficient representations and storage classes. It does copy
> propagation, constant folding, trace scheduling, register allocation,
> etc.
I don't think it does this optimization dynamically at run time. It's
done based on static analysis of the code when it's first compiled.
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
Barry Margolin <······@alum.mit.edu> wrote in message news:<····························@comcast.ash.giganews.com>...
> In article <······················@mapcar.org>,
> Matthew Danish <·······@andrew.cmu.edu> wrote:
>
> > > If that's true, then does Allegro do any runtime optimizations? Since
> > > Lisp is a dynamic language, a Lisp compiler could do a much better job
> > > if it did type-aware specialization at runtime, if it had a better
> > > picture of the type hierarchy, if it could propagate constants around,
> > > etc. Is anything like this done in Allegro now?
> >
> > I sure hope so. I can't say much about Allegro, but CMUCL does
> > intensive type-inferencing and makes use of the information to pick
> > efficient representations and storage classes. It does copy
> > propagation, constant folding, trace scheduling, register allocation,
> > etc.
>
> I don't think it does this optimization dynamically at run time. It's
> done based on static analysis of the code when it's first compiled.
Thanks for the info. I also think these optimizations are done at
compile-time, not run-time. If the compiler (CMUCL or Allegro) has a
decent interface, one could call it dynamically. But this might still
involve file I/O, which is slow relative to JIT compile time. If the
compiler were separated nicely, like FLINT (SML/NJ), one could do more
with it.
On Tue, Mar 02, 2004 at 08:56:19AM -0800, ············@yahoo.com wrote:
> Barry Margolin <······@alum.mit.edu> wrote in message news:<····························@comcast.ash.giganews.com>...
> > In article <······················@mapcar.org>,
> > Matthew Danish <·······@andrew.cmu.edu> wrote:
> >
> > > > If that's true, then does Allegro do any runtime optimizations? Since
> > > > Lisp is a dynamic language, a Lisp compiler could do a much better job
> > > > if it did type-aware specialization at runtime, if it had a better
> > > > picture of the type hierarchy, if it could propagate constants around,
> > > > etc. Is anything like this done in Allegro now?
> > >
> > > I sure hope so. I can't say much about Allegro, but CMUCL does
> > > intensive type-inferencing and makes use of the information to pick
> > > efficient representations and storage classes. It does copy
> > > propagation, constant folding, trace scheduling, register allocation,
> > > etc.
> >
> > I don't think it does this optimization dynamically at run time. It's
> > done based on static analysis of the code when it's first compiled.
First, yes, I misread his question.
> Thanks for the info. I also think these optimizations are done at
> compile-time, not run-time. If the compiler (CMUCL or Allegro) has a
> decent interface, one could call it dynamically. But this might still
> involve file I/O, which is slow relative to JIT compile time. If the
> compiler were separated nicely, like FLINT (SML/NJ), one could do more
> with it.
Unlike SML, which exists in a static world, the ANSI CL standard
requires an interface to the compiler that does not involve file I/O
(unless you're dealing with a compiler like GCL, which uses GCC and
temp files).
It's called COMPILE and it can be invoked at any time. In addition, I
am sure that you can fiddle with all sorts of fun compiler things at
run-time; I have done so in SBCL. One of the great parts about working
on most CL compilers is that you can dynamically modify many portions of
them directly from the top-level. Of course, if you screw up the
ability to compile a function, you may not be able to fix it without
restarting the lisp =)
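To illustrate (a toy, but all standard CL): building and compiling new
code at run-time, no files involved:

  (defun make-scaler (n)
    ;; Construct a lambda expression at run-time and hand it to
    ;; COMPILE; NIL as the name means "just return the function".
    (compile nil `(lambda (x) (* ,n x))))

  (funcall (make-scaler 3) 14)   ; => 42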
I'm sure the SBCL and CMUCL people don't mind patches for interesting
optimizations, so long as they don't break other things.
--
; Matthew Danish <·······@andrew.cmu.edu>
; OpenPGP public key: C24B6010 on keyring.debian.org
; Signed or encrypted mail welcome.
; "There is no dark side of the moon really; matter of fact, it's all dark."
············@yahoo.com writes:
> Barry Margolin <······@alum.mit.edu> wrote in message news:<····························@comcast.ash.giganews.com>...
> > In article <······················@mapcar.org>,
> > Matthew Danish <·······@andrew.cmu.edu> wrote:
[ SNIP ]
> > > I sure hope so. I can't say much about Allegro, but CMUCL does
> > > intensive type-inferencing and makes use of the information to pick
> > > efficient representations and storage classes. It does copy
> > > propagation, constant folding, trace scheduling, register allocation,
> > > etc.
> >
> > I don't think it does this optimization dynamically at run time. It's
> > done based on static analysis of the code when it's first compiled.
>
> Thanks for the info. I also think these optimizations are done at
> compile-time, not run-time. If the compiler (CMUCL or Allegro) has a
> decent interface, one could call it dynamically. But this might still
> involve file I/O, which is slow relative to JIT compile time. If the
> compiler were separated nicely, like FLINT (SML/NJ), one could do more
> with it.
CMU CL (and from what I can recall, Allegro CL) have the compiler
available at run-time, normally. Callable as COMPILE. There's also a
file-compiler that produces loadable compiled code (COMPILE-FILE). The
latter definitely involves I/O, the former shouldn't, apart from
paging and/or swapping (well, some compilation diagnostics may be
written to "the screen").
//Ingvar
--
When it doesn't work, it's because you did something wrong.
Try to do it the right way, instead.
Ingvar Mattsson wrote:
> ············@yahoo.com writes:
>
>
>>Barry Margolin <······@alum.mit.edu> wrote in message news:<····························@comcast.ash.giganews.com>...
>>
>>>In article <······················@mapcar.org>,
>>> Matthew Danish <·······@andrew.cmu.edu> wrote:
>
> [ SNIP ]
>
>>>>I sure hope so. I can't say much about Allegro, but CMUCL does
>>>>intensive type-inferencing and makes use of the information to pick
>>>>efficient representations and storage classes. It does copy
>>>>propagation, constant folding, trace scheduling, register allocation,
>>>>etc.
>>>
>>>I don't think it does this optimization dynamically at run time. It's
>>>done based on static analysis of the code when it's first compiled.
>>
>>Thanks for the info. I also think these optimizations are done at
>>compile-time, not run-time. If the compiler (CMUCL or Allegro) has a
>>decent interface, one could call it dynamically. But this might still
>>involve file I/O, which is slow relative to JIT compile time. If the
>>compiler were separated nicely, like FLINT (SML/NJ), one could do more
>>with it.
>
>
> CMU CL (and from what I can recall, Allegro CL) have the compiler
> available at run-time, normally. Callable as COMPILE. There's also a
> file-compiler that produces loadable compiled code (COMPILE-FILE). The
> latter definitely involves I/O, the former shouldn't, apart from
> paging and/or swapping (well, some compilation diagnostics may be
> written to "the screen").
Actually, COMPILE is ANSI. It is not merely a feature of one or two
implementations (or three, or four, or five, if you understand my
drift). :)
Cheers
--
marco
Marco Antoniotti <·······@cs.nyu.edu> writes:
> Ingvar Mattsson wrote:
[ SNIP ]
> > CMU CL (and from what I can recall, Allegro CL) have the compiler
> > available at run-time, normally. Callable as COMPILE. There's also a
> > file-compiler that produces loadable compiled code (COMPILE-FILE). The
> > latter definitely involves I/O, the former shouldn't, apart from
> > paging and/or swapping (well, some compilation diagnostics may be
> > written to "the screen").
>
> Actually, COMPILE is ANSI. It is not merely a feature of one or two
> implementations (or three, or four, or five, if you understand my
> drift). :)
Well, no, but in those implementations it is available as a compiler
to native code. As far as I recall, COMPILE *is* allowed to do just
"minimal compilation" (effectively macro-expansion).
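Even then, the standard requires the result of COMPILE to be of type
COMPILED-FUNCTION, however much real compilation happened underneath:

  (let ((f (compile nil '(lambda (x) (* x x)))))
    (values (compiled-function-p f)   ; => T
            (funcall f 7)))           ; => 49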
In theory, one could probably manage to lose COMPILE for a
"special-purpose written application core" too.
//Ingvar
--
Warning: Pregnancy can cause birth from females
In article <··············@gruk.tech.ensign.ftech.net>,
Ingvar Mattsson <······@cathouse.bofh.se> wrote:
> In theory, one could probably manage to lose COMPILE for a
> "special-purpose written application core" too.
Yes, many implementations have options to remove unnecessary components
from the runtime environment, to reduce the size of saved images (this
is sometimes called a "tree shaker"). The compiler is often one of the
first things removed, as many applications don't make any use of runtime
function definition.
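The image-saving entry points are all implementation-specific; from
memory, so treat the exact names and arguments as a sketch:

  (ext:save-lisp "app.core")              ; CMUCL
  (sb-ext:save-lisp-and-die "app.core")   ; SBCL
  (excl:dumplisp :name "app.dxl")         ; Allegro CL

Tree-shaking the compiler out is typically a separate delivery step
layered on top of this.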
--
Barry Margolin, ······@alum.mit.edu
Arlington, MA
*** PLEASE post questions in newsgroups, not directly to me ***
Ingvar Mattsson wrote:
> Marco Antoniotti <·······@cs.nyu.edu> writes:
>
>
>>Ingvar Mattsson wrote:
>
> [ SNIP ]
>
>>>CMU CL (and from what I can recall, Allegro CL) have the compiler
>>>available at run-time, normally. Callable as COMPILE. There's also a
>>>file-compiler that produces loadable compiled code (COMPILE-FILE). The
>>>latter definitely involves I/O, the former shouldn't, apart from
>>>paging and/or swapping (well, some compilation diagnostics may be
>>>written to "the screen").
>>
>>Actually, COMPILE is ANSI. It is not merely a feature of one or two
>>implementations (or three, or four, or five, if you understand my
>>drift). :)
>
>
> Well, no, but in those implementations it is available as a compiler
> to native code. As far as I recall, COMPILE *is* allowed to do just
> "minimal compilation" (effectively macro-expansion).
More or less true. But the point is that COMPILE is ANSI. Hence you
are guaranteed to have COMPILE available whenever you write code.
>
> In theory, one could probably manage to lose COMPILE for a
> "special-purpose written application core" too.
Yes, but then you could not claim to have a conforming implementation,
since
(mapcar #'compile '(foo bar baz))
would not run in your special-purpose written application core.
Cheers
--
Marco