From: ···@sef-pmax.slisp.cs.cmu.edu
Subject: Re: Why are interpreters so slow today
Date: 
Message-ID: <CoMCCM.1yu.3@cs.cmu.edu>
When Common Lisp imposed the requirement that the behavior of interpreted
code and compiled code had to be the same (except for execution speed), a
lot of implementations went over to a system whereby the interpreter was
really a fast incremental compiler (perhaps with some hairy optimizations
turned off) followed by a call to the compiled code.  Or they use the
compiler's front end processing.  Or they go to a lot of trouble to do all
the analysis to make sure that the interpreted code doesn't deviate from
the compiled code.  In any case, this is *much* slower than interpreters
that don't have to mimic a compiler or that can cut a few corners for
efficiency's sake.  That's OK, because it also eliminates much of the need
to run big things in interpreted form.  And it makes debugging much easier,
saving great heaps of microseconds over the lifetime of a project.
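
A minimal sketch of the compile-and-call idea (the name is made up, and
real implementations cache the result and handle lexical environments,
which this toy does not):

    ;; "Interpret" a form by wrapping it in a zero-argument LAMBDA,
    ;; compiling that, and calling the compiled function.
    (defun eval-by-compiling (form)
      (funcall (compile nil `(lambda () ,form))))

    ;; (eval-by-compiling '(+ 1 2))  =>  3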

"Compile early and often!" is our motto.

-- Scott

===========================================================================
Scott E. Fahlman			Internet:  ····@cs.cmu.edu
Senior Research Scientist		Phone:     412 268-2575
School of Computer Science              Fax:       412 681-5739
Carnegie Mellon University		Latitude:  40:26:33 N
5000 Forbes Avenue			Longitude: 79:56:48 W
Pittsburgh, PA 15213
===========================================================================

From: Henry G. Baker
Subject: Re: Why are interpreters so slow today
Date: 
Message-ID: <hbakerConFnK.HtL@netcom.com>
In article <············@cs.cmu.edu> ···@sef-pmax.slisp.cs.cmu.edu writes:
>When Common Lisp imposed the requirement that the behavior of interpreted
>code and compiled code had to be the same (except for execution speed)...

A more cynical person might have said:

"When the baroque syntax of Common Lisp imposed the requirement that
interpreted code had to be parsed by a relatively sophisticated parser
(now a completely 'open', but slow, code-walker), and the speed of
interpretation was forced to a mere walk, implementations were forced
to abandon interpretation as a feasible alternative method of execution."

Aside from the silliness of Common Lisp's 'lambda lists', there is no
reason for a Common Lisp interpreter to be that slow on simple
function calls.  One simply arranges for #'(lambda ... ) to translate
quickly into a somewhat more efficient form.  Due to the complexity of
Common Lisp, such an interpreter would be most easily mechanically
generated, but even that isn't terribly difficult.
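
To make that concrete, a toy sketch (illustrative names, a-list
environments, and only the simple case handled; nothing here is any
real implementation's code):

    ;; Tiny evaluator for the sketch: symbols are looked up in an a-list
    ;; environment, other atoms self-evaluate, PROGN is handled specially,
    ;; and anything else is treated as a call to a host function.
    (defun toy-eval (form env)
      (cond ((symbolp form) (cdr (assoc form env)))
            ((atom form) form)
            ((eq (car form) 'progn)
             (let (val)
               (dolist (f (cdr form) val)
                 (setf val (toy-eval f env)))))
            (t (apply (symbol-function (car form))
                      (mapcar (lambda (f) (toy-eval f env)) (cdr form))))))

    ;; Pre-scan the lambda list once.  If it contains no &-keywords, build
    ;; a closure that binds required parameters positionally; otherwise
    ;; punt to the (unshown) general lambda-list parser.
    (defun simple-lambda-list-p (lambda-list)
      (notany (lambda (x) (member x lambda-list-keywords)) lambda-list))

    (defun make-interpreted-closure (lambda-list body env)
      (if (simple-lambda-list-p lambda-list)
          (lambda (&rest args)
            (toy-eval `(progn ,@body) (pairlis lambda-list args env)))
          (error "hairy lambda list; general parsing not shown here")))

    ;; (funcall (make-interpreted-closure '(x y) '((+ x y)) '()) 3 4)  =>  7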

The major reason is that it isn't considered important, so no one cares.
From: Jeff Dalton
Subject: Re: Why are interpreters so slow today
Date: 
Message-ID: <CotpAz.A8F@cogsci.ed.ac.uk>
In article <················@netcom.com> ······@netcom.com (Henry G. Baker) writes:
>In article <············@cs.cmu.edu> ···@sef-pmax.slisp.cs.cmu.edu writes:
>>When Common Lisp imposed the requirement that the behavior of interpreted
>>code and compiled code had to be the same (except for execution speed)...
>
>A more cynical person might have said:
>
>"When the baroque syntax of Common Lisp imposed the requirement that
>interpreted code had to be parsed by a relatively sophisticated parser
>(now a completely 'open', but slow, code-walker), and the speed of
>interpretation was forced to a mere walk, implementations were forced
>to abandon interpretation as a feasible alternative method of execution."

What is this parsing problem?  Just lambda-lists?

>Aside from the silliness of Common Lisp's 'lambda lists', there is no
>reason for a Common Lisp interpreter to be that slow on simple
>function calls.  One simply arranges for #'(lambda ... ) to translate
>quickly into a somewhat more efficient form.  Due to the complexity of
>Common Lisp, such an interpreter would be most easily mechanically
>generated, but even that isn't terribly difficult.
>
>The major reason is that it isn't considered important, so no one cares.

I'm not convinced that the syntax of lambda-lists is so important.
Some Common Lisps, at least, did use a-lists for binding, as a
consequence of lexical scoping, and that would make them slower
than typical pre-CL interpreters.

But I don't see why &OPTIONAL, &REST, etc would make things much
slower when they weren't used.  The interpreter would have to check
whether a parameter was one of the special names as it went about
binding the parameters, but I wouldn't think that would slow it to
a walk.

It's also worth bearing in mind that pre-CL Lisps such as Franz Lisp
handled &OPTIONAL and &REST when macroexpanding DEFUN forms.  Underneath,
such functions used an LEXPR mechanism.  (For those who haven't
encountered this: an LEXPR takes a single parameter, which is bound to
the number of arguments, and the value of argument i is obtained by
writing (arg i).)  I don't see why similar techniques couldn't
be used in Common Lisp.
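
For concreteness, here is roughly what such an expansion amounts to,
rendered in portable CL rather than real Franz Lisp (FOO and its
defaults are made up; &REST plus LENGTH and NTH stand in for the lexpr
argument count and (arg i)):

    ;; Hypothetical source:  (defun foo (x &optional (y 10) &rest z) ...)
    ;; Approximate expansion: one "take all the arguments" entry point
    ;; plus explicit argument-count checks.
    (defun foo (&rest args)
      (let* ((n (length args))
             (x (nth 0 args))
             (y (if (> n 1) (nth 1 args) 10))   ; default only when absent
             (z (when (> n 2) (nthcdr 2 args))))
        (list x y z)))

    ;; (foo 1)        =>  (1 10 NIL)
    ;; (foo 1 2 3 4)  =>  (1 2 (3 4))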

-- jeff
From: Henry G. Baker
Subject: Re: Why are interpreters so slow today
Date: 
Message-ID: <hbakerCouqvA.H72@netcom.com>
In article <··········@cogsci.ed.ac.uk> ····@aiai.ed.ac.uk (Jeff Dalton) writes:
>What is this parsing problem?  Just lambda-lists?
>>Aside from the silliness of Common Lisp's 'lambda lists', there is no
>>reason for a Common Lisp interpreter to be that slow on simple
>>function calls.  One simply arranges for #'(lambda ... ) to translate
>>quickly into a somewhat more efficient form.  Due to the complexity of
>>Common Lisp, such an interpreter would be most easily mechanically
>>generated, but even that isn't terribly difficult.
>
>I'm not convinced that the syntax of lambda-lists is so important.
>Some Common Lisps, at least, did use a-lists for binding, as a
>consequence of lexical scoping, and that would make them slower
>than typical pre-CL interpreters.
>
>But I don't see why &OPTIONAL, &REST, etc would make things much
>slower when they weren't used.  The interpreter would have to check
>whether a parameter was one of the special names as it went about
>binding the parameters, but I wouldn't think that would slow it to
>a walk.

Well, if you haven't actually parsed this nonsense, you can't appreciate
what a crock it is.  Between the convoluted syntax of the lambda-lists
themselves, and the Fortran-like declaration of essential things like
'special' variables, Common Lisp lambda-expressions are death to efficient
interpretation.  The complexity of this stuff, plus the non-guarantee
that 'tails' of argument lists grabbed by &rest won't be side-effected,
means that compile-time partial evaluation is nearly impossible.  (Yes,
I know it is done, but by compilers that care more about speed than
correctness.)
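
A purely illustrative example of the &rest hazard (the standard permits,
but does not require, the &rest list to share structure with the last
argument to APPLY):

    ;; Looks harmless, but if NUMBERS shares structure with the caller's
    ;; list, DELETE-IF may rewrite the caller's data in place.
    (defun keep-evens (&rest numbers)
      (delete-if #'oddp numbers))

    (defparameter *data* (list 1 2 3 4))
    (apply #'keep-evens *data*)
    ;; *DATA* may now be unchanged or partially destroyed, depending on
    ;; the implementation; that uncertainty is exactly what gets in the
    ;; way of compile-time partial evaluation of &rest arguments.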

Read my paper, and weep.

Baker, H.G.  "Pragmatic Parsing in Common Lisp".  ACM Lisp Pointers IV, 2
(April-June 1991), 3-15.
From: Jeff Dalton
Subject: Re: Why are interpreters so slow today
Date: 
Message-ID: <CooD13.Cq9@cogsci.ed.ac.uk>
In article <············@cs.cmu.edu> ···@sef-pmax.slisp.cs.cmu.edu writes:
>
>When Common Lisp imposed the requirement that the behavior of interpreted
>code and compiled code had to be the same (except for execution speed), a
>lot of implementations went over to a system whereby the interpreter was
>really a fast incremental compiler (perhaps with some hairy optimizations
>turned off) followed by a call to the compiled code.  Or they use the
>compiler's front end processing.  Or they go to a lot of trouble to do all
>the analysis to make sure that the interpreted code doesn't deviate from
>the compiled code.  In any case, this is *much* slower than interpreters
>that don't have to mimic a compiler or that can cut a few corners for
>efficiency's sake.  

And if the interpreters didn't do some preprocessing, they tended
to evaluate variables by looking them up in an a-list, which is slower
than the shallow binding used by interpreters of the pre-Common-Lisp era.
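
For concreteness, a toy contrast between the two lookup disciplines
(illustrative code, not any real interpreter):

    ;; Deep binding: every variable reference scans an a-list environment.
    (defun deep-lookup (var env)
      (let ((binding (assoc var env)))
        (if binding
            (cdr binding)
            (error "unbound variable ~S" var))))

    ;; Shallow binding: the current value sits in a fixed cell on the
    ;; symbol itself, so a reference is one slot access; the cost moves to
    ;; binding time, which must save and restore the old value.
    (defun shallow-lookup (var)
      (symbol-value var))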

>That's OK, because it also eliminates much of the need
>to run big things in interpreted form.  And it makes debugging much easier,
>saving great heaps of microseconds over the lifetime of a project.
>
>"Compile early and often!" is our motto.

I find it amusing that some C++ programmers here complain about how
long it takes to rebuild after changes, etc, when I, using Common
Lisp, recompile and rebuild all the time and think nothing of it,
even though, by using Lisp, I have far more ways to avoid rebuilding
than they do.

They also have to decide whether to compile with -g or not, etc,
while I've found one OPTIMIZE setting for Lucid CL that's fast
and gives me all the debugging information I normally need.
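
Something along these lines, for anyone who hasn't played with OPTIMIZE
proclamations (the particular qualities and values are illustrative, not
a recommendation for any specific Lucid release):

    (proclaim '(optimize (speed 2) (safety 1) (debug 2) (compilation-speed 0)))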

This is the reverse of the normal image of CL being slow and hard
to deal with, making you mess around with OPTIMIZE variations, etc,
while C and C++ are neat, quick, and simple.

-- jeff