From: Josef Eschgfaeller
Subject: Name of function
Date: 
Message-ID: <Pine.LNX.4.04.9906290104560.13814-100000@arbzi.zuhause.fe>
(1) How can I find the name of a function held in a variable? As in

       (defun table (f)
         (format t "~a~%-----------------~%" (namestring-of-function f))
         (write-values-of f))
   
(2) Consider

       (defun num-list (a b &optional (d 1))
         (do ((x a (+ x d))
              (li nil))
             ((> x b) (nreverse li))
           (push x li)))
   
       (defun num-vector (a b &optional (d 1))
         (do* ((n (+ 1 (floor (- b a) d)))
               (v (make-array n))
               (k 0 (+ 1 k))
               (i a (+ i d)))
              ((> i b) v)
           (setf (svref v k) i)))
   
    A little surprisingly, num-vector is slower than num-list when
    not compiled; it becomes faster only after compilation. Somewhat
    vaguely, can one say that before compilation only the number of
    operations counts, while after compilation it also matters how
    the program moves through memory?

(3) It seems that CLISP can only compile to byte-code (with an
    improvement factor of around 10 on my Linux). Can other systems
    compile to machine code and what about the difference in speed?

J. Eschgfaeller

From: Barry Margolin
Subject: Re: Name of function
Date: 
Message-ID: <76Td3.990$KM3.236584@burlma1-snr2>
In article <········································@arbzi.zuhause.fe>,
Josef Eschgfaeller  <···@felix.unife.it> wrote:
>
>(1) How can I find the name of a function held in a variable? As in
>
>       (defun table (f)
>       (format t "~a~%-----------------~%" (namestring-of-function f))
>       (write-values-of f))

There's no standard way to get this.  However, in most implementations the
printed representation of a function object will include its name, so
doing:

        (format t "~A~%-----------------~%" f)

will display the name.

The other thing you could do is require the callers to pass the symbol
rather than the function.  All the functions that accept function arguments
also accept symbols and will automatically call SYMBOL-FUNCTION when
necessary.  This won't work for local functions, though.
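
For instance, a minimal sketch along those lines (WRITE-VALUES-OF is
the hypothetical helper from the original post):

        (defun table (f)
          ;; F is a symbol naming a function, e.g. (table 'sin).
          ;; A symbol prints as the function's name under ~A,
          (format t "~A~%-----------------~%" f)
          ;; and FUNCALL, APPLY, etc. accept it in place of a function.
          (write-values-of f))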

>(2) Consider
>
>       (defun num-list (a b &optional (d 1))
>       (do ((x a (+ x d)) (li nil))
>       ((> x b) (nreverse li)) (push x li)))
>   
>       (defun num-vector (a b &optional (d 1))
>       (do* ((n (+ 1 (floor (- b a) d))) (v (make-array n))
>       (k 0 (+ 1 k)) (i a (+ i d)))
>       ((> i b) v) (setf (svref v k) i)))

Are you *trying* to make life difficult for the people who are volunteering
assistance, by completely messing up the formatting of your functions?

>    A little surprisingly, when not compiled, num-vector is slower
>    than num-list. It becomes superior after compilation.
>    Somewhat vaguely, can one say that before compilation
>    only the number of operations counts, after it it becomes
>    also important how the program runs through memory?

Before compilation there's more interpreter overhead, and the
interpreter tends to produce its own garbage; this may swamp the cost
of your algorithm.
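
You can see the crossover by timing both versions before and after
compilation, e.g. (a sketch; TIME's report format varies by
implementation):

        (time (progn (num-list 1 100000) nil))      ; interpreted
        (time (progn (num-vector 1 100000) nil))
        (mapc #'compile '(num-list num-vector))
        (time (progn (num-list 1 100000) nil))      ; compiled
        (time (progn (num-vector 1 100000) nil))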

>(3) It seems that CLISP can only compile to byte-code (with an
>    improvement factor of around 10 on my Linux). Can other systems
>    compile to machine code and what about the difference in speed?

CMUCL and most commercial CL's compile to machine code, and the difference
is probably another order of magnitude or so.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.
From: ·····@corman.net
Subject: Re: Name of function
Date: 
Message-ID: <7lb3kh$nsl$1@nnrp1.deja.com>
In article <····················@burlma1-snr2>,
  Barry Margolin <······@bbnplanet.com> wrote:
> In article <Pine.LNX.4.04.9906290104560.13814-······@arbzi.zuhause.fe>,
> Josef Eschgfaeller  <···@felix.unife.it> wrote:
> >
> >(1) How can I find the name of a function held in a variable? As in
> >
> >       (defun table (f)
> >       (format t "~a~%-----------------~%" (namestring-of-function f))
> >       (write-values-of f))
>
> There's no standard way to get this.  However, in most

There is a standard way. It is the third returned value of
FUNCTION-LAMBDA-EXPRESSION.

The implementation is not required to support this, but if the
name is available this is a standard way to retrieve it.
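
For example, to pull out just the name (and NIL when the
implementation doesn't record one):

        (defun name-of-function (f)
          ;; The third value of FUNCTION-LAMBDA-EXPRESSION is the name,
          ;; if the implementation keeps it around.
          (nth-value 2 (function-lambda-expression f)))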

Roger Corman



Sent via Deja.com http://www.deja.com/
Share what you know. Learn what you don't.
From: Lars Marius Garshol
Subject: Re: Name of function
Date: 
Message-ID: <wkn1xi67qv.fsf@ifi.uio.no>
* ·····@corman.net
| 
| There is a standard way. It is the third returned value of
| FUNCTION-LAMBDA-EXPRESSION.
| 
| The implementation is not required to support this, but if the
| name is available this is a standard way to retrieve it.

Interesting.

Allegro 5.0:

USER(2): (defun test(a)
           (* a a))
TEST
USER(3): (test 24)
576
USER(4): (function-lambda-expression #'test)
(LAMBDA (A) (BLOCK TEST (* A A)))
NIL
TEST
USER(5): 


CLisp:

[1]> (defun test(a) (* a a))
TEST
[2]> (function-lambda-expression #'test)
(LAMBDA (A) (DECLARE (SYSTEM::IN-DEFUN TEST)) (BLOCK TEST (* A A))) ;
#(NIL NIL NIL NIL ((DECLARATION OPTIMIZE DECLARATION))) ;
TEST
[3]>


CMUCL:

* (defun test(a) (* a a))

TEST
* (function-lambda-expression #'test)

(LAMBDA (A) (BLOCK TEST (* A A)))
NIL
TEST
*


--Lars M.
From: Christopher Browne
Subject: Re: Name of function
Date: 
Message-ID: <8mVd3.97283$_m4.1046581@news2.giganews.com>
On Tue, 29 Jun 1999 01:11:16 +0200, Josef Eschgfaeller
<···@felix.unife.it> wrote: 
>(3) It seems that CLISP can only compile to byte-code (with an
>    improvement factor of around 10 on my Linux). Can other systems
>    compile to machine code and what about the difference in speed?

Add in...

On Mon, 28 Jun 1999 15:05:18 GMT, Barry Margolin <······@bbnplanet.com>
wrote: 
>In article <·················@earthlink.net>,
>Llew Thomas  <····@earthlink.net> wrote:
>>Thanks for the answer. McCarthy had said that Lisp should always be able
>>to be broken down into primitives, but also said that this was nearly
>>unworkable due to impracticality (Lisp 1.5 Programmer's Manual, IIRC).
>>It seems to me someone gets a good idea, and then breaks that idea so
>>that it will work. :(
>
>However, I think that you will find that in most Lisp implementations, at
>least half the code is written in Lisp.  So if you get the source code to
>any Lisp implementation (source to GCL and CMUCL are both available) you
>should be able to see their implementors' breakdowns.

How do they tend to be implemented at the lower level?

My question is more directed towards the "portable + native"
implementations than those that aren't, and I'm *not* talking about
CLISP, as, as commented above, it's 'merely' bytecode compiled. 

Contrast (with something I understand better): The GCC "family of
compilers" has a bunch of front-ends that generate the Lisp-like RTL,
which machine-oriented back-ends (that pretty much *are* Lisp code) then
turn into machine-specific code.

Do CL implementations generate some machine-independent representation
(somewhat like RTL) which then gets rendered to machine language? 

Or do they elide that layer, so that the CL primitives have to be
rewritten for each architecture? 
-- 
"We build confusing systems; we haven't done a very good job," Allchin
(Microsoft Sr. VP) admitted. - InfoWorld
········@hex.net- <http://www.hex.net/~cbbrowne/langlisp.html>
From: Duane Rettig
Subject: Re: Name of function
Date: 
Message-ID: <41zeusvpl.fsf@beta.franz.com>
········@news.hex.net (Christopher Browne) writes:

> On Mon, 28 Jun 1999 15:05:18 GMT, Barry Margolin <······@bbnplanet.com>
> wrote: 
> >However, I think that you will find that in most Lisp implementations, at
> >least half the code is written in Lisp.  So if you get the source code to
> >any Lisp implementation (source to GCL and CMUCL are both available) you
> >should be able to see their implementors' breakdowns.
> 
> How do they tend to be implemented at the lower level?

There are various methods of implementation, both in the compiler and
in the functionality that makes up the defined set of CL.

Sometimes the compiler defines the implementation.  For example, in
Allegro CL, the definition for the function CAR is

(defun car (list)
  (car list))

Obviously, this code won't work unless it is compiled.  The actual
implementation of a call to CAR is known by the compiler, and is more
complex.  Other times the functionality is defined in terms of other,
more primitive functions.
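
For example, something in the spirit of (not Allegro's actual source):

        ;; SECOND need not know how to access a cons; it is simply
        ;; defined in terms of the more primitive CAR and CDR.
        (defun second (list)
          (car (cdr list)))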

Other functionality in Allegro CL (what we would call "primcalls")
is written in pseudo-CL code, which is then compiled by the CL
compiler with some extensions, creating assembler code native to
the machine.  The source code for this looks something like:

(def-runtime-q new-ratio (num den)
  (let ((rat (q-allocate-heap-other #md-ratio-type-code #md-ratio-size)))
    (setf (ls md-ratio num rat) num)
    (setf (ls md-ratio den rat) den)
    rat))

The last aspect of CL building is the compiler itself.  The Allegro CL
compiler has many phases, which are under constant review.  At the very
least, there is always a rewrite phase, where source code (internally
represented) is rewritten to produce more source code, presumably at
lower levels.  This phase is almost completely architecture-independent.
Another phase takes the source and generates structures which can be
analyzed in a fairly architecture-independent manner, and eventually
translates to more architecture-specific structures, called quads.
Registers are assigned, peephole optimization is done, and an in-lisp
assembler is invoked for the generation of the final bit patterns (or,
in the case of primitives, assembler source is generated for later
machine-specific assembly and linking).  It can be said of Allegro CL
that the code generated for the normal lisp codevectors is completely
relocatable (i.e. position independent), so there is never any need
for relocation bits.

> My question is more directed towards the "portable + native"
> implementations than those that aren't, and I'm *not* talking about
> CLISP, as, as commented above, it's 'merely' bytecode compiled. 

The term "portable + native" is really an oxymoron, unless
you are restricting the definition of "native" to travel between
machines of the same architecture.  The byte-compiling of Clisp
allows it to be more portable, but not native, because the
virtual byte engine is at a higher level than the native
architectures of the machines on which it runs.  More native
compilers (like Allegro CL) generate object code that is not
portable; however, the object code they generate is native, giving
them more of a chance to match the native speeds of other languages.

> Contrast (with something I understand better): The GCC "family of
> compilers" has a bunch of front-ends that generate the Lisp-like RTL,
> which machine-oriented back-ends (that pretty much *are* Lisp code) then
> turn into machine-specific code.

This is the way that Allegro CL (and some others) works.  Yet another
method used by other lisps is to compile Lisp code to C code, and
to rely on the portable aspects of C so that it acts as sort of a
portable assembler.  And, finally, one compiler called Eclipse will
translate Lisp source code to C source with the intention of
compatibility and intercommunication between Lisp and C.

> Do CL implementations generate some machine-independent representation
> (somewhat like RTL) which then gets rendered to machine language? 

Yes, but these tend to vary per implementation.  On Allegro CL, you
can see one stage of the implementation by looking at the last stage
before assembly; try (setq comp::*hack-compiler-output* '(foo)) and
then define and compile a function named FOO and see what happens.
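
That is, something like (an Allegro-specific session sketch):

        (setq comp::*hack-compiler-output* '(foo))
        (defun foo (x) (+ x 1))
        (compile 'foo)   ; the stage just before assembly gets dumped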

> Or do they elide that layer, so that the CL primitives have to be
> rewritten for each architecture? 

It is a design decision made by each lisp development team. Obviously
code and concept reuse is a high priority, but that must be tempered
by the need for native compilation, which precludes complete reuse.

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Hrvoje Niksic
Subject: Re: Name of function
Date: 
Message-ID: <879092sqcx.fsf@pc-hrvoje.srce.hr>
Duane Rettig <·····@franz.com> writes:

> > My question is more directed towards the "portable + native"
> > implementations than those that aren't, and I'm *not* talking
> > about CLISP, as, as commented above, it's 'merely' bytecode
> > compiled.
> 
> The term "portable + native" is really an oxymoron, unless you are
> restricting the definition of "native" to travel between machines of
> the same architecture.

I believe what Christopher meant by "portable + native" was a native
compiler that supports many different target architectures, presumably
with a framework for adding new targets.  Gcc is usually mentioned as
an example of a "portable native" compiler.
From: Christopher Browne
Subject: Re: Name of function
Date: 
Message-ID: <o0de3.262$VY1.42828@news3.giganews.com>
On 29 Jun 1999 11:33:26 -0700, Duane Rettig <·····@franz.com> wrote:
>········@news.hex.net (Christopher Browne) writes:
>
>> On Mon, 28 Jun 1999 15:05:18 GMT, Barry Margolin <······@bbnplanet.com>
>> wrote: 
>> >However, I think that you will find that in most Lisp implementations, at
>> >least half the code is written in Lisp.  So if you get the source code to
>> >any Lisp implementation (source to GCL and CMUCL are both available) you
>> >should be able to see their implementors' breakdowns.
>> 
>> How do they tend to be implemented at the lower level?
>
>There are various methods of implementation, both in the compiler and
>in the functionality that makes up the defined set of CL.
>
>Sometimes the compiler defines the implementation.  For example, in
>Allegro CL, the definition for the function CAR is
>
>(defun car (list)
>  (car list))
>
>Obviously, this code won't work unless it is compiled.  The actual
>implementation of a call to CAR is known by the compiler, and is more
>complex. Other times the functionality is defined in terms of other,
>more primitive functions.

Hmmm...  This is making me think back to FORTH days of "immediate
words," which may not be far from the mark.

>Other functionality in Allegro CL (what we would call "primcalls")
>are written in a pseudo-CL code, which is then compiled by the CL
>compiler with some extensions, and creates assembler code native to
>the machine.  The source code for this looks something like:
>
>(def-runtime-q new-ratio (num den)
>  (let ((rat (q-allocate-heap-other #md-ratio-type-code #md-ratio-size)))
>    (setf (ls md-ratio num rat) num)
>    (setf (ls md-ratio den rat) den)
>    rat))

I'm a little bit surprised that CAR *isn't* defined this way, but
suppose that there may be some forms/functions where it makes sense to
do code generation differently.  

For instance, functions tend to use CAR, CDR, and the like together,
and by keeping that "high level" (e.g. the parsing "mouth" handles its
compilation), it becomes possible to combine a bunch of operations
together.

Whilst with 'new-ratio', which is likely used less (I don't recognize
it), it makes more sense to just plain "toss some pseudo-assembler
into the pipeline."

>The last aspect of CL building is the compiler itself.  The Allegro CL
>compiler has many phases, which are under constant review.  At the very
>least, there is always a rewrite phase, where source code (internally
>represented) is rewritten to produce more source code, presumably at
>lower levels.  This phase is almost completely architecture-independent.
>Another phase takes the source and generates structures which can be
>analyzed at a fairly architecture-independent manner, and eventually
>translates to more architecture-specific structures, called quads.
>Registers are assigned, peephole is done, and an in-lisp assembler
>is invoked for the generation of the final bit patterns (or in the
>case of primitives, assembler source is generated for later machine-
>specific assembly and linking.  It can be said of Allegro CL that the
>code generated for the normal lisp codevectors is completely
>relocatable (i.e. position independent), so there is never any
>need for relocation bits.

Not entirely unlike what I'd expect.

>> My question is more directed towards the "portable + native"
>> implementations than those that aren't, and I'm *not* talking about
>> CLISP, as, as commented above, it's 'merely' bytecode compiled. 
>
>The term "portable + native" is really an oxymoron, unless
>you are restricting the definition of "native" to travel between
>machines of the same architecture.  The byte-compiling of Clisp
>allows it to be more portable, but not native, because the
>virtual byte engine is at a higher level than the native
>architectures of the machines on which it runs.  More native
>compilers (like Allegro CL) generate object code that is not
>portable, however the object code it generates is native, giving
>it more of a chance to match the native speeds of other languages.

I was possibly not as precise as I should have been; I was meaning
"portable" in the sense that the version of Lisp is capable of running
on multiple architectures (thus a scheme that has only ever been
implemented for IA-32 would be disallowed), as well as being "native" in
the sense of generating native code for the architecture on which it is
running. 

Thus, CLISP, which handles portability via bytecode, does not qualify. 

Allegro CL, which can generate object code for multiple architectures,
surely does. 

>> Contrast (with something I understand better): The GCC "family of
>> compilers" has a bunch of front-ends that generate the Lisp-like RTL,
>> which machine-oriented back-ends (that pretty much *are* Lisp code) then
>> turn into machine-specific code.
>
>This is the way that Allegro CL (and some others) works.  Yet another
>method used by other lisps is to compile Lisp code to C code, and
>to rely on the portable aspects of C so that it acts as sort of a
>portable assembler.  And, finally, one compiler called Eclipse will
>translate Lisp source code to C source with the intention of
>compatibility and intercommunication between Lisp and C.

Stalin does similar things with Scheme, as do (I think) Hobbit and
Gambit-C.

Generating C is a useful approach in that GCC and other such C compilers
are widely available; it's certainly not as interesting an approach as
creating the object code more directly. 

>> Do CL implementations generate some machine-independent representation
>> (somewhat like RTL) which then gets rendered to machine language? 
>
>Yes, but these tend to vary per implementation.  On Allegro CL, you
>can see one stage of the implementation by looking at the last stage
>before assembly; try (setq comp::*hack-compiler-output* '(foo)) and
>then define and compile a function named FOO and see what happens.
>
>> Or do they elide that layer, so that the CL primitives have to be
>> rewritten for each architecture? 
>
>It is a design decision made by each lisp development team. Obviously
>code and concept reuse is a high priority, but that must be tempered
>by the need for native compilation, which precludes complete reuse.

Thanks for the enlightening comments. 
-- 
"Be humble.  A lot happened before you were born."  - Life's Little
Instruction Book
········@ntlug.org- <http://www.hex.net/~cbbrowne/langlisp.html>
From: Duane Rettig
Subject: Re: Name of function
Date: 
Message-ID: <4d7yeifnq.fsf@beta.franz.com>
········@news.hex.net (Christopher Browne) writes:

> On 29 Jun 1999 11:33:26 -0700, Duane Rettig <·····@franz.com> wrote:
> >········@news.hex.net (Christopher Browne) writes:
> >
> >> On Mon, 28 Jun 1999 15:05:18 GMT, Barry Margolin <······@bbnplanet.com>
> >> wrote: 
> >> >However, I think that you will find that in most Lisp implementations, at
> >> >least half the code is written in Lisp.  So if you get the source code to
> >> >any Lisp implementation (source to GCL and CMUCL are both available) you
> >> >should be able to see their implementors' breakdowns.
> >> 
> >> How do they tend to be implemented at the lower level?
> >
> >There are various methods of implementation, both in the compiler and
> >in the functionality that makes up the defined set of CL.
> >
> >Sometimes the compiler defines the implementation.  For example, in
> >Allegro CL, the definition for the function CAR is
> >
> >(defun car (list)
> >  (car list))
> >
> >Obviously, this code won't work unless it is compiled.  The actual
> >implementation of a call to CAR is known by the compiler, and is more
> >complex. Other times the functionality is defined in terms of other,
> >more primitive functions.
> 
> Hmmm...  This is making me think back to FORTH days of "immediate
> words," which may not be far from the mark.

Actually, I would consider immediate words to be more similar to
primcalls, because they jump directly to assembler code, instead of
following the Next loop (which, following the analogy, would be similar
to compiling (defun foo (x) (bar x)) [a more boring aspect of
compilation that I didn't bother to discuss in my previous post]).

> >Other functionality in Allegro CL (what we would call "primcalls")
> >are written in a pseudo-CL code, which is then compiled by the CL
> >compiler with some extensions, and creates assembler code native to
> >the machine.  The source code for this looks something like:
> >
> >(def-runtime-q new-ratio (num den)
> >  (let ((rat (q-allocate-heap-other #md-ratio-type-code #md-ratio-size)))
> >    (setf (ls md-ratio num rat) num)
> >    (setf (ls md-ratio den rat) den)
> >    rat))
> 
> I'm a little bit surprised that CAR *isn't* defined this way, but
> suppose that there may be some forms/functions where it makes sense to
> do code generation differently.  

One design goal in Allegro is as much as possible to concentrate all of
the knowledge of any particular functionality in one location, to
avoid redundancy.  Thus, the intelligence of "how to CAR" is in the
compiler alone, and everywhere that the CAR operation is needed, the
same functionality (modified by speed/safety issues) is generated by the
compiler in the same way, including the function whose name happens to be
CAR (but, perhaps surprisingly, is very seldom called, because the
compiler very seldom generates calls to the CAR function, but instead
generates its own code).

Ironically, CAR is a very bad example for me to have used, since on
x86 architectures the compilation of CAR with high safety actually
does result in a very quick (3 bytes) call to a primitive called
"qcar".  This is because the x86 does not do alignment traps well,
and so testing for non-cons using hardware (like we do on most RISC
architectures) is not possible.  However, if you think of the
"call *[edi+87]" as simply an instruction, instead of a micro-subroutine
call, it becomes similar to RISC ports, where on the Alpha for example
the instruction generated is "ldl r16,-1(r16)" which not only does the
car access but also signals a nice error condition when the object
being accessed is not a cons or nil (an exercise to the reader, or,
for those with long memories, a lookup, as to how this is done).

> For instance, functions tend to use CAR, CDR, and the like together, and
> by keeping that "high level" (e.g.  - the parsing "mouth" handles its
> compilation), it becomes possible to potentially combine of bunch of
> operations together. 

I'm not sure what you mean here by "parsing mouth", but this reminds me
of one thing I didn't think of mentioning in the compilation process,
mostly because I don't usually think of it as part of compilation when
thinking about lisp, and that is the reader.  Lisp languages are unique
(and not all lisp-like languages share this aspect) in that the reader
serves as the preprocessor for all of the language's parsing needs -
compilation and interpretation - and it can even be called by user code.
The compiler doesn't look at text, but at already-read lisp structure
directly.
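
Concretely:

        (read-from-string "(* x x)")          ; the reader, called from user code
        (compile nil '(lambda (x) (* x x)))   ; the compiler consumes list
                                              ; structure, not text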

My paragraph probably has nothing to do with yours, but I'm sure you
will explain what it is you mean.

> Whilst with 'new-ratio', likely used less (I don't recognize it), it
> makes more sense to just plain "toss some pseudo-assembler into the
> pipeline."

As Pierre Mai said, this is just a routine (that I randomly selected
as an example) that puts together a ratio object when it is needed.
You are correct in saying that it is not needed as much, and thus the
intelligence as to how to put together a ratio is best expressed in a
function (a primitive in this case) that is in fact called when needed,
rather than being expanded inline.




> >> My question is more directed towards the "portable + native"
> >> implementations than those that aren't, and I'm *not* talking about
> >> CLISP, as, as commented above, it's 'merely' bytecode compiled. 
> >
> >The term "portable + native" is really an oxymoron, unless
> >you are restricting the definition of "native" to travel between
> >machines of the same architecture.  The byte-compiling of Clisp
> >allows it to be more portable, but not native, because the
> >virtual byte engine is at a higher level than the native
> >architectures of the machines on which it runs.  More native
> >compilers (like Allegro CL) generate object code that is not
> >portable, however the object code it generates is native, giving
> >it more of a chance to match the native speeds of other languages.
> 
> I was possibly not as precise as I should have been; I was meaning
> "portable" in the sense that the version of Lisp is capable of running
> on multiple architectures (thus a scheme that has only ever been
> implemented for IA-32 would be disallowed), as well as being "native" in
> the sense of generating native code for the architecture on which it is
> running. 

So you mean port-able, in the sense of "being easily ported".  This is
what Hrvoje Niksic guessed you meant, and I had taken it to mean
either the dictionary definition (in Websters 9th: "Capable of being
carried or moved about") or in the CommonLisp sense ["portable, adj. (of
code) required to produce equivalent results and observable side effects
in all conforming implementations"], meaning that objects worked on by the
lisp can be moved around, rather than the lisps themselves.

> Thus, CLISP, which handles portability via bytecode, does not qualify. 
> 
> Allegro CL, which can generate object code for multiple architectures,
> surely does. 

I suppose so, but I would caution you not to take the linux version
of Allegro CL and try to run that on a sparc :-)

> Generating C is a useful approach in that GCC and other such C compilers
> are widely available; it's certainly not as interesting an approach as
> creating the object code more directly. 

And it tends not to generate as efficient code, since C is a very poor
assembler for Lisp.

> Thanks for the enlightening comments. 

You're welcome.

> -- 
> "Be humble.  A lot happened before you were born."  - Life's Little
> Instruction Book

I don't know about that; I was born at a very early age.


> ········@ntlug.org- <http://www.hex.net/~cbbrowne/langlisp.html>

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Teemu Kalvas
Subject: Re: Name of function
Date: 
Message-ID: <m0wvwiafzh.fsf@kalahari.s2.org>
Duane Rettig <·····@franz.com> writes:

[snip]
> One design goal in Allegro is as much as possible to concentrate all of
> the knowledge of any particular functionality in one location, to
> avoid redundancy.  Thus, the intelligence of "how to CAR" is in the
> compiler alone, and everywhere that the CAR operation is needed, the
> same functionality (modified by speed/safety issues) is generated by the
> compiler in the same way, including the function whose name happens to be
> CAR (but, perhaps surprisingly, is very seldom called, because the
> compiler very seldom generates calls to the CAR function, but instead
> generates its own code).
> 
> Ironically, CAR is a very bad example for me to have used, since on
> x86 architectures the compilation of CAR with high safety actually
> does result in a very quick (3 bytes) call to a primitive called
> "qcar".  This is because the x86 does not do alignment traps well,
> and so testing for non-cons using hardware (like we do on most RISC
> architectures) is not possible.  However, if you think of the
> "call *[edi+87]" as simply an instruction, instead of a micro-subroutine
> call, it becomes similar to RISC ports, where on the Alpha for example
> the instruction generated is "ldl r16,-1(r16)" which not only does the
> car access but also signals a nice error condition when the object
> being accessed is not a cons or nil (an exercise to the reader, or,
> for those with long memories, a lookup, as to how this is done).

This got me somewhat interested, as I recalled having read about
alignment checks on the i386 architecture a long time ago.  By
default, alignment checking is not enabled; to enable it, you need to
have the AM bit in CR0 set (settable only by systems software), the AC
bit in the EFLAGS register set (settable by any process and maintained
on a per-process basis), and be running at privilege level 3 (which
basically means user mode).  So far this is more or less operating
system independent.

However, the AM bit, which only the operating system can set, is only
set under one operating system I inspected, Linux.  Under WinNT and
{Open,Net,Free}BSD it is unset, so no matter what the user process
does, alignment checks cannot be enabled.  A shame, that, since with
that access checking, CAR could be compiled into a single instruction
even on i386, and presumably several other primitives would be quite
simple in machine code with a similar method.

It is relatively easy to modify the kernels or make a device driver
that sets the AM bit in the control register, but that is not really
good enough: you cannot really go about distributing a Lisp
environment which requires a kernel modification.  Perhaps as a
toggleable option though... :-)

-- 
Teemu Kalvas
From: Duane Rettig
Subject: Re: Name of function
Date: 
Message-ID: <4hfnmplc9.fsf@beta.franz.com>
Teemu Kalvas <·····@s2.org> writes:

> Duane Rettig <·····@franz.com> writes:
> 
> > Ironically, CAR is a very bad example for me to have used, since on
> > x86 architectures the compilation of CAR with high safety actually
> > does result in a very quick (3 bytes) call to a primitive called
> > "qcar".  This is because the x86 does not do alignment traps well,
> > and so testing for non-cons using hardware (like we do on most RISC
> > architectures) is not possible.  However, if you think of the
> > "call *[edi+87]" as simply an instruction, instead of a micro-subroutine
> > call, it becomes similar to RISC ports, where on the Alpha for example
> > the instruction generated is "ldl r16,-1(r16)" which not only does the
> > car access but also signals a nice error condition when the object
> > being accessed is not a cons or nil (an exercise to the reader, or,
> > for those with long memories, a lookup, as to how this is done).
> 
> This got me somewhat interested as I recalled having read about
> alignment checks on the i386 architecture a long time ago.  By
> default, alignment checking is not enabled; to enable it, you need to
> have the AM bit in CR0 set (settable only by systems software), the AC
> bit in the EFLAGS register set (settable by any process and maintained
> on a per-process basis), and be running at privilege level 3 (which
> basically means user mode).  So far this is more or less operating
> system independent.
> 
> However, the AM bit, which only the operating system can set, is only
> set under one operating system I inspected, Linux.  Under WinNT and
> {Open,Net,Free}BSD it is unset, so no matter what the user process
> does, alignment checks cannot be enabled.

Yes, we found that out while doing the Linux and NT ports, although
we had never looked at whether *BSDs had set it, because by the time
we did FreeBSD, we had decided not to bother for several reasons
I'll list below.

>  A shame, that, as with that
> access checking, CAR could be compiled into a single instruction even
> on i386,

I'm not sure if you are using the term "i386" in general to mean "x86",
but if you really mean "386", your statement isn't quite correct;
exception #17 (alignment check) wasn't implemented until the 486, as
were the CR0[AM] bit and the FLAGS[AC] bit, so this kind of compilation
wouldn't have worked until 486s became the base-level requirement for
the OS.  This is true on at least NT; I am not sure about Linux/*BSDs.

>       and presumably several other primitives would be quite simple
> in machine code with a similar method.

Like symbol-{name,value,package...}

> It is relatively easy to modify the kernels or make a device driver
> that sets the AM bit in the control register, but that is not really
> good enough, you cannot really go about distributing a Lisp
> environment which requires a kernel modification.  Perhaps as a
> toggleable option though... :-)

Care needs to be given there, though.  A little story: When we first
started the port to the DEC (at that time) Alpha/NT, we looked into
whether alignment checks could be used on NT, as they were, of course,
on DecUnix (at that time).  They were not; the operating system was set
up by default to "fixup" alignment traps (i.e. paste together the 4 bytes
at that address, even though the bytes come from different words).  There
was a universal flag that could turn off the fixups (i.e. allow the
alignment traps through as Access Violations) and individual programs
could then call a function to turn on the fixups (i.e. suppress traps)
for that program execution.  But this is exactly backwards; what we
wanted was to be able to leave the system state alone, and simply
turn off fixups in our lisp.   Well, I tried toggling the system
parameter to not do fixups, and most of the system worked, except
for WinHelp, which died immediately with an access violation.
So we do the longhand check before each car/cdr access on the Alpha/NT.

Of course, single-load-instruction CAR/CDR access is always possible on
any of our lisps; all you have to do is set the optimization settings
so that comp:verify-car-cdr-switch returns nil (usually at speed=3
and safety = 0 or 1) and you'll get that compilation.  You can even
experiment on Linux with setting the flags word and getting the
alignment trap (which I believe should still work; at least our trap
handlers still have the code to recognize and handle the alignment error
when given it on Linux).
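
That is, roughly:

        (proclaim '(optimize (speed 3) (safety 0)))
        (defun fast-car (x) (car x))
        (compile 'fast-car)
        (disassemble 'fast-car)   ; should show the bare load, no cons check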

This brings me finally to my list of reasons why we decided not to
try to use the alignment trap architecture on any of the x86 ports
of Allegro CL:

 - Even if we were able to get the CR0[AM] flag set on NT, we were
   worried about what that would do to other programs in the system
   [a fear that is borne out in my little story, above]
 - We wanted to try to keep .fasl code compatibility between all
   x86 boxes, which precluded relying on traps occurring. [Ironically,
   due to other NT/win95/win98 issues, the compilations diverged as
   it was, causing fasl files compiled on Linux or FreeBSD not to be
   runnable on Windows, though the other way around works, mostly]
 - Even on Linux, where the CR0[AM] bit is set, the FLAGS[AC] bit would
   have to be controlled (i.e. always set when entering lisp execution,
   and always cleared when exiting lisp execution, so as not to mess up
   C programs that might count on fixups).  This would not necessarily
   be easy, since any random popf instruction would set the whole flags
   word, possibly setting the AC bit to the wrong value.
 - At the time we did the first Linux port (several years ago) 386 was
   the base level of chip that could be run.  I don't know if there are
   any restrictions made by linux at this time; it would be interesting
   to find out if anyone at all runs a linux on a 386 (don't be surprised;
   I still run SunOS4 on a sun3!-)

-- 
Duane Rettig          Franz Inc.            http://www.franz.com/ (www)
1995 University Ave Suite 275  Berkeley, CA 94704
Phone: (510) 548-3600; FAX: (510) 548-8253   ·····@Franz.COM (internet)
From: Pierre R. Mai
Subject: Re: Name of function
Date: 
Message-ID: <87so7afq4p.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@news.hex.net (Christopher Browne) writes:

> >Other functionality in Allegro CL (what we would call "primcalls")
> >are written in a pseudo-CL code, which is then compiled by the CL
> >compiler with some extensions, and creates assembler code native to
> >the machine.  The source code for this looks something like:
> >
> >(def-runtime-q new-ratio (num den)
> >  (let ((rat (q-allocate-heap-other #md-ratio-type-code #md-ratio-size)))
> >    (setf (ls md-ratio num rat) num)
> >    (setf (ls md-ratio den rat) den)
> >    rat))
[...]
> Whilst with 'new-ratio', likely used less (I don't recognize it), it
> makes more sense to just plain "toss some pseudo-assembler into the
> pipeline."

new-ratio is probably used to allocate and initialize a new ratio
(a rational number represented by a pair of numerator and denominator),
a common subprimitive for the implementation of many operations on
numbers.
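
At the Lisp level these are just the familiar ratio objects:

        (/ 1 3)             ; => 1/3
        (numerator 1/3)     ; => 1
        (denominator 1/3)   ; => 3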

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Raymond Toy
Subject: Re: Name of function
Date: 
Message-ID: <4nbtdzjjwx.fsf@rtp.ericsson.se>
>>>>> "Christopher" == Christopher Browne <········@news.hex.net> writes:
    Christopher> Do CL implementations generate some
    Christopher> machine-independent representation (somewhat like
    Christopher> RTL) which then gets rendered to machine language?

I think CMUCL does something like this, but I'm not really sure.
Take a look at the internals guide for CMUCL.  It's incomplete, but
does partially describe how CMUCL works.

    Christopher> Or do they elide that layer, so that the CL
    Christopher> primitives have to be rewritten for each
    Christopher> architecture?

For CMUCL, you have to describe the assembly language/machine code
to the compiler and then implement a large number of "VOPs" (virtual
operations) that implement the "primitives".  Everything else is built
up from these VOPs.

Ray
From: Pierre R. Mai
Subject: CL implementation techniques (was Re: Name of function)
Date: 
Message-ID: <877longlft.fsf@orion.dent.isdn.cs.tu-berlin.de>
········@news.hex.net (Christopher Browne) writes:

> How do they tend to be implemented at the lower level?
> 
> My question is more directed towards the "portable + native"
> implementations than those that aren't, and I'm *not* talking about
> CLISP, as, as commented above, it's 'merely' bytecode compiled. 
> 
> Contrast (with something I understand better): The GCC "family of
> compilers" has a bunch of front-ends that generate the Lisp-like RTL,
> which machine-oriented back-ends (that pretty much *are* Lisp code) then
> turn into machine-specific code.

IIRC, in former times, the front-ends did generate only an intermediate
tree representation, but this seems to have changed (at least with
egcs), with RTL generation now folded into the front-ends (?!?).  Well,
there is still this quote from the info file:

    People frequently have the idea of using RTL stored as text in a
 file as an interface between a language front end and the bulk of GNU
 CC.  This idea is not feasible.

    GNU CC was designed to use RTL internally only.  Correct RTL for a
 given program is very dependent on the particular target machine.  And
 the RTL does not contain all the information about the program.

    The proper way to interface GNU CC to a new language front end is
 with the "tree" data structure.  There is no manual for this data
 structure, but it is described in the files `tree.h' and `tree.def'.

> Do CL implementations generate some machine-independent representation
> (somewhat like RTL) which then gets rendered to machine language? 

RTL is not machine-independent in the way that e.g. the JVM code is.
The RTL provides a uniform representation for the intermediate code,
but the RTL generated for different backends differs, because the
machine-description (i.e. the RTL meta-data so to speak) differs.
This offers the compiler much better opportunities to select the best
low-level operations to use, since it can do it much earlier, when
there is still much high-level information around to aid the
decission.  Contrast this with the JVM, where currently legions of
PhDs are being gained by finding ways of reconstructing high-level
information from JVM code, to help the poor JIT compiler in compiling
JVM code to native code efficiently.

So most ("modern", high-quality) compilers I know of avoid using a
truly machine-independent low-level intermediate representation,
except for special cases.

> Or do they elide that layer, so that the CL primitives have to be
> rewritten for each architecture? 

As to the general structure of CL compilers: I think that CMU CL is in
some ways fairly typical of a portable native compiler, and there is
at least some documentation as to its structure, so that's where I'd
start looking.  Here is a short excerpt of the compiler-overview.tex
file that comes with the source code (bear in mind that many details
and choices of representations are CMU CL specific, as is the heavy
emphasis on type analysis and type checking):

---------------------------------------------------------------------

\chapter{Compiler Overview} % -*- Dictionary: design -*-

The structure of the compiler may be broadly characterized by describing the
compilation phases and the data structures that they manipulate.  The steps in
the compilation are called phases rather than passes since they don't
necessarily involve a full pass over the code.  The data structure used to
represent the code at some point is called an {\it intermediate
representation.}

Two major intermediate representations are used in the compiler:
\begin{itemize}

\item The Implicit Continuation Representation (ICR) represents the lisp-level
semantics of the source code during the initial phases.  Partial evaluation and
semantic analysis are done on this representation.  ICR is roughly equivalent
to a subset of Common Lisp, but is represented as a flow-graph rather than a
syntax tree.  Phases which only manipulate ICR comprise the "front end".  It
would be possible to use a different back end such as one that directly
generated code for a stack machine.

\item The Virtual Machine Representation (VMR) represents the implementation of
the source code on a virtual machine.  The virtual machine may vary depending
on the target hardware, but VMR is sufficiently stylized that most of the
phases which manipulate it are portable.
\end{itemize}

Each phase is briefly described here.  The phases from ``local call analysis''
to ``constraint propagation'' all interact; for maximum optimization, they
are generally repeated until nothing new is discovered.  The source files which
primarily contain each phase are listed after ``Files: ''.
\begin{description}

\item[ICR conversion]
Convert the source into ICR, doing macroexpansion and simple source-to-source
transformation.  All names are resolved at this time, so we don't have to worry
about name conflicts later on.  Files: {\tt ir1tran, srctran, typetran}

\item[Local call analysis] Find calls to local functions and convert them to
local calls to the correct entry point, doing keyword parsing, etc.  Recognize
once-called functions as lets.  Create {\it external entry points} for
entry-point functions.  Files: {\tt locall}

\item[Find components]
Find flow graph components and compute depth-first ordering.  Separate
top-level code from run-time code, and determine which components are top-level
components.  Files: {\tt dfo}

\item[ICR optimize] A grab-bag of all the non-flow ICR optimizations.  Fold
constant functions, propagate types and eliminate code that computes unused
values.  Special-case calls to some known global functions by replacing them
with a computed function.  Merge blocks and eliminate IF-IFs.  Substitute let
variables.  Files: {\tt ir1opt, ir1tran, typetran, seqtran, vm/vm-tran}

\item[Type constraint propagation]
Use global flow analysis to propagate information about lexical variable
types.   Eliminate unnecessary type checks and tests.  Files: {\tt constraint}

\item[Type check generation]
Emit explicit ICR code for any necessary type checks that are too complex to be
easily generated on the fly by the back end.  Files: {\tt checkgen}

\item[Event driven operations]
Various parts of ICR are incrementally recomputed, either eagerly on
modification of the ICR, or lazily, when the relevant information is needed.
\begin{itemize}
\item Check that type assertions are satisfied, marking places where type
checks need to be done.

\item Locate let calls.

\item Delete functions and variables with no references
\end{itemize}
Files: {\tt ir1util}, {\tt ir1opt}

\item[ICR finalize]
This phase is run after all components have been compiled.  It scans the
global variable references, looking for references to undefined variables
and incompatible function redefinitions.  Files: {\tt ir1final}, {\tt main}.

\item[Environment analysis]
Determine which distinct environments need to be allocated, and what
context needed to be closed over by each environment.  We detect non-local
exits and set closure variables.  We also emit cleanup code as funny
function calls.  This is the last pure ICR pass.  Files: {\tt envanal}

\item[Global TN allocation (GTN)]
Iterate over all defined functions, determining calling conventions
and assigning TNs to local variables.  Files: {\tt gtn}

\item[Local TN allocation (LTN)]
Use type and policy information to determine which VMR translation to use
for known functions, and then create TNs for expression evaluation
temporaries.  We also accumulate some random information needed by VMR
conversion.  Files: {\tt ltn}

\item[Control analysis]
Linearize the flow graph in a way that minimizes the number of branches.  The
block-level structure of the flow graph is basically frozen at this point.
Files: {\tt control}

\item[Stack analysis]
Maintain stack discipline for unknown-values continuation in the presence
of local exits.  Files: {\tt stack}

\item[Entry analysis]
Collect some back-end information for each externally callable function.

\item[VMR conversion] Convert ICR into VMR by translating nodes into VOPs.
Emit type checks.  Files: {\tt ir2tran, vmdef}

\item[Copy propagation] Use flow analysis to eliminate unnecessary copying of
TN values.  Files: {\tt copyprop}

\item[Representation selection]
Look at all references to each TN to determine which representation has the
lowest cost.  Emit appropriate move and coerce VOPS for that representation.

\item[Lifetime analysis]
Do flow analysis to find the set of TNs whose lifetimes 
overlap with the lifetimes of each TN being packed.  Annotate call VOPs with
the TNs that need to be saved.  Files: {\tt life}

\item[Pack]
Find a legal register allocation, attempting to minimize unnecessary moves.
Files: {\tt pack}

\item[Code generation]
Call the VOP generators to emit assembly code.  Files: {\tt codegen}

\item[Pipeline reorganization] On some machines, move memory references
backward in the code so that they can overlap with computation.  On machines
with delayed branch instructions, locate instructions that can be moved into
delay slots.  Files: {\tt assem-opt}

\item[Assembly]
Resolve branches and convert into object code and fixup information.
Files: {\tt assembler}

\item[Dumping] Convert the compiled code into an object file or in-core
function.  Files: {\tt debug-dump}, {\tt dump}, {\tt vm/core}

\end{description}

---------------------------------------------------------------------

Another short excerpt wrt special forms:

---------------------------------------------------------------------

\chapter{The Implicit Continuation Representation}

The set of special forms recognized is exactly that specified in the Common
Lisp manual.  Everything that is described as a macro in CLTL is a macro.

Large amounts of syntactic information are thrown away by the conversion to an
anonymous flow graph representation.  The elimination of names eliminates the
need to represent most environment manipulation special forms.  The explicit
representation of control eliminates the need to represent BLOCK and GO, and
makes flow analysis easy.  The full Common Lisp LAMBDA is implemented with a
simple fixed-arg lambda, which greatly simplifies later code.
      
The elimination of syntactic information eliminates the need for most of the
"beta transformation" optimizations in Rabbit.  There are no progns, no
tagbodys and no returns.  There are no "close parens" which get in the way of
determining which node receives a given value.

---------------------------------------------------------------------

For more interesting documentation see the sources for CMU CL,
available at your local Debian site, or at http://www.cons.org/cmucl/

Regs, Pierre.

-- 
Pierre Mai <····@acm.org>         PGP and GPG keys at your nearest Keyserver
  "One smaller motivation which, in part, stems from altruism is Microsoft-
   bashing." [Microsoft memo, see http://www.opensource.org/halloween1.html]
From: Barry Margolin
Subject: Re: Name of function
Date: 
Message-ID: <VR5e3.1008$KM3.238757@burlma1-snr2>
In article <·······················@news2.giganews.com>,
Christopher Browne <········@hex.net> wrote:
>Do CL implementations generate some machine-independent representation
>(somewhat like RTL) which then gets rendered to machine language? 

Others have answered this question about some compilers.

There's also a class of CL compilers that generate C code as their
"machine-independent representation", and then use the system's C compiler
as their back-end.  GCL (formerly KCL) is an example of this.

-- 
Barry Margolin, ······@bbnplanet.com
GTE Internetworking, Powered by BBN, Burlington, MA
*** DON'T SEND TECHNICAL QUESTIONS DIRECTLY TO ME, post them to newsgroups.
Please DON'T copy followups to me -- I'll assume it wasn't posted to the group.